Review of BLM: The Making of a New Marxist Revolution

In the first chapter of BLM: The Making of a New Marxist Revolution, titled The Founding v. Slavery, author Mike Gonzalez presents a summary refutation of several important claims linked to the 1619 Project. Citing numerous subject-area experts and primary sources, this rebuttal shows how the Project’s overarching argument, later disseminated to schools along with lesson plans, was a weapon of indoctrination rather than a historical work meant to instruct honestly about the past. This contrast between commentary and history sets the tone for subsequent chapters, highlighting how the left often shapes the former into a weapon while the latter serves as an aegis with which to defend society. While Gonzalez does overlook several important elements of BLM, the book is a truly masterful accounting of how much of the rhetoric used as a cudgel by BLM activists is a variation on what Soviet-inspired Communist activists in the U.S. said before them.

The second chapter, The Soviets’ Failed Infiltration, details the period shortly after the 1917 Revolution in Russia. During this cultural renaissance for Blacks in Harlem, Communists made efforts to claim themselves the true representatives of Black political interests. Communists and fellow travelers defined themselves in opposition to other popular, capitalist movements, such as Marcus Garvey’s Universal Negro Improvement Association, and yet they repeatedly sought to infiltrate and gain control over that organization and others like it. This strategy, along with many of the policy positions these groups took – such as placing the Southern portion of the United States under a separate government called New Africa – often originated from deliberative bodies in Moscow.

As Harold Cruse details at greater length in The Crisis of the Negro Intellectual, these tactics and policies were unable to mobilize many rank-and-file workers or activists. Outside of a few committed bodies of radical cadres that better-established Black civic organizations shunned, they merely had the effect of, to paraphrase Cruse, aesthetically and intellectually castrating many potentially brilliant minds. Gonzalez continues by highlighting the close relationship between organizations such as the Black Liberation Army and the International Labor Defense, a Communist Party front organization, and Cuba, which was actively collaborating with the Soviet Union. The goal of these groups was to foment racial strife and dissension in ways that would benefit both the domestic and the international Communist parties. From a regional perspective, Hammer and Hoe: Alabama Communists During the Great Depression describes this politics in more detail, including how, leading up to and during the Second World War, the Soviet Union discouraged such agitation so that the Communists would be recognized as “supporters” of the fight against Nazi Germany.

The third chapter, Then The 1960s Happened, covers the period after the War and the end of the Popular Front. It shows a political return to anti-racist discourse, along with a new focus on anti-sexism and anti-imperialism, both understood to relate to the experience of Blacks in America. Stokely Carmichael, former leader of SNCC and a pan-African communist, is invited to Cuba after promoting this view in London and later tells audiences in Havana that they are preparing urban guerrilla forces.

Highlighting the intellectual linkages between these events of the 1960s and the present, Gonzalez cites the SNCC letter that functioned to pass the torch to BLM and notes how: “Opal Tometi, known for touring Caracas and praising Nicolas Maduro’s dictatorship in exchange for his support, is hardly the first radical black leader to tour the Caribbean in search of like-minded dictators” (Gonzalez 63). After highlighting how the Cuban Revolution served as a model for the Weather Underground leftist terrorist group, how the Cuban government provided assistance and sanctuary to Black Liberation Army members wanted for major crimes, and how members of these organizations have since turned from armed conflict to subversion, the reader enters the near-present.

At 25 pages, Chapter 4, titled BLM, is the book’s shortest chapter. It does, however, highlight how BLM is part of a decentralized transnational network of activists who seek to develop revolutionary conditions within the U.S. through policy initiatives, organized conflict, and positive media coverage. Gonzalez cites Armed Conflict Location and Event Data (ACLED) and Bridging Divides Initiative (BDI) research showing that BLM was involved in 95% of the then 633 incidents recently coded as riots in the U.S. and naming the movement as one of the main factors in the “heightened risk of political violence and instability going into the 2020 election.” Brief biographies are given of the founders and a few leaders of BLM, along with their religious upbringing and family history. Gonzalez describes how Opal Tometi, who comes from a Liberation Theology background, wrote “something akin to a manifesto” titled “Black North American Solidarity Statement with the Venezuelan People” (Gonzalez 85). Several other examples of BLM’s linkages to the international Communist movement are also shared.

This section is where my main criticism of the book emerged: there is no mention of the fact that these individuals and many of the groups Gonzalez cites (e.g., Causa Justa, FRSO, PUEBLO) participated in the United States Social Forum. It is one thing to say that these are people for whom “Maduro is a model to follow in the United States” (Gonzalez 94). It is another thing entirely to use intelligence processes and products to show how these people participated in organizational and strategic knowledge-transfer events that were first ideated in Caracas at the World Social Forum and that multiple Venezuelan government officials attended.

This is important because it would enable verification and expansion of significant conjectures made by Gonzalez, such as his claim that “given the great assortment of small and large Marxist associations that the three would call on [to promote BLM], we can quickly figure out how the hashtagged message was amplified and by whom” (Gonzalez 95). Given Venezuela’s sympathetic view of BLM, its alliance with China and Russia, and the fact that all three states run social media operations to influence Twitter, three governments antagonistic to the U.S. had the means, motive, and opportunity to support BLM’s online operations. Likewise, all the other groups that took part in the Social Forum had the means, motive, and opportunity to claim the BLM flag as their own. Both factors can be used to explain BLM’s “virality.” Despite this criticism, further relevant details of the official BLM network and its affiliates’ connections to the pan-Africanist movement are described, which serves to transition to a discussion of the money involved.

Chapter 5, titled Follow The Money, illustrates how the fiscal sponsors of the movement have ties to long-established foundations and financial support networks led by people with past or present associations with the Communist regimes in Beijing, Caracas, Havana, and Managua, as well as to older liberal organizations that have seemingly been captured. The sums of money cited as having been distributed are at times shocking. There are, however, no charts showing this, and the methodology is not made clear, meaning there could be gaps between what is claimed and what was actually raised. This lack of charts and network maps, a systemic problem in the subject-area literature on leftist groups in the U.S., weakens the intuitively correct claims about how these networks are classifiable as fourth-generation warfare actors. It also helps explain why criticism of BLM is more difficult than criticism of their primarily white allies, Antifa.

Chapter 6, How Antifa Became the Safe Space, highlights several issues, such as elected politicians running interference in criminal investigations and district attorneys refusing to prosecute political cases. The examples given show that the Network Contagion Research Institute’s claim that “The need for regular, reliable and responsible reporting with methods such as those used in this briefing with similar computational techniques is now imperative” is perhaps an understatement (Gonzalez 139). After all, without a comprehensive account of what is going on at the national level, journalistic accounts remain “local” stories that cannot provide a full picture of how these financial and political support networks are able to impact society. Antifa, because it lacks a clear organizational leadership structure and is unable to mobilize politically through methods traditional to representative democracy, remains the safe target for criticism.

Chapter 7, Schooling the Revolution, is very insightful in showing how activists have been able to incorporate radical communist and race-essentialist perspectives into instructional material. Through networks affiliated with the Zinn Education Project and the Black Lives Matter at School National Steering Committee, teachers are forced to go through training sessions akin to the Red Guard struggle sessions of Mao’s Cultural Revolution, and the curriculum shifts from subject-area knowledge to the creation of “proper” political views. Gonzalez highlights “Former Weatherman Bill Ayers’s stomach-churning praise for Hugo Chavez’s communist indoctrination of Venezuelan children at a 2006 meeting in Caracas” and notes how the promotion of sexual libertinism to children matches the work of Georg Lukacs, the former Educational and Cultural Commissioner of Soviet Hungary, but unfortunately he does not unpack this further to show how many of the policies he cites are verbatim those implemented in Venezuela (Gonzalez 160).

On the whole, the book is very insightful in presenting a picture of the actual strategies, tactics, techniques, and aims of the Black Lives Matter movement. Its account of the organization’s scope is not holistic, nor is the extent of its network of affiliates and efforts fully mapped. However, as an advanced account of the organization and its affiliated activity, it is a worthy contribution to the literature.

Notes from Knowledge Management in the Intelligence Enterprise

Knowledge Management in the Intelligence Enterprise

This book is about the application of knowledge management (KM) principles to the practice of intelligence to fulfill those consumers’ expectations.

Unfortunately, too many have reduced intelligence to a simple metaphor of “connecting the dots.” This process, it seems, appears all too simple after the fact—once you have seen the picture and you can ignore irrelevant, contradictory, and missing dots. Real-world intelligence is not a puzzle of connecting dots; it is the hard daily work of planning operations, focusing the collection of data, and then processing the collected data for deep analysis to produce a flow of knowledge for dissemination to a wide range of consumers.

this book… is an outgrowth of a 2-day military KM seminar that I teach in the United States to describe the methods to integrate people, processes, and technologies into knowledge-creating enterprises.

The book progresses from an introduction to KM applied to intelligence (Chapters 1 and 2) to the principles and processes of KM (Chapter 3). The characteristics of collaborative knowledge-based intelligence organizations are described (Chapter 4) before detailing its principal craft of analysis and synthesis (Chapter 5 introduces the principles and Chapter 6 illustrates the practice). The wide range of technology tools to support analytic thinking and allow analysts to interact with information is explained (Chapter 7) before describing the automated tools that perform all-source fusion and mining (Chapter 8). The organizational, systems, and technology concepts throughout the book are brought together in a representative intelligence enterprise (Chapter 9) to illustrate the process of architecture design for a small intelligence cell. An overview of core, enabling, and emerging KM technologies in this area is provided in conclusion (Chapter 10).

Knowledge Management and Intelligence

This is a book about the management of knowledge to produce and deliver a special kind of knowledge: intelligence—that knowledge that is deemed most critical for decision making both in the nation-state and in business.

  • Knowledge management refers to the organizational disciplines, processes, and information technologies used to acquire, create, reveal, and deliver knowledge that allows an enterprise to accomplish its mission (achieve its strategic or business objectives). The components of knowledge management are the people, their operations (practices and processes), and the information technology (IT) that move and transform data, information, and knowledge. All three of these components make up the entity we call the enterprise.
  • Intelligence refers to a special kind of knowledge necessary to accomplish a mission—the kind of strategic knowledge that reveals critical threats and opportunities that may jeopardize or assure mission accomplishment. Intelligence often reveals hidden secrets or conveys a deep understanding that is covered by complexity, deliberate denial, or outright deception. The intelligence process has been described as the process of the discovery of secrets by secret means. In business and in national security, secrecy is a process of protection for one party; discovery of the secret is the object of competition or security for the competitor or adversary… While a range of definitions of intelligence exist, perhaps the most succinct is that offered by the U.S. Central Intelligence Agency (CIA): “Reduced to its simplest terms, intelligence is knowledge and foreknowledge of the world around us—the prelude to decision and action by U.S. policymakers.”
  • The intelligence enterprise encompasses the integrated entity of people, processes, and technologies that collects and analyzes intelligence data to synthesize intelligence products for decision-making consumers.

intelligence (whether national or business) has always involved the management (acquisition, analysis, synthesis, and delivery) of knowledge.

At least three driving factors continue to make this increasing need for automation necessary. These factors include:

  • Breadth of data to be considered.
  • Depth of knowledge to be understood.
  • Speed required for decision making.

Throughout this book, we distinguish between three levels of abstraction of knowledge, each of which may be referred to as intelligence in forms that range from unprocessed reporting to finished intelligence products

  1. Data. Individual observations, measurements, and primitive messages form the lowest level. Human communication, text messages, electronic queries, or scientific instruments that sense phenomena are the major sources of data. The terms raw intelligence and evidence (data that is determined to be relevant) are frequently used to refer to elements of data.
  2. Information. Organized sets of data are referred to as information. The organization process may include sorting, classifying, or indexing and linking data to place data elements in relational context for subsequent searching and analysis.
  3. Knowledge. Information, once analyzed, understood, and explained, is knowledge or foreknowledge (predictions or forecasts). In the context of this book, this level of understanding is referred to as the intelligence product. Understanding of information provides a degree of comprehension of both the static and dynamic relationships of the objects of data and the ability to model structure and past (and future) behavior of those objects. Knowledge includes both static content and dynamic processes.

These abstractions are often organized in a cognitive hierarchy, which includes a level above knowledge: human wisdom.

In this text, we consider wisdom to be a uniquely human cognitive capability—the ability to correctly apply knowledge to achieve an objective. This book describes the use of IT to support the creation of knowledge but considers wisdom to be a human capacity out of the realm of automation and computation.
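To make the data → information → knowledge progression concrete, here is a minimal Python sketch (my own illustration, not from the book); the class and field names are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class AbstractionLevel(Enum):
    """The three levels of abstraction the book distinguishes; wisdom is deliberately excluded."""
    DATA = 1          # individual observations, measurements, primitive messages
    INFORMATION = 2   # data that has been sorted, indexed, and linked into relational context
    KNOWLEDGE = 3     # information analyzed, understood, and explained (including foreknowledge)


@dataclass
class IntelligenceItem:
    """A single item as it moves up the hierarchy (illustrative structure only)."""
    content: str
    level: AbstractionLevel = AbstractionLevel.DATA
    links: List[str] = field(default_factory=list)  # relational context added at the information level
    explanation: str = ""                           # analytic judgment added at the knowledge level


# Example: a raw observation is organized (information) and then explained (knowledge).
item = IntelligenceItem(content="Convoy observed near the eastern depot")
item.level, item.links = AbstractionLevel.INFORMATION, ["report-0142", "image-7731"]
item.level, item.explanation = AbstractionLevel.KNOWLEDGE, "The unit is repositioning toward the border."
```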

1.1 Knowledge in a Changing World

This strategic knowledge we call intelligence has long been recognized as a precious and critical commodity for national leaders.

the Hebrew leader Moses commissioned and documented an intelligence operation to explore the foreign land of Canaan. That classic account clearly describes the phases of the intelligence cycle, which proceeds from definition of the requirement for knowledge through planning, tasking, collection, and analysis to the dissemination of that knowledge. He first detailed the intelligence requirements by describing the eight essential elements of information to be collected, and he described the plan to covertly enter and reconnoiter the denied area

requirements articulation, planning, collection, analysis-synthesis, and dissemination

The U.S. defense community has developed a network-centric approach to intelligence and warfare that utilizes the power of networked information to enhance the speed of command and the efficiency of operations. Sensors are linked to shooters, commanders efficiently coordinate agile forces, and engagements are based on prediction and preemption. The keys to achieving information superiority in this network-centric model are network breadth (or connectivity) and bandwidth; the key technology is information networking.

The ability to win will depend upon the ability to select and convert raw data into accurate decision-making knowledge. Intelligence superiority will be defined by the ability to make decisions most quickly and effectively—with the same information available to virtually all parties. The key enabling technology in the next century will become processing and cognitive power to rapidly and accurately convert data into comprehensive explanations of reality—sufficient to make rapid and complex decisions.

Consider several of the key premises about the significance of knowledge in this information age that are bringing the importance of intelligence to the forefront. First, knowledge has become the central resource for competitive advantage, displacing raw materials, natural resources, capital, and labor. This resource is central to both wealth creation and warfare waging. Second, the management of this abstract resource is quite complex; it is more difficult (than material resources) to value and audit, more difficult to create and exchange, and much more difficult to protect. Third, the processes for producing knowledge from raw data are as diverse as the manufacturing processes for physical materials, yet are implemented in the same virtual manufacturing plant—the computer. Because of these factors, the management of knowledge to produce strategic intelligence has become a necessary and critical function within nation-states and business enterprises—requiring changes in culture, processes, and infrastructure to compete.

with rapidly emerging information technologies, the complexities of globalization and diverse national interests (and threats), businesses and militaries must both adopt radically new and innovative agendas to enable continuous change in their entire operating concept. Innovation and agility are the watchwords for organizations that will remain competitive in Hamel’s age of nonlinear revolution.

Business concept innovation will be the defining competitive advantage in the age of revolution. Business concept innovation is the capacity to reconceive existing business models in ways that create new value for customers, rude surprises for competitors, and new wealth for investors. Business concept innovation is the only way for newcomers to succeed in the face of enormous resource disadvantages, and the only way for incumbents to renew their lease on success

 

A functional taxonomy based on the type of analysis and the temporal distinction of knowledge and foreknowledge (warning, prediction, and forecast) distinguishes two primary categories of analysis and five subcategories of intelligence products

Descriptive analyses provide little or no evaluation or interpretation of collected data; rather, they enumerate collected data in a fashion that organizes and structures the data so the consumer can perform subsequent interpretation.

Inferential analyses require the analysis of collected relevant data sets (evidence) to infer and synthesize explanations that describe the meaning of the underlying data. We can distinguish four different focuses of inferential analysis:

  1. Analyses that explain past events (How did this happen? Who did it?);
  2. Analyses that explain current structures (What is the organization? What is the order of battle?);
  3. Analyses that explain current behaviors and states (What is the competitor’s research and development process? What is the status of development?);
  4. Foreknowledge analyses that forecast future attributes and states (What is the expected population and gross national product growth over the next 5 years? When will force strength exceed that of a country’s neighbors? When will a competitor release a new product?).

1.3 The Intelligence Disciplines and Applications

While the taxonomy of intelligence products by analytic methods is fundamental, the more common distinctions of intelligence are by discipline or consumer.

The KM processes and information technologies used in all cases are identical (some say, “bits are bits,” implying that all digital data at the bit level is identical), but the content and mission objectives of these four intelligence disciplines are unique and distinct.

Nation-state security interests deal with sovereignty; ideological, political, and economic stability; and threats to those areas of national interest. Intelligence serves national leadership and military needs by providing strategic policymaking knowledge, warnings of foreign threats to national security interests (economic, military, or political) and tactical knowledge to support day-to-day operations and crisis responses. Nation-state intelligence also serves a public function by collecting and consolidating open sources of foreign information for analysis and publication by the government on topics of foreign relations, trade, treaties, economies, humanitarian efforts, environmental concerns, and other foreign and global interests to the public and businesses at large.

Similar to the threat-warning intelligence function to the nation-state, business intelligence is chartered with the critical task of foreseeing and alerting management of marketplace discontinuities. The consumers of business intelligence range from corporate leadership to employees who access supply-chain data, and even to customers who access information to support purchase decisions.

A European Parliament study has enumerated concern over the potential for national intelligence sources to be used for nation-state economic advantages by providing competitive intelligence directly to national business interests. The United States has acknowledged a policy of applying national intelligence to protect U.S. business interests from fraud and illegal activities, but not for the purposes of providing competitive advantage

1.3.1 National and Military Intelligence

National intelligence refers to the strategic knowledge obtained for the leadership of nation-states to maintain national security. National intelligence is focused on national security—providing strategic warning of imminent threats, knowledge on the broad spectrum of threats to national interests, and foreknowledge regarding future threats that may emerge as technologies, economies, and the global environment change.

The term intelligence refers to both a process and its product.

The U.S. Department of Defense (DoD) provides the following product definitions that are rich in description of the processes involved in producing the product:

  1. The product resulting from the collection, processing, integration, analysis, evaluation, and interpretation of available information concerning foreign countries or areas;
  2. Information and knowledge about an adversary obtained through observation, investigation, analysis, or understanding.

Michael Herman accurately emphasizes the essential components of the intelligence process: “The Western intelligence system is two things. It is partly the collection of information by special means; and partly the subsequent study of particular subjects, using all available information from all sources. The two activities form a sequential process.”

Martin Libicki has provided a practical definition of information dominance, and the role of intelligence coupled with command and control and information warfare:

Information dominance may be defined as superiority in the generation, manipulation, and use of information sufficient to afford its possessors military dominance. It has three sources:

  • Command and control that permits everyone to know where they (and their cohorts) are in the battlespace, and enables them to execute operations when and as quickly as necessary.
  • Intelligence that ranges from knowing the enemy’s dispositions to knowing the location of enemy assets in real-time with sufficient precision for a one-shot kill.
  • Information warfare that confounds enemy information systems at various points (sensors, communications, processing, and command), while protecting one’s own.

 

The superiority is achieved by gaining superior intelligence and protecting information assets while fiercely degrading the enemy’s information assets. The goal of such superiority is not the attrition of physical military assets or troops—it is the attrition of the quality, speed, and utility of the adversary’s decision-making ability.

“A knowledge environment is an organization’s (business) environment that enhances its capability to deliver on its mission (competitive advantage) by enabling it to build and leverage its intellectual capital.”

1.3.2 Business and Competitive Intelligence

The focus of business intelligence is on understanding all aspects of a business enterprise: internal operations and the external environment, which includes customers and competitors (the marketplace), partners, and suppliers. The external environment also includes independent variables that can impact the business, depending on the business (e.g., technology, the weather, government policy actions, financial markets). All of these are the objects of business intelligence in the broadest definition. But the term business intelligence is also used in a narrower sense to focus on only the internals of the business, while the term competitor intelligence refers to those aspects of intelligence that focus on the externals that influence competitiveness: competitors.

Each of the components of business intelligence has distinct areas of focus and uses in maintaining the efficiency, agility, and security of the business; all are required to provide active strategic direction to the business. In large companies with active business intelligence operations, all three components are essential parts of the strategic planning process, and all contribute to strategic decision making.

1.4 The Intelligence Enterprise

The intelligence enterprise includes the collection of people, knowledge (both internal tacit and explicitly codified), infrastructure, and information processes that deliver critical knowledge (intelligence) to the consumers. This enables them to make accurate, timely, and wise decisions to accomplish the mission of the enterprise.

This definition describes the enterprise as a process—devoted to achieving an objective for its stakeholders and users. The enterprise process includes the production, buying, selling, exchange, and promotion of an item, substance, service, or system.

the DoD three-view architecture description, which defines three interrelated perspectives or architectural descriptions that define the operational, system, and technical aspects of an enterprise [29]. The operational architecture is a people- or organization-oriented description of the operational elements, intelligence business processes, assigned tasks, and information and work flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and the tasks that are supported by these information exchanges. The systems architecture is a description of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users, and specifies system and component performance parameters. The technical architecture is the minimal set of rules (i.e., standards, protocols, interfaces, and services) governing the arrangement, interaction, and interdependence of the elements of the system.

 

These three views of the enterprise (Figure 1.4) describe three layers of people-oriented operations, system structure, and procedures (protocols) that must be defined in order to implement an intelligence enterprise.

The operational layer is the highest (most abstract) description of the concept of operations (CONOPS), human collaboration, and disciplines of the knowledge organization. The technical architecture layer describes the most detailed perspective, noting specific technical components and their operations, protocols, and technologies.

The intelligence supply chain that describes the flow of data into knowledge to create consumer value is measured by the value it provides to intelligence consumers. Measures of human intellectual capital and organizational knowledge describe the intrinsic value of the organization.

1.5 The State of the Art and the State of the Intelligence Tradecraft

The subject of intelligence analysis remained largely classified through the 1980s, but the 1990s brought the end of the Cold War and, thus, open publication of the fundamental operations of intelligence and the analytic methods employed by businesses and nation-states. In that same period, the rise of commercial information sources and systems produced the new disciplines of open source intelligence (OSINT) and business/competitor intelligence. In each of these areas, a wealth of resources is available for tracking the rapidly changing technology state of the art as well as the state of the intelligence tradecraft.

1.5.1 National and Military Intelligence

Numerous sources of information provide management, legal, and technical insight for national and military intelligence professionals with interests in analysis and KM

These sources include:

  • Studies in Intelligence—Published by the U.S. CIA Center for the Study of Intelligence and the Sherman Kent School of Intelligence, unclassified versions are published on the school’s Web site (http://odci.gov.csi), along with periodically issued monographs on technical topics related to intelligence analysis and tradecraft.
  • International Journal of Intelligence and Counterintelligence—This quarterly journal covers the breadth of intelligence interests within law enforcement, business, nation-state policymaking, and foreign affairs.
  • Intelligence and National Security—A quarterly international journal published by Frank Cass & Co. Ltd., London, this journal covers broad intelligence topics ranging from policy, operations, users, analysis, and products to historical accounts and analyses.
  • Defense Intelligence Journal—This is a quarterly journal published by the U.S. Defense Intelligence Agency’s Joint Military Intelligence College.
  • American Intelligence Journal—Published by the National Military Intelligence Association (NMIA), this journal covers operational, organizational, and technical topics of interest to national and military intelligence officers.
  • Military Intelligence Professional Bulletin—This is a quarterly bulletin of the U.S. Army Intelligence Center (Ft. Huachuca) that is available on-line and provides information to military intelligence officers on studies of past events, operations, processes, military systems, and emerging research and development.
  • Jane’s Intelligence Review—This monthly magazine provides open source analyses of international military organizations, NGOs that threaten or wage war, conflicts, and security issues.

1.5.2 Business and Competitive Intelligence

Several sources focus on the specific areas of business and competitive intelligence with attention to the management, ethical, and technical aspects of collection, analysis, and valuation of products.

  • Competitive Intelligence Magazine—This is a CI source for general applications-related articles on CI, published bimonthly by John Wiley & Sons with the Society of Competitive Intelligence Professionals (SCIP).
  • Competitive Intelligence Review—This quarterly journal, also published by John Wiley with the SCIP, contains best-practice case studies as well as technical and research articles.
  • Management International Review—This is a quarterly refereed journal that covers the advancement and dissemination of international applied research in the fields of management and business. It is published by Gabler Verlag, Germany, and is available on-line.
  • Journal of Strategy and Business—This quarterly journal, published by Booz Allen and Hamilton, focuses on strategic business issues, including regular emphasis on both CI and KM topics in business articles.

1.5.3 KM

The developments in the field of KM are covered by a wide range of business, information science, organizational theory, and dedicated KM sources that provide information on this diverse and fast-growing area.

  • CIO Magazine—This monthly trade magazine for chief information officers and staff includes articles on KM, best practices, and related leadership topics.
  • Harvard Business Review, Sloan Management Review—These management journals cover organizational leadership, strategy, learning and change, and the application of supporting ITs.
  • Journal of Knowledge Management—This is a quarterly academic journal of strategies, tools, techniques, and technologies published by Emerald (UK). In addition, Emerald also publishes quarterly The Learning Organization—An International Journal.
  • IEEE Transactions on Knowledge and Data Engineering—This is an archival journal published bimonthly to inform researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.
  • Knowledge and Process Management—A John Wiley (UK) journal for executives responsible for leading performance improvement and contributing thought leadership in business. Emphasis areas include KM, organizational learning, core competencies, and process management.
  • American Productivity and Quality Center (APQC)—The APQC is a nonprofit organization that provides the tools, information, expertise, and support needed to discover and implement best practices in KM. Its mission is to discover, research, and understand emerging and effective methods of both individual and organizational improvement, to broadly disseminate these findings, and to connect individuals with one another and with the knowledge, resources, and tools they need to successfully manage improvement and change. They maintain an on-line site at www.apqc.org.
  • Data Mining and Knowledge Discovery—This Kluwer (Netherlands) journal provides technical articles on the theory, techniques, and practice of knowledge extraction from large databases.

1.6 The Organization of This Book

This book is structured to introduce the unique role, requirements, and stakeholders of intelligence (the applications) before introducing the KM processes, technologies, and implementations.

2
The Intelligence Enterprise

Intelligence, the strategic information and knowledge about an adversary and an operational environment obtained through observation, investigation, analysis, or understanding, is the product of an enterprise operation that integrates people and processes in an organizational and networked computing environment.

The intelligence enterprise exists to produce intelligence goods and services—knowledge and foreknowledge to decision- and policy-making customers. This enterprise is a production organization whose prominent infrastructure is an information supply chain. As in any business, it has a “front office” to manage its relations with customers, with the information supply chain in the “back office.”

The intellectual capital of this enterprise includes sources, methods, workforce competencies, and the intelligence goods and services produced. As in virtually no other business, the protection of this capital is paramount, and therefore security is integrated into every aspect of the enterprise.

2.1 The Stakeholders of Nation-State Intelligence

The intelligence enterprise, like any other enterprise providing goods and services, includes a diverse set of stakeholders in the enterprise operation. The business model for any intelligence enterprise, as for any business, must clearly identify the stakeholders who own the business and those who produce and consume its goods and services.

  • The owners of the process include the U.S. public and its elected officials, who measure intelligence value in terms of the degree to which national security is maintained. These owners seek awareness and warning of threats to prescribed national interests.
  • Intelligence consumers (customers or users) include national, military, and civilian user agencies that measure value in terms of intelligence contribution to the mission of each organization, measured in terms of its impact on mission effectiveness.
  • Intelligence producers, the most direct users of raw intelligence, include the collectors (HUMINT and technical), processor agencies, and analysts. The principal value metrics of these users are performance based: information accuracy, coverage breadth and depth, confidence, and timeliness.

The purpose and value chains for intelligence (Figure 2.2) are defined by the stakeholders to provide a foundation for the development of specific value measures that assess the contribution of business components to the overall enterprise. The corresponding chains in the U.S. IC include:

  • Source—the source or basis for defining the purpose of intelligence is found in the U.S. Constitution, derivative laws (i.e., the National Security Act of 1947, Central Intelligence Agency Act of 1949, National Security Agency Act of 1959, Foreign Intelligence Surveillance Act of 1978, and Intelligence Organization Act of 1992), and orders of the executive branch [2]. Derived from this are organizational mission documents, such as the Director of Central Intelligence (DCI) Strategic Intent [3], which documents communitywide purpose and vision, as well as derivative guidance documents prepared by intelligence providers.
  • Purpose chain—the causal chain of purposes (objectives) for which the intelligence enterprise exists. The ultimate purpose is national security, enabled by information (intelligence) superiority that, in turn, is enabled by specific purposes of intelligence providers that will result in information superiority.
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—Specific metrics by which values are quantified and articulated by stakeholders and by which the value of the intelligence enterprise is evaluated.

In a similar fashion, business and competitive intelligence have stakeholders that include customers, shareholders, corporate officers, and employees… there must exist a purpose and value chain that guides the KM operations. These typically include:

  • Source—the business charter and mission statement of a business elaborate the market served and the vision for the business’s role in that market.
  • Purpose chain—the objectives of the business require knowledge about internal operations and the market (BI objectives) as well as competitors (CI).
  • Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
  • Measures—Specific metrics by which values are quantified. A balanced set of measures includes vision and strategy, customer, internal, financial, and learning-growth metrics.

2.2 Intelligence Processes and Products

The process that delivers strategic and operational intelligence products is generally depicted in cyclic form (Figure 2.3), with five distinct phases.

In every case, the need is the basis for a logical process to deliver the knowledge to the requestor.

  1. Planning and direction. The process begins as policy and decision makers define, at a high level of abstraction, the knowledge that is required to make policy, strategic, or operational decisions. The requests are parsed into information required, then to data that must be collected to estimate or infer the required answers. Data requirements are used to establish a plan of collection, which details the elements of data needed and the targets (people, places, and things) from which the data may be obtained.
  2. Collection. Following the plan, human and technical sources of data are tasked to collect the required raw data. The next section introduces the major collection sources, which include both openly available and closed sources that are accessed by both human and technical methods.

These sources and methods are among the most fragile [5]—and most highly protected—elements of the process. Sensitive and specially compartmented collection capabilities that are particularly fragile exist across all of the collection disciplines.

  3. Processing. The collected data is processed (e.g., machine translation, foreign language translation, or decryption), indexed, and organized in an information base. Progress on meeting the requirements of the collection plan is monitored and the tasking may be refined on the basis of received data.
  4. All-source analysis-synthesis and production. The organized information base is processed using estimation and inferential (reasoning) techniques that combine all-source data in an attempt to answer the requestor’s questions. The data is analyzed (broken into components and studied) and solutions are synthesized (constructed from the accumulating evidence). The topics or subjects (intelligence targets) of study are modeled, and requests for additional collection and processing may be made to acquire sufficient data and achieve a sufficient level of understanding (or confidence to make a judgment) to answer the consumer’s questions.
  5. Dissemination. Finished intelligence is disseminated to consumers in a variety of formats, ranging from dynamic operating pictures of warfighters’ weapon systems to formal reports to policymakers. Three categories of formal strategic and tactical intelligence reports are distinguished by their past, present, and future focus: current intelligence reports are news-like reports that describe recent events or indications and warnings, basic intelligence reports provide complete descriptions of a specific situation (e.g., order of battle or political situation), and intelligence estimates attempt to predict feasible future outcomes as a result of current situation, constraints, and possible influences [6].

Though introduced here in the classic form of a cycle, in reality the process operates as a continuum of actions with many more feedback (and feedforward) paths that require collaboration between consumers, collectors, and analysts.
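As a reading aid, here is a compact Python sketch (my own, not from the text) of the five-phase cycle expressed as a pipeline; the function bodies are placeholders, and, as noted above, real operations run as a continuum with feedback rather than a one-way chain.

```python
from typing import Iterable, List


def plan(requirements: List[str]) -> List[str]:
    """Phase 1: translate consumer questions into collection taskings."""
    return [f"collect:{r}" for r in requirements]


def collect(taskings: Iterable[str]) -> List[str]:
    """Phase 2: task human and technical sources and gather raw data."""
    return [f"raw({t})" for t in taskings]


def process(raw: Iterable[str]) -> List[str]:
    """Phase 3: translate, decrypt, index, and organize raw data into an information base."""
    return [f"indexed({r})" for r in raw]


def analyze(information: Iterable[str]) -> str:
    """Phase 4: all-source analysis-synthesis combining evidence into an answer."""
    return "assessment drawn from: " + ", ".join(information)


def disseminate(product: str) -> str:
    """Phase 5: deliver the finished product to consumers."""
    return f"REPORT: {product}"


def intelligence_cycle(requirements: List[str]) -> str:
    # The strict chaining here is a simplification; in practice every phase feeds
    # back to the others (e.g., analysis refining collection tasking).
    return disseminate(analyze(process(collect(plan(requirements)))))


print(intelligence_cycle(["force disposition in region X"]))
```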

2.3 Intelligence Collection Sources and Methods

A taxonomy of intelligence data sources includes sources that are openly accessible or closed (e.g., denied areas, secured communications, or clandestine activities). Due to the increasing access to electronic media (i.e., telecommunications, video, and computer networks) and the global expansion of democratic societies, OSINT is becoming an increasingly important source of global data. While OSINT must be screened and cross validated to filter errors, duplications, and deliberate misinformation (as do all sources), it provides an economical source of public information and is a contributor to other sources for cueing, indications, and confirmation

Measurements and signatures intelligence (MASINT) is technically derived knowledge from a wide variety of sensors, individual or fused, either to perform special measurements of objects or events of interest or to obtain signatures for use by the other intelligence sources. MASINT is used to characterize the observable phenomena (observables) of the environment and objects of surveillance.

U.S. intelligence studies have pointed out specific changes in the use of these sources as the world increases globalization of commerce and access to social, political, economic, and technical information [10–12]:

  • The increase in unstructured and transnational threats requires the robust use of clandestine HUMINT sources to complement extensive technical verification means.
  • Technical means of collection are required for both broad area coverage and detailed assessment of the remaining denied areas of the world.

2.3.1 HUMINT Collection

HUMINT refers to all information obtained directly from human sources

HUMINT sources may be overt or covert (clandestine); the most common categories include:

  • Clandestine intelligence case officers. These officers are own-country individuals who operate under a clandestine “cover” to collect intelligence and “control” foreign agents to coordinate collections.
  • Agents. These are foreign individuals with access to targets of intelligence who conduct clandestine collection operations as representatives of their controlling intelligence officers. These agents may be recruited or “walk-in” volunteers who act for a variety of ideological, financial, or personal motives.
  • Émigrés, refugees, escapees, and defectors. The open, overt (yet discrete) programs to interview these recently arrived foreign individuals provide background information on foreign activities as well as occasional information on high-value targets.
  • Third party observers. Cooperating third parties (e.g., third-party countries and travelers) can also provide a source of access to information.

The HUMINT discipline follows a rigorous process for acquiring, employing, and terminating the use of human assets that follows a seven-step sequence. The sequence followed by case officers includes:

  1. Spotting—locating, identifying, and securing low-level contact with agent candidates;
  2. Evaluation—assessment of the potential (i.e., value or risk) of the spotted individual, based on a background investigation;
  3. Recruitment—securing the commitment from the individual;
  4. Testing—evaluation of the loyalty of the agent;
  5. Training—supporting the agent with technical experience and tools;
  6. Handling—supporting and reinforcing the agent’s commitment;
  7. Termination—completion of the agent assignment by ending the relationship.

 

HUMINT is dependent upon the reliability of the individual source, and lacks the collection control of technical sensors. Furthermore, the level of security to protect human sources often limits the fusion of HUMINT reports with other sources and the dissemination to wider customer bases. Directed high-risk HUMINT collections are generally viewed as a precious resource to be used for high-value targets to obtain information unobtainable by technical means or to validate hypotheses created by technical collection analysis.

2.3.2 Technical Intelligence Collection

Technical collection is performed by a variety of electronic (e.g., electromechanical, electro-optical, or bioelectronic) sensors placed on platforms in space, the atmosphere, on the ground, and at sea to measure physical phenomena (observables) related to the subjects of interest (intelligence targets).

The operational utility of these collectors for each intelligence application depends upon several critical factors:

  • Timeliness—the time from collection of event data to delivery of a tactical targeting cue, operational warnings and alerts, or formal strategic report;
  • Revisit—the frequency with which a target of interest can be revisited to understand or model (track) dynamic behavior;
  • Accuracy—the spatial, identity, or kinematic accuracy of estimates and predictions;
  • Stealth—the degree of secrecy with which the information is gathered and the measure of intrusion required.
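A small hedged sketch (not from the book) of how these four utility factors might be captured when comparing candidate collectors; the 0-to-1 scales and the weights are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Tuple


@dataclass
class CollectorAssessment:
    """Operational utility factors for a technical collector, each normalized to 0.0-1.0 (assumed scale)."""
    timeliness: float  # delay from collection of event data to delivery of a cue, alert, or report
    revisit: float     # how frequently the target of interest can be revisited
    accuracy: float    # spatial, identity, or kinematic accuracy of estimates and predictions
    stealth: float     # secrecy of the gathering and (inverse of) the intrusion required

    def utility(self, weights: Tuple[float, float, float, float] = (0.25, 0.25, 0.25, 0.25)) -> float:
        """Simple weighted score for ranking collectors against a given intelligence application."""
        factors = (self.timeliness, self.revisit, self.accuracy, self.stealth)
        return sum(w * f for w, f in zip(weights, factors))


# Example: a tracking mission that weights timeliness and revisit most heavily.
candidate = CollectorAssessment(timeliness=0.6, revisit=0.4, accuracy=0.9, stealth=0.8)
print(round(candidate.utility(weights=(0.4, 0.3, 0.2, 0.1)), 2))
```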

2.4 Collection and Process Planning

The technical collection process requires the development of a detailed collection plan, which begins with the decomposition of the subject target into activities, observables, and then collection requirements.

From this plan, technical collectors are tasked and data is collected and fused (a composition, or reconstruction that is the dual of the decomposition process) to derive the desired intelligence about the target.
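The decomposition-then-composition flow described above can be pictured with a short illustrative data model (my own sketch; the class names and the example target are hypothetical).

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Activity:
    """An activity of the subject target that produces physical observables."""
    name: str
    observables: List[str] = field(default_factory=list)


@dataclass
class CollectionRequirement:
    """A taskable requirement derived from one observable."""
    observable: str
    discipline: str = "TBD"  # candidate discipline (e.g., IMINT, SIGINT, MASINT) chosen during planning


@dataclass
class CollectionPlan:
    """Decomposition of a target into activities, observables, and collection requirements."""
    target: str
    activities: List[Activity] = field(default_factory=list)

    def requirements(self) -> List[CollectionRequirement]:
        # Flatten the decomposition; fusion later recomposes collected data about the same target.
        return [CollectionRequirement(observable=o)
                for activity in self.activities for o in activity.observables]


plan = CollectionPlan(
    target="suspect production facility",
    activities=[Activity("shipping", ["truck traffic volume", "rail car counts"]),
                Activity("operations", ["thermal signature", "power consumption"])],
)
print(len(plan.requirements()))  # four requirements derived from the decomposition
```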

2.5 KM in the Intelligence Process

The intelligence process must deal with large volumes of source data, converting a wide range of text, imagery, video, and other media types into organized information, then performing the analysis-synthesis process to deliver knowledge in the form of intelligence products.

IT is providing increased automation of the information indexing, discovery, and retrieval (IIDR) functions for intelligence, especially the exponentially increasing volumes of global open-source data.

 

The functional information flow in an automated or semiautomated facility (depicted in Figure 2.5) requires digital archiving and analysis to ingest continuous streams of data and manage large volumes of analyzed data. The flow can be broken into three phases:

  1. Capture and compile;
  2. Preanalysis;
  3. Exploitation (analysis-synthesis).

The preanalysis phase indexes each data item (e.g., article, message, news segment, image, book or chapter) by assigning a reference for storage; generating an abstract that summarizes the content of the item and metadata with a description of the source, time, reliability-confidence, and relationship to other items (abstracting); and extracting critical descriptors of content that characterize the contents (e.g., keywords) or meaning (deep indexing) of the item for subsequent analysis. Spatial data (e.g., maps, static imagery, or video imagery) must be indexed by spatial context (spatial location) and content (imagery content).

The indexing process applies standard subjects and relationships, maintained in a lexicon and thesaurus that is extracted from the analysis information base. Following indexing, data items are clustered and linked before entry into the analysis base. As new items are entered, statistical analyses are performed to monitor trends or events against predefined templates that may alert analysts or cue their focus of attention in the next phase of processing.
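A toy sketch of the preanalysis step described above: assigning a storage reference, building an abstract and source metadata, and extracting keyword descriptors. This is my own illustration under simplifying assumptions (a real system would use proper abstracting, a controlled lexicon and thesaurus, and deep indexing).

```python
from dataclasses import dataclass, field
from typing import Dict, List
import hashlib


@dataclass
class IndexedItem:
    """One data item after preanalysis: reference, abstract, metadata, and content descriptors."""
    ref: str
    abstract: str
    metadata: Dict[str, str]
    keywords: List[str] = field(default_factory=list)


STOPWORDS = {"the", "a", "an", "of", "and", "to", "in", "is", "near"}


def preanalyze(text: str, source: str, collected_at: str) -> IndexedItem:
    ref = hashlib.sha1(text.encode("utf-8")).hexdigest()[:12]  # storage reference for the item
    abstract = text[:120]                                      # naive stand-in for abstracting
    keywords = sorted({w.lower().strip(".,") for w in text.split()
                       if w.lower() not in STOPWORDS and len(w) > 3})
    metadata = {"source": source, "time": collected_at, "reliability": "unrated"}
    return IndexedItem(ref=ref, abstract=abstract, metadata=metadata, keywords=keywords)


item = preanalyze("Unusual rail traffic reported near the eastern depot.",
                  source="OSINT/news", collected_at="2002-06-01T08:00Z")
print(item.ref, item.keywords)
```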

The categories of automated tools that are applied to the analysis information base include the following:

  • Interactive search and retrieval tools permit analysts to search by content, topic, or related topics using the lexicon and thesaurus subjects.
  • Structured judgment analysis tools provide visual methods to link data, synthesize deductive logic structures, and visualize complex relationships between data sets. These tools enable the analyst to hypothesize, explore, and discover subtle patterns and relationships in large data volumes—knowledge that can be discerned only when all sources are viewed in a common context.
  • Modeling and simulation tools model hypothetical activities, allowing modeled (expected) behavior to be compared to evidence for validation or projection of operations under scrutiny.
  • Collaborative analysis tools permit multiple analysts in related subject areas, for example, to collaborate on the analysis of a common subject.
  • Data visualization tools present synthetic views of data and information to the analyst to permit patterns to be examined and discovered.

2.6 Intelligence Process Assessments and Reengineering

The U.S. IC has been assessed throughout and since the close of the Cold War to study the changes necessary to adapt to advanced collection capabilities, changing security threats, and the impact of global information connectivity and information availability. Published results of these studies provide insight into the areas of intelligence effectiveness that may be enhanced by organizing the community into a KM enterprise. We focus here on the technical aspects of the changes rather than the organizational aspects recommended in numerous studies.

2.6.1 Balancing Collection and Analysis

Intelligence assessments have evaluated the utility of intelligence products and the balance of investment between collection and analysis.

2.6.2 Focusing Analysis-Synthesis

An independent study [21] of U.S. intelligence recommended a need for intelligence to sharpen the focus of analysis-synthesis resources to deal with the increased demands by policymakers for knowledge on a wider range of topics, the growing breadth of secret and open sources, and the availability of commercial open-source analysis.

2.6.3 Balancing Analysis-Synthesis Processes

One assessment conducted by the U.S. Congress reviewed the role of analysis-synthesis and the changes necessary for the community to reengineer its processes from a Cold War to a global awareness focus. Emphasizing the crucial role of analysis, the commission noted:

The raison d’etre of the Intelligence Community is to provide accurate and meaningful information and insights to consumers in a form they can use at the time they need them. If intelligence fails to do that, it fails altogether. The expense and effort invested in collecting and processing the information have gone for naught.

The commission identified the KM challenges faced by large-scale intelligence analysis that encompasses global issues and serves a broad customer base.

The commission’s major observations provide insight into the emphasis on people-related (rather than technology-related) issues that must be addressed for intelligence to be valued by the policy and decision makers that consume intelligence:

  1. Build relationships. A concerted effort is required to build relationships between intelligence producers and the policymakers they serve. Producer-consumer relationships range from assignment of intelligence liaison officers with consumers (the closest relationship and greatest consumer satisfaction) to holding regular briefings, or simple producer-subscriber relationships for general broadcast intelligence. Across this range of relationships, four functions must be accomplished for intelligence to be useful:
  • Analysts must understand the consumer’s level of knowledge and the issues they face.
  • Intelligence producers must focus on issues of significance and make information available when needed, in a format appropriate to the unique consumer.
  • Consumers must develop an understanding of what intelligence can and—equally important—cannot do.
  • Both consumer and producer must be actively engaged in a dialogue with analysts to refine intelligence support to decision making.
  2. Increase and expand the scope of analytic expertise. The expertise of the individual analysts and the community of analysts must be maintained at the highest level possible. This expertise is in two areas: domain, or region of focus (e.g., nation, group, weapon systems, or economics), and analytic-synthetic tradecraft. Expertise development should include the use of outside experts, travel to countries of study, sponsorship of topical conferences, and other means (e.g., simulations and peer reviews).
  3. Enhance use of open sources. Open-source data (i.e., publicly available data in electronic and broadcast media, journals, periodicals, and commercial databases) should be used to complement (cue, provide context, and in some cases, validate) special, or closed, sources. The analyst must have command of all available information and the means to access and analyze both categories of data in complementary fashion.
  4. Make analysis available to users. Intelligence producers must increasingly apply dynamic, electronic distribution means to reach consumers for collaboration and distribution. The DoD Joint Deployable Intelligence Support System (JDISS) and IC Intelink were cited as early examples of networked intelligence collaboration and distribution systems.
  5. Enhance strategic estimates. The United States produces national intelligence estimates (NIEs) that provide authoritative statements and forecast judgments about the likely course of events in foreign countries and their implications for the United States. These estimates must be enhanced to provide timely, objective, and relevant data on a wider range of issues that threaten security.
  6. Broaden the analytic focus. As the national security threat envelope has broadened (beyond the narrower focus of the Cold War), a more open, collaborative environment is required to enable intelligence analysts to interact with policy departments, think tanks, and academia to analyze, debate, and assess these new world issues.

In the half decade since the commission recommendations were published, the United States has implemented many of the recommendations. Several examples of intelligence reengineering include:

  • Producer-consumer relationships. The introduction of collaborative networks, tools, and soft-copy products has permitted less formal interaction and more frequent exchange between consumers and producers. This allows intelligence producers to better understand consumer needs and decision criteria. This has enabled the production of more focused, timely intelligence.
  • Analytic expertise. Enhancements in analytic training and the increased use of computer-based analytic tools and even simulation are providing greater experience—and therefore expertise—to human analysts.
  • Open source. Increased use of open-source information via commercial providers (e.g., Lexis-Nexis™ subscription clipping services tailored to topics) and the Internet has provided an effective source for obtaining background information. This enables special sources and methods to focus on validation of critical implications.
  • Analysis availability. The use of networks continues to expand for both collaboration (between analysts and consumers as well as between analysts) and distribution. This collaboration was enabled by the introduction and expansion of the classified Internet (Intelink) that interconnects the IC [24].
  • Broadened focus. The community has coordinated open panels to discuss, debate, and collaboratively analyze and openly publish strategic perspectives of future security issues. One example is the “Global Trends 2015” report that resulted from a long-term collaboration with academia, the private sector, and topic area experts [25].

2.7 The Future of Intelligence

The two primary dimensions of future threats to national (and global) security are the source (from nation-state actors to non-state actors) and the threat-generating mechanism (from the continuous results of rational nation-state behaviors to discontinuities in complex world affairs). These threat changes and the contrast in intelligence are summarized in Table 2.4. Notice that these changes coincide with the transition from sensor-centric to network- and knowledge-centric approaches to intelligence introduced in Chapter 1.

To meet these threats, intelligence must focus on knowledge creation in an enterprise environment that is prepared to rapidly reinvent itself to adapt to emergent threats.

3
Knowledge Management Processes

KM is the term adopted by the business community in the mid-1990s to describe a wide range of strategies, processes, and disciplines that formalize and integrate an enterprise’s approach to organizing and applying its knowledge assets. Some have wondered what is truly new about the concept of managing knowledge. Indeed, many pure knowledge-based organizations (insurance companies, consultancies, financial management firms, futures brokers, and of course, intelligence organizations) have long “managed” knowledge—and such management processes have been the core competency of the business.

The scope of knowledge required by intelligence organizations has increased in depth and breadth as commerce has networked global markets and world threats have diversified from a monolithic Cold War posture. The global reach of networked information, both open and closed sources, has produced a deluge of data—requiring computing support to help human analysts sort, locate, and combine specific data elements to provide rapid, accurate responses to complex problems. Finally, the formality of the KM field has grown significantly in the past decade—developing theories for valuing, auditing, and managing knowledge as an intellectual asset; strategies for creating, reusing, and leveraging the knowledge asset; processes for conducting collaborative transactions of knowledge among humans and machines; and network information technologies for enabling and accelerating these processes.

3.1 Knowledge and Its Management

In the first chapter, we introduced the growing importance of knowledge as the central resource for competition, for both the nation-state and the business enterprise. Because of this, the importance of intelligence organizations providing strategic knowledge to public- and private-sector decision makers is paramount. We can summarize this importance of intelligence to the public or private enterprise in three assertions about knowledge.

First, knowledge has become the central asset or resource for competitive advantage. In the Tofflers’ third wave, knowledge displaces capital, labor, and natural resources as the principal reserve of the enterprise. This is true in wealth creation by businesses and in national security and the conduct of warfare for nation-states.

Second, the management of the knowledge resource is more complex than that of other resources. The valuation and auditing of knowledge is unlike that of physical labor or natural resources; knowledge is not measured by “head counts” or capital valuation of physical inventories, facilities, or raw materials (like stockpiles of iron ore, fields of cotton, or petroleum reserves). New methods of quantifying the abstract entity of knowledge—both in people and in explicit representations—are required. To meet this challenge, knowledge managers must develop means to capture, store, create, and exchange knowledge, while dealing with the sensitive security issue of knowing when to protect and when to share (the trade-off between the restrictive “need to know” and the collaborative “need to share”).

The third assertion about knowledge is that its management therefore requires a delicate coordination of people, processes, and supporting technologies to achieve the enterprise objectives of security, stability, and growth in a dynamic world:

  • People. KM must deal with cultures and organizational structures that enable and reward the growth of knowledge through collaborative learning, reasoning, and problem solving.
  • Processes. KM must also provide an environment for exchange, discovery, retention, use, and reuse of knowledge across the organization.
  • Technologies. Finally, IT must be applied to enable the people and processes to leverage the intellectual asset of actionable knowledge.

 

Definitions of KM as a formal activity are as diverse as its practitioners (Table 3.1), but all have in common the following general characteristics:

KM is based on a strategy that accepts knowledge as the central resource to achieve business goals and that knowledge—in the minds of its people, embedded in processes, and in explicit representations in knowledge bases—must be regarded as an intellectual form of capital to be leveraged. Organizational values must be coupled with the growth of this capital.

KM involves a process that, like a supply chain, moves from raw materials (data) toward knowledge products. The process involves acquiring raw data; sorting, filtering, indexing, and organizing it into information; reasoning (analyzing and synthesizing) to create knowledge; and finally disseminating that knowledge to users. But this supply chain is not a “stovepiped” process (a narrow, vertically integrated and compartmented chain); it horizontally integrates the organization, allowing collaboration across all areas of the enterprise where knowledge sharing provides benefits.
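To make the supply-chain view concrete, the following Python fragment is a minimal sketch, with hypothetical field names and content, of how raw data items might be organized into information by topic and then synthesized into a knowledge product for dissemination; it illustrates the flow of the chain rather than any particular KM system.

```python
from collections import defaultdict

# Hypothetical raw "data" items: unprocessed observations tagged with a source and topic.
raw_data = [
    {"source": "open", "topic": "shipping", "text": "Port traffic up 12%"},
    {"source": "closed", "topic": "shipping", "text": "New berth under construction"},
    {"source": "open", "topic": "finance", "text": "Carrier X raises capital"},
]

def organize(data):
    """Data -> information: filter, index, and organize items by topic (context)."""
    indexed = defaultdict(list)
    for item in data:
        indexed[item["topic"]].append(item)
    return indexed

def synthesize(information, topic):
    """Information -> knowledge: combine related items into an explained product."""
    items = information.get(topic, [])
    rationale = [f'{i["source"]}: {i["text"]}' for i in items]
    return {
        "topic": topic,
        "assessment": f"{len(items)} related reports indicate activity on '{topic}'",
        "rationale": rationale,  # supporting evidence retained for drill-down
    }

def disseminate(product):
    """Knowledge -> consumers: deliver the product (here, simply print it)."""
    print(product["assessment"])
    for line in product["rationale"]:
        print("  -", line)

info = organize(raw_data)
disseminate(synthesize(info, "shipping"))
```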

KM embraces a discipline and cultural values that accept the necessity for sharing purpose, values, and knowledge across the enterprise to leverage group diversity and perspectives to promote learning and intellectual problem solving. Collaboration, fully engaged communication and cognition, is required to network the full intellectual power of the enterprise.

The U.S. National Security Agency (NSA) has adopted the following “people-oriented” definition of KM to guide its own intelligence efforts:

Strategies and processes to create, identify, capture, organize and leverage vital skills, information and knowledge to enable people to best accomplish the organizational mission.

The DoD has further recognized that KM is the critical enabler for information superiority:

The ability to achieve and sustain information superiority depends, in large measure, upon the creation and maintenance of reusable knowledge bases; the ability to attract, train, and retain a highly skilled work force proficient in utilizing these knowledge bases; and the development of core business processes designed to capitalize upon these assets.

The processes by which abstract knowledge results in tangible effects can be examined as a net of influences that affect knowledge creation and decision making.

The flow of influences in the figure illustrates the essential contributions of shared knowledge.

  1. Dynamic knowledge. At the central core is a comprehensive and dynamic understanding of the complex (business or national security) situation that confronts the enterprise. This understanding accumulates over time to provide a breadth and depth of shared experience, or organizational memory.
  2. Critical and systems thinking. Situational understanding and accumulated experience enable dynamic modeling to provide forecasts from current situations—supporting the selection and adaptation of organizational goals. Comprehensive understanding (perception) and thorough evaluation of optional courses of action (judgment) enhance decision making. As experience accumulates and situational knowledge is refined, critical explicit thinking and tacit sensemaking about current situations and the consequences of future actions are enhanced.
  3. Shared operating picture. Shared pictures of the current situation (common operating picture), past situations and outcomes (experience), and forecasts of future outcomes enable the analytic workforce to collaborate and self-synchronize in problem solving.
  4. Focused knowledge creation. Underlying these functions is a focused data and experience acquisition process that tracks and adapts as the business or security situation changes.

While Figure 3.1 maps the general influences of knowledge on goal setting, judgment, and decision making in an enterprise, an understanding of how knowledge influences a particular enterprise in a particular environment is necessary to develop a KM strategy. Such a strategy seeks to enhance organizational knowledge in these four basic areas, as well as the information security needed to protect the intellectual assets.

3.2 Tacit and Explicit Knowledge

In the first chapter, we offered a brief introduction to the hierarchical taxonomy of data, information, and knowledge; here we must refine our understanding of knowledge and how it is constructed before we delve into the details of management processes.

In this chapter, we distinguish between the knowledge-creation processes within the knowledge-creating hierarchy (Figure 3.2). The hierarchy illustrates the distinction we make, in common terminology, between explicit (represented and defined) processes and those that are implicit (or tacit: knowledge processes that are unconscious and not readily articulated).

3.2.1 Knowledge As Object

The most common understanding of knowledge is as an object—the accumulation of things perceived, discovered, or learned. From this perspective, data (raw measurements or observations), information (data organized, related, and placed in context), and knowledge (information explained and the underlying processes understood) are also objects. The KM field has adopted two basic distinctions in the categories of knowledge as object:

  1. Explicit knowledge. This is the better known form of knowledge that has been captured and codified in abstract human symbols (e.g., mathematics, logical propositions, and structured and natural language). It is tangible, external (to the human), and logical. This documented knowledge can be stored, repeated, and taught by books because it is impersonal and universal. It is the basis for logical reasoning and, most important of all, it enables knowledge to be communicated electronically and reasoning processes to be automated.
  2. Tacit knowledge. This is the intangible, internal, experiential, and intuitive knowledge that is undocumented and maintained in the human mind. It is a personal knowledge contained in human experience. Philosopher Michael Polanyi pioneered the description of such knowledge in the 1950s, considering the results of Gestalt psychology and the philosophic conflict between moral conscience and scientific skepticism. In The Tacit Dimension, he describes a kind of knowledge that we cannot tell. This tacit knowledge is characterized by intangible factors such as perception, belief, values, skill, “gut” feel, intuition, “know-how,” or instinct; this knowledge is unconsciously internalized and cannot be explicitly described (or captured) without effort.

An understanding of the relationship between knowledge and mind is of particular interest to the intelligence discipline because it serves two purposes:

  1. Mind as knowledge manager. Understanding of the processes of exchanging tacit and explicit knowledge will, of course, aid the KM process itself. This understanding will enhance the efficient exchange of knowledge between mind and computer—between internal and external representations.
  2. Mind as intelligence target. Understanding of the complete human processes of reasoning (explicit logical thought) and sensemaking (tacit, emotional insight) will enable more representative modeling of adversarial thought processes. This is required to understand the human mind as an intelligence target—representing perceptions, beliefs, motives, and intentions.

Previously, we have used the terms resource and asset to describe knowledge, but knowledge is not only an object or a commodity to be managed. It can also be viewed as dynamic, embedded in the processes that lead to action. In the next section, we explore this complementary perspective of knowledge.

3.2.2 Knowledge As Process

Knowledge can also be viewed as the action, or dynamic process of creation, that proceeds from unstructured content to structured understanding. This perspective considers knowledge as action—as knowing. Because knowledge explains the basis for information, it relates static information to a dynamic reality. Knowing is uniquely tied to the creation of meaning.

Karl Weick introduced the term sensemaking to describe the tacit knowing process of retrospective rationality—the method by which individuals and organizations seek to rationally account for things by going back in time to structure events and explanations holistically. We do this to “make sense” of reality as we perceive it and to create a base of experience, shared meaning, and understanding.

To model and manage the knowing process of an organization requires attention to both of these aspects of knowledge—one perspective emphasizing cognition, the other emphasizing culture and context. The general knowing process includes four basic phases that can be described in process terms that apply to tacit and explicit knowledge, in human and computer terms, respectively.

  1. Acquisition. This process acquires knowledge by accumulating data through human observation and experience or technical sensing and measurement. The capture of e-mail discussion threads, point-of-sale transactions, or other business data, as well as digital imaging or signals analysis, are but examples of the wide diversity of acquisition methods.
  2. Maintenance. Acquired explicit data is represented in a standard form, organized, and stored for subsequent analysis and application in digital databases. Tacit knowledge is stored by humans as experience, skill, or expertise, though it can be elicited and converted to explicit form in terms of accounts, stories (rich explanations), procedures, or explanations.
  3. Transformation. The conversion of data to knowledge, and of knowledge from one form to another, is the creative stage of KM. This knowledge-creation stage involves more complex processes like internalization, intuition, and conceptualization (for internal tacit knowledge) and correlation and analytic-synthetic reasoning (for explicit knowledge). In the next subsection, this process is described in greater detail.
  4. Transfer. The distribution of acquired and created knowledge across the enterprise is the fourth phase. Tacit distribution includes the sharing of experiences, collaboration, stories, demonstrations, and hands-on training. Explicit knowledge is distributed by mathematical, graphical, and textual representations, from magazines and textbooks to electronic media.

These phases can be compared with the three phases of organizational knowing (focusing on culture) described by Davenport and Prusak in their text Working Knowledge [17]:

  1. Generation. Organizational networks generate knowledge by social processes of sharing, exploring, and creating tacit knowledge (stories, experiences, and concepts) and explicit knowledge (raw data, organized databases, and reports). But these networks must be properly organized for diversity of both experience and perspective and placed under appropriate stress (challenge) to perform. Dedicated cross-functional teams, appropriately supplemented by outside experts and provided a suitable challenge, are the incubators for organizational knowledge generation.
  2. Codification and coordination. Codification explicitly represents generated knowledge and the structure of that knowledge by a mapping process. The map (or ontology) of the organization’s knowledge allows individuals within the organization to locate experts (tacit knowledge holders), databases (of explicit knowledge), and tacit-explicit networks (a minimal knowledge-map sketch follows this list). The coordination process models the dynamic flow of knowledge within the organization and allows the creation of narratives (stories) to exchange tacit knowledge across the organization.
  3. Transfer. Knowledge is transferred within the organization as people interact; this occurs as they are mentored, temporarily exchanged, transferred, or placed in cross-functional teams to experience new perspectives, challenges, or problem-solving approaches.
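The codification step can be illustrated with a minimal knowledge-map sketch. The Python fragment below assumes hypothetical topics, experts, and database names; a real organizational ontology would be far richer, but the principle of locating tacit knowledge holders and explicit knowledge stores by topic is the same.

```python
# Hypothetical knowledge map (ontology): topics -> holders of tacit and explicit knowledge.
knowledge_map = {
    "counterproliferation": {
        "experts": ["analyst_a", "analyst_b"],   # tacit knowledge holders
        "databases": ["wmd_reports_db"],         # explicit knowledge stores
    },
    "regional economics": {
        "experts": ["analyst_c"],
        "databases": ["open_source_econ_db", "trade_stats_db"],
    },
}

def locate(topic):
    """Return the experts and databases mapped to a topic, if any."""
    entry = knowledge_map.get(topic)
    if entry is None:
        return f"No mapped knowledge for '{topic}'"
    return (f"Experts: {', '.join(entry['experts'])}; "
            f"databases: {', '.join(entry['databases'])}")

print(locate("regional economics"))
```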

3.2.3 Knowledge Creation Model

Nonaka and Takeuchi describe four modes of conversion, derived from the possible exchanges between two knowledge types (Figure 3.5):

  1. Tacit to tacit—socialization. Through social interactions, individuals within the organization exchange experiences and mental models, transferring the know-how of skills and expertise. The primary form of transfer is narrative—storytelling—in which rich context is conveyed and subjective understanding is compared, “reexperienced,” and internalized. Classroom training, simulation, observation, mentoring, and on-the-job training (practice) build experience; moreover, these activities also build teams that develop shared experience, vision, and values. The socialization process also allows consumers and producers to share tacit knowledge about needs and capabilities, respectively.
  2. Tacit to explicit—externalization. The articulation and explicit codification of tacit knowledge moves it from the internal to external. This can be done by capturing narration in writing, and then moving to the construction of metaphors, analogies, and ultimately models. Externalization is the creative mode where experience and concept are expressed in explicit concepts—and the effort to express is in itself a creative act. (This mode is found in the creative phase of writing, invention, scientific discovery, and, for the intelligence analyst, hypothesis creation.)
  3. Explicit to explicit—combination. Once explicitly represented, different objects of knowledge can be characterized, indexed, correlated, and combined. This process can be performed by humans or computers and can take on many forms. Intelligence analysts compare multiple accounts, cable reports, and intelligence reports regarding a common subject to derive a combined analysis. Military surveillance systems combine (or fuse) observations from multiple sensors and HUMINT reports to derive aggregate force estimates. Market analysts search (mine) sales databases for patterns of behavior that indicate emerging purchasing trends. Business developers combine market analyses, research and development results, and cost analyses to create strategic plans. These examples illustrate the diversity of the combination processes that combine explicit knowledge.
  4. Explicit to tacit—internalization. Individuals and organizations internalize knowledge by hands-on experience in applying the results of combination. Combined knowledge is tested, evaluated, and results in new tacit experience. New skills and expertise are developed and integrated into the tacit knowledge of individuals and teams. (A simplified sketch of the full conversion cycle follows this list.)
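The four modes can be pictured as one repeating cycle. The following short Python sketch is only a schematic of the conversion sequence, not Nonaka and Takeuchi's formal model; the comments summarize each mode as described above.

```python
from enum import Enum
from itertools import cycle, islice

class Mode(Enum):
    SOCIALIZATION = "tacit -> tacit"         # share experience through dialogue and practice
    EXTERNALIZATION = "tacit -> explicit"    # articulate experience as explicit models
    COMBINATION = "explicit -> explicit"     # correlate and combine explicit knowledge
    INTERNALIZATION = "explicit -> tacit"    # build new experience by applying combined results

# One full turn of the spiral, in order; a real organization iterates indefinitely.
for mode in islice(cycle(Mode), 4):
    print(f"{mode.name:<16} ({mode.value})")
```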

Nonaka and Takeuchi further showed how these four modes of conversion operate in an unending spiral sequence to create and transfer knowledge throughout the organization.

Organizations that have redundancy of information (in people, processes, and databases) and diversity in their makeup (also in people, processes, and databases) will enhance the ability to move along the spiral. The modes of activity benefit from a diversity of people: socialization requires some who are stronger in dialogue to elicit tacit knowledge from the team; externalization requires others who are skilled in representing knowledge in explicit forms; and internalization benefits from those who experiment, test ideas, and learn from experience, with the new concepts or hypotheses arising from combination.

Organizations can also benefit from creative chaos—changes that punctuate states of organizational equilibrium. These states include static presumptions, entrenched mindsets, and established processes that may have lost validity in a changing environment. Rather than destabilizing the organization, the injection of appropriate chaos can bring new-perspective reflection, reassessment, and renewal of purpose. Such change can restart tacit-explicit knowledge exchange, where the equilibrium has brought it to a halt.

3.3 An Intelligence Use Case Spiral

We follow a distributed crisis intelligence cell, using networked collaboration tools, through one complete cycle to illustrate the spiral. This case is deliberately chosen because it stresses the spiral (no face-to-face interaction by the necessarily distributed team, very short time to interact, the temporary nature of the team, and no common “organizational” membership), yet it clearly illustrates the phases of tacit-explicit exchange and the practical insight into actual intelligence-analysis activities provided by the model.

3.3.1 The Situation

The crisis in small but strategic Kryptania emerged rapidly. Vital national interests—security of U.S. citizens, U.S. companies and facilities, and the stability of the fledgling democratic state—were at stake. Subtle but cascading effects in the environmental, economic, and political domains triggered the small political liberation front (PLF) to initiate overt acts of terrorism against U.S. citizens, facilities, and embassies in the region while seeking to overthrow the fledgling democratic government.

3.3.2 Socialization

Within 10 hours of team formation, all members participate in an on-line SBU kickoff meeting (same-time, different-place teleconference collaboration) that introduces all members, describes the group’s intelligence charter and procedures, explains security policy, and details the use of the portal/collaboration workspace created for the team. The team leader briefs the current situation and the issues: areas of uncertainty, gaps in knowledge or collection, needs for information, and possible courses of events that must be better understood. The group is allowed time to exchange views and form its own subgroups around the areas of contribution each individual can bring to the problem. Individuals express concepts for new sources for collection and methods of analysis. In this phase, the dialogue of the team, even though not face to face, is invaluable in rapidly establishing trust and a shared vision for the critical task over the ensuing weeks of the crisis.

3.3.3 Externalization

The initial discussions lead to the creation of initial explicit models of the threat that are developed by various team members and posted on the portal for all to see.

The team collaboratively reviews and refines these models by updating new versions (annotated by contributors) and suggesting new submodels (or linking these models into supermodels). This externalization process codifies the team’s knowledge (beliefs) and speculations (to be evaluated) about the threat. Once externalized, the team can apply the analytic tools on the portal to search for data, link evidence, and construct hypothesis structures. The process also allows the team to draw on support from resources outside the team to conduct supporting collections and searches of databases for evidence to affirm, refine, or refute the models.

3.3.4 Combination

The codified models become archetypes that represent current thinking—current prototype hypotheses formed by the group about the threat (who—their makeup; why—their perceptions, beliefs, intents, and timescales; what—their resources, constraints and limitations, capacity, feasible plans, alternative courses of action, vulnerabilities). This prototype-building process requires the group to structure its arguments about the hypotheses and combine evidence to support its claims. The explicit evidence models are combined into higher level explicit explanations of threat composition, capacity, and behavioral patterns.

Initial (tentative) intelligence products are forming in this phase, and the team begins to articulate these prototype products—resulting in alternative hypotheses and even recommended courses of action.

3.3.5 Internalization

As the evidentiary and explanatory models are developed on the portal, the team members discuss (and argue) over the details, internally struggling with acceptance or rejection of the validity of the various hypotheses. Individual team members search for confirming or refuting evidence in their own areas of expertise and discuss the hypotheses with others on the team or colleagues in their domain of expertise (often expressing them in the form of stories or metaphors) to experience support or refutation. This process allows the members to further refine and develop internal belief and confidence in the predictive aspects of the models. As accumulating evidence over the ensuing days strengthens (or refutes) the hypotheses, the process continues to internalize those explanations that the team has developed that are most accurate; they also internalize confidence in the sources and collaborative processes that were most productive for this ramp-up phase of the crisis situation.

3.3.6 Socialization

As the group periodically reconvenes, the subject focuses away from “what we must do” to the evidentiary and explanatory models that have been produced. The dialogue turns from issues of startup processes to model-refinement processes. The group now socializes around a new level of the problem: Gaps in the models, new problems revealed by the models, and changes in the evolving crisis move the spiral toward new challenges to create knowledge about vulnerabilities in the PLF and supporting networks, specific locations of black propaganda creation and distribution, finances of certain funding organizations, and identification of specific operation cells within the Kryptanian government.

3.3.7 Summary

This example illustrates the emergent processes of knowledge creation over the several-day ramp-up period of a distributed crisis intelligence team.

The full spiral moved from team members socializing to exchange the tacit knowledge of the situation toward the development of explicit representations of their tacit knowledge. These explicit models allowed other supporting resources to be applied (analysts external to the group and online analytic tools) to link further evidence to the models and structure arguments for (or against) the models. As the models developed, team members discussed, challenged, and internalized their understanding of the abstractions, developing confidence and hands-on experience as they tested them against emerging reports and discussed them with team members and colleagues. The confidence and internalized understanding then led to a drive for further dialogue—initializing a second cycle of the spiral.

3.4 Taxonomy of KM

Using the fundamental tacit-explicit distinctions, and the conversion processes of socialization, externalization, internalization, and combination, we can establish a helpful taxonomy of the processes, disciplines, and technologies of the broad KM field applied to the intelligence enterprise. A basic taxonomy that categorizes the breadth of the KM field can be developed by distinguishing three areas of distinct (though very related) activities:

  1. People. The foremost area of KM emphasis is on the development of intellectual capital by people and the application of that knowledge by those people. The principal knowledge-conversion process in this area is socialization, and the focus of improvement is on human operations, training, and human collaborative processes. The basis of collaboration is human networks, known as communities of practice—sharing purpose, values, and knowledge toward a common mission. The barriers that challenge this area of KM are cultural in nature.
  2. Processes. The second KM area focuses on human-computer interaction (HCI) and the processes of externalization and internalization. Tacit-explicit knowledge conversions have required the development of tacit-explicit representation aids in the form of information visualization and analysis tools, thinking aids, and decision support systems. This area of KM focuses on the efficient networking of people and machine processes (such autonomous support processes are referred to as agents) to enable the shared reasoning between groups of people and their agents through computer networks. The barrier to achieving robustness in such KM processes is the difficulty of creating a shared context of knowledge among humans and machines.
  3. Processors. The third KM area is the technological development and implementation of computing networks and processes to enable explicit-explicit combination. Network infrastructures, components, and protocols for representing explicit knowledge are the subject of this fast-moving field. The focus of this technology area is networked computation, and the challenges to collaboration lie in the ability to sustain growth and interoperability of systems and protocols.

 

Because the KM field can also be described by the many domains of expertise (or disciplines of study and practice), we can also distinguish five distinct areas of focus that help describe the field. The first two disciplines view KM as a competence of people and emphasize making people knowledgeable:

  1. Knowledge strategists. Enterprise leaders, such as the chief knowledge officer (CKO), focus on the enterprise mission and values, defining value propositions that assign contributions of knowledge to value (i.e., financial or operational). These leaders develop business models to grow and sustain intellectual capital and to translate that capital into organizational values (e.g., financial growth or organizational performance). KM strategists develop, measure, and reengineer business processes to adapt to the external (business or world) environment.
  2. Knowledge culture developers. Knowledge culture development and sustainment is promoted by those who map organizational knowledge and then create training, learning, and sharing programs to enhance the socialization performance of the organization. This includes the cadre of people who make up the core competencies of the organization (e.g., intelligence analysis, intelligence operations, and collection management). In some organizations, a chief learning officer (CLO) is assigned this role to oversee enterprise human capital, just as the chief financial officer (CFO) manages (tangible) financial capital.

The next three disciplines view KM as an enterprise capability and emphasize building the infrastructure to make knowledge manageable:

  1. KM applications. Those who apply KM principles and processes to specific business applications create both processes and products (e.g., software application packages) to provide component or end-to-end services in a wide variety of areas listed in Table 3.10. Some commercial KM applications have been sufficiently modularized to allow them to be outsourced to application service providers (ASPs) [20] that “package” and provide KM services on a per-operation (transaction) basis. This allows some enterprises to focus internal KM resources on organizational tacit knowledge while outsourcing architecture, infrastructure, tools, and technology.
  2. Enterprise architecture. Architects of the enterprise integrate people, processes, and IT to implement the KM business model. The architecting process defines business use cases and process models to develop requirements for data warehouses, KM services, network infrastructures, and computation.
  3. KM technology and tools. Technologists and commercial vendors develop the hardware and software components that physically implement the enterprise. Table 3.10 provides only a brief summary of the key categories of technologies that make up this broad area that encompasses virtually all ITs.

3.5 Intelligence As Capital

We have described knowledge as a resource (or commodity) and as a process in previous sections. Another important perspective of both the resource and the process is that of the valuation of knowledge. The value (utility or usefulness) of knowledge is first and foremost quantified by its impact on the user in the real world.

The value of intelligence goes far beyond financial considerations in national and military intelligence (MI) applications. In these cases, the value of knowledge must be measured by its impact on national interests: the warning time to avert a crisis, the accuracy necessary to deliver a weapon, the completeness to back up a policy decision, or the evidential depth to support an organized-crime conviction. Knowledge, as an abstraction, has no intrinsic value—its value is measured by its impact in the real world.

In financial terms, the valuation of the intangible aspects of knowledge is referred to as capital—intellectual capital. These intangible resources include the personal knowledge, skills, processes, intellectual property, and relationships that can be leveraged to produce assets of equal or greater importance than other organizational resources (land, labor, and capital).

What is this capital value in our representative business? It is composed of four intangible components:

  1. Customer capital. This is the value of established relationships with customers, such as trust and reputation for quality.

Intelligence tradecraft recognizes this form of capital in the form of credibility with consumers—“the ability to speak to an issue with sufficient authority to be believed and relied upon by the intended audience.”

  2. Innovation capital. Innovation in the form of unique strategies, new concepts, processes, and products based on unique experience forms this second category of capital. In intelligence, new and novel sources and methods for unique problems form this component of intellectual capital.
  3. Process capital. Methodologies and systems or infrastructure (also called structural capital) that are applied by the organization make up its process capital. The processes of collection sources and both collection and analytic methods form a large portion of the intelligence organization’s process (and innovation) capital; they are often fragile (once discovered, they may be forever lost) and are therefore carefully protected.
  4. Human capital. The people, individually and in virtual organizations, comprise the human capital of the organization. Their collective tacit knowledge—expressed as dedication, experience, skill, expertise, and insight—forms this critical intangible resource.

O’Dell and Grayson have defined three fundamental categories of value propositions in If Only We Knew What We Know [23]:

  1. Operational excellence. These value propositions seek to boost revenue by reducing the cost of operations through increased operating efficiencies and productivity. These propositions are associated with business process reengineering (BPR), and even business transformation using electronic commerce methods to revolutionize the operational process. These efforts contribute operational value by raising performance in the operational value chain.
  2. Product-to-market excellence. These propositions value the reduction in the time to market from product inception to product launch. Efforts that achieve these values ensure that new ideas move to development and then to product by accelerating the product development process. This value emphasizes the transformation of the business itself (as explained in Section 1.1).
  3. Customer intimacy. These values seek to increase customer loyalty, customer retention, and customer base expansion by increasing intimacy (understanding, access, trust, and service anticipation) with customers. Actions that accumulate and analyze customer data to reduce selling cost while increasing customer satisfaction contribute to this proposition.

For each value proposition, specific impact measures must be defined to quantify the degree to which the value is achieved. These measures quantify the benefits and utility delivered to stakeholders. Using these measures, the value added by KM processes can be observed along the sequential processes in the business operation. This sequence of processes forms a value chain that adds value from raw materials to delivered product.

Different kinds of measures are recommended for organizations in transition from legacy business models. During periods of change, three phases are recognized [24]. In the first phase, users (i.e., consumers, collection managers, and analysts) must be convinced of the benefits of the new approach, and the measures include metrics as simple as the number of consumers taking training and beginning to use services. In the crossover phase, when users begin to transition to the systems, the measures change to usage metrics. Once the system approaches steady-state use, financial-benefit measures are applied. Numerous methods have been defined and applied to describe and quantify economic value, including:

  1. Economic value added (EVA) subtracts the cost of capital invested from net operating profit (an illustrative calculation follows this list);
  2. Portfolio management approaches treat IT projects as individual investments, computing risks, yields, and benefits for each component of the enterprise portfolio;
  3. Knowledge capital is an aggregate measure of management value added (by knowledge) divided by the price of capital [25];
  4. Intangible asset monitor (IAM) [26] computes value in four categories—tangible capital, intangible human competencies, intangible internal structure, and intangible external structure [27].
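As a purely illustrative example of the first measure, EVA reduces to a simple calculation once the operating profit, invested capital, and cost-of-capital rate are known; the figures below are hypothetical.

```python
def economic_value_added(nopat, invested_capital, cost_of_capital_rate):
    """EVA: net operating profit (after tax) minus a charge for the capital invested."""
    capital_charge = invested_capital * cost_of_capital_rate
    return nopat - capital_charge

# Hypothetical figures: $12M net operating profit after tax,
# $80M invested capital, 10% cost of capital.
eva = economic_value_added(12_000_000, 80_000_000, 0.10)
print(f"EVA = ${eva:,.0f}")  # prints: EVA = $4,000,000
```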

The four views of the balanced scorecard (BSC) not only provide a means of “balancing” the measurement of the major causes and effects of organizational performance but also provide a framework for modeling the organization.

3.6 Intelligence Business Strategy and Models

The commercial community has explored a wide range of business models that apply KM (in the widest sense) to achieve key business objectives. These objectives include enhancing customer service to provide long-term customer satisfaction and retention, expanding access to customers (introducing new products and services, expanding to new markets), increasing efficiency in operations (reduced cost of operations), and introducing new network-based goods and services (eCommerce or eBusiness). All of these objectives can be described by value propositions that couple with business financial performance.

The strategies that leverage KM to achieve these objectives fall into two basic categories. The first emphasizes the use of analysis to understand the value chain from first customer contact to delivery. Understanding the value added to the customer by the transactions (as well as delivered goods and services) allows the producer to increase value to the customer. Values that may be added to intelligence consumers by KM include:

• Service values. Greater value in services is provided to policymakers by anticipating their intelligence needs, earning greater user trust in the accuracy and focus of estimates and warnings, and providing more timely delivery of intelligence. Service value is also increased as producers personalize (tailor) and adapt services to the consumer’s interests (needs) as they change.

• Intelligence product values. The value of intelligence products is increased when greater value is “added” by improving accuracy, providing deeper and more robust rationale, focusing conclusions, and building increased consumer confidence (over time).

The second category of strategies (prompted by the eBusiness revolution) seeks to transform the value chain by the introduction of electronic transactions between the customer and retailer. These strategies use network-based advertising, ordering, and even delivery (for information services like banking, investment, and news) to reduce the “friction” of physical-world retailer-customer transactions.

These strategies introduce several benefits—all applicable to intelligence:

  • Disintermediation. This is the elimination of intermediate processes and entities between the customer and producer to reduce transaction friction. This friction adds cost and increases the difficulty for buyers to locate sellers (cost of advertising), for buyers to evaluate products (cost of travel and shopping), for buyers to purchase products (cost of sales), and for sellers to maintain local inventories (cost of delivery). The elimination of “middlemen” (e.g., wholesalers, distributors, and local retailers) by eRetailers such as Amazon.com has reduced transaction and intermediate costs and allowed direct transaction and delivery from producer to customer with only the eRetailer in between. The effect of disintermediation in intelligence is to give users greater and more immediate access to intelligence products (via networks such as the U.S. Intelink) and to analysis services via intelligence portals that span all sources of intelligence.
  • Infomediation. The effect of disintermediation has introduced the role of the information broker (infomediary) between customer and seller, providing navigation services (e.g., shopping agents or auctioning and negotiating agents) that act on behalf of customers [31]. Intelligence communities are moving toward greater cross-functional collection management and analysis, reducing the stovepiped organization of intelligence by collection disciplines (i.e., imagery, signals, and human sources). As this happens, the traditional analysis role requires a higher level of infomediation and greater automation because the analyst is expected (by consumers) to become a broker across a wider range of intelligence sources (including closed and open sources).
  • Customer aggregation. The networking of customers to producers allows rapid analysis of customer actions (e.g., queries for information, browsing through catalogs of products, and purchasing decisions based on information). This analysis enables the producers to better understand customers, aggregate their behavior patterns, and react to (and perhaps anticipate) customer needs. Commercial businesses use these capabilities to measure individual customer patterns and mass market trends to more effectively personalize and target sales and new product developments. Intelligence producers likewise are enabled to analyze warfighter and policymaker needs and uses of intelligence to adapt and tailor products and services to changing security threats.

 

These value chain transformation strategies have produced a simple taxonomy to distinguish eBusiness models into four categories by the level of transaction between businesses and customers:

  1. Business to business (B2B). The large volume of trade between businesses (e.g., suppliers and manufacturers) has been enhanced by network-based transactions (releases of specifications, requests for quotations, and bid responses), reducing the friction between suppliers and producers. High-volume manufacturing industries such as the automakers are implementing B2B models to increase competition among suppliers and reduce bid-quote-purchase transaction friction.
  2. Business to customer (B2C). Direct networked outreach from producer to consumer has enabled the personal computer (e.g., Dell Computer) and book distribution (e.g., Amazon.com) industries to disintermediate local retailers and reach out on a global scale directly to customers. Similarly, intelligence products are now being delivered (pushed) to consumers on secure electronic networks, via subscription and express order services, analogous to this B2C model.
  3. Customer to business (C2B). Networks also allow customers to reach out to a wider range of businesses to gain greater competitive advantage in seeking products and services. In intelligence, the introduction of secure intelligence networks and on-line intelligence product libraries (e.g., common operating picture and map and imagery libraries) allows consumers to pull intelligence from a broader range of sources. (This model enables even greater competition between source providers and provides a means of measuring some aspects of intelligence utility based on actual use of product types.)
  4. Customer to customer (C2C). The C2C model automates the mediation process between consumers, enabling consumers to locate those with similar purchasing-selling interests.

3.7 Intelligence Enterprise Architecture and Applications

Just like commercial businesses, intelligence enterprises:

  • Measure and report to stakeholders the returns on investment. These returns are measured in terms of intelligence performance (i.e., knowledge provided, accuracy and timeliness of delivery, and completeness and sufficiency for decision making) and outcomes (i.e., effects of warnings provided, results of decisions based on knowledge delivered, and utility to set long-term policies).
  • Service customers, the intelligence consumers. This is done by providing goods (intelligence products such as reports, warnings, analyses, and target folders) and services (directed collections and analyses or tailored portals on intelligence subjects pertinent to the consumers).
  • Require intimate understanding of business operations and must adapt those operations to the changing threat environment, just as businesses must adapt to changing markets.
  • Manage a supply chain that involves the anticipation of future needs of customers, the adjustment of the delivery of raw materials (intelligence collections), the production of custom products to a diverse customer base, and the delivery of products to customers just in time [33].

3.7.1 Customer Relationship Management

CRM processes that build and maintain customer loyalty focus on managing the relationship between provider and consumer. The short-term goal is customer satisfaction; the long-term goal is loyalty. Intelligence CRM seeks to provide intelligence content to consumers that anticipates their needs, focuses on the specific information that supports their decision making, and provides drill-down to the supporting rationale and data behind all conclusions. In order to accomplish this, the consumer-producer relationship must be fully described in models that include:

  • Consumer needs and uses of intelligence—applications of intelligence for decision making, key areas of customer uncertainty and lack of knowledge, and specific impact of intelligence on the consumer’s decision making;
  • Consumer transactions—the specific actions that occur between the enterprise and intelligence consumers, including urgent requests, subscriptions (standing orders) for information, incremental and final report deliveries, requests for clarifications, and issuances of alerts.

CRM offers the potential to personalize intelligence delivery to individual decision makers while tracking their changing interests as they browse subject offerings and issue requests through their own custom portals.
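A minimal sketch of such tracking is given below; the transaction records, consumers, and topics are hypothetical, but the idea is simply to aggregate a consumer's transaction history into a profile of changing interests.

```python
from collections import Counter
from datetime import date

# Hypothetical log of consumer transactions with the intelligence producer.
transactions = [
    {"consumer": "policymaker_1", "type": "urgent_request", "topic": "sanctions", "date": date(2002, 3, 1)},
    {"consumer": "policymaker_1", "type": "subscription",   "topic": "sanctions", "date": date(2002, 3, 2)},
    {"consumer": "policymaker_1", "type": "clarification",  "topic": "shipping",  "date": date(2002, 3, 9)},
    {"consumer": "policymaker_2", "type": "urgent_request", "topic": "shipping",  "date": date(2002, 3, 9)},
]

def interest_profile(consumer):
    """Rank a consumer's topics of interest from their transaction history."""
    topics = Counter(t["topic"] for t in transactions if t["consumer"] == consumer)
    return topics.most_common()

print(interest_profile("policymaker_1"))  # e.g., [('sanctions', 2), ('shipping', 1)]
```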

3.7.2 Supply Chain Management

The SCM function monitors and controls the flow of the supply chain, providing internal control of planning, scheduling, inventory control, processing, and delivery.

SCM is the core of B2B business models, seeking to integrate front-end suppliers into an extended supply chain that optimizes the entire production process to slash inventory levels, improve on-time delivery, and reduce the order-to-delivery (and payment) cycle time. In addition to throughput efficiency, the B2B models seek to aggregate orders to leverage the supply chain to gain greater purchasing power, translating larger orders into reduced prices. The key impact measures sought by SCM implementations include the following (two are computed in the sketch after this list):

  • Cash-to-cash cycle time (time from order placement to delivery/payment);
  • Delivery performance (percentage of orders fulfilled on or before request date);
  • Initial fill rate (percentage of orders shipped in supplier’s first shipment);
  • Initial order lead time (supplier response time to fulfill order);
  • On-time receipt performance (percentage of supplier orders received on time).
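Two of these measures can be computed directly from order records, as in the hypothetical sketch below; note that the initial fill rate is computed here on ordered quantity, one common reading of the definition above.

```python
# Hypothetical order records; ISO-format date strings compare correctly as strings.
orders = [
    {"requested": "2002-03-01", "delivered": "2002-02-28", "qty_ordered": 100, "qty_first_shipment": 100},
    {"requested": "2002-03-05", "delivered": "2002-03-07", "qty_ordered": 50,  "qty_first_shipment": 40},
    {"requested": "2002-03-10", "delivered": "2002-03-10", "qty_ordered": 80,  "qty_first_shipment": 80},
]

def delivery_performance(orders):
    """Percentage of orders fulfilled on or before the requested date."""
    on_time = sum(1 for o in orders if o["delivered"] <= o["requested"])
    return 100.0 * on_time / len(orders)

def initial_fill_rate(orders):
    """Percentage of ordered quantity shipped in the supplier's first shipment
    (a quantity-based reading of the initial fill rate)."""
    shipped = sum(o["qty_first_shipment"] for o in orders)
    ordered = sum(o["qty_ordered"] for o in orders)
    return 100.0 * shipped / ordered

print(f"Delivery performance: {delivery_performance(orders):.0f}%")  # 67%
print(f"Initial fill rate:    {initial_fill_rate(orders):.0f}%")     # 96%
```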

Like the commercial manufacturer, the intelligence enterprise operates a supply chain that “manufactures” all-source intelligence products from raw sources of intelligence data and relies on single-source suppliers (i.e., imagery, signals, or human reports).

3.7.3 Business Intelligence

The BI function provides all levels of the organization with relevant information on internal operations and the external business environment (via marketing) to be exploited (analyzed and applied) to gain a competitive advantage. The BI function serves to provide strategic insight into overall enterprise operations based on ready access to operating data.

The emphasis of BI is on explicit data capture, storage, and analysis; through the 1990s, BI was the predominant driver for the implementation of corporate data warehouses, and the development of online analytic processing (OLAP) tools. (BI preceded KM concepts, and the subsequent introduction of broader KM concepts added the complementary need for capture and analysis of tacit and explicit knowledge throughout the enterprise.)

The intelligence BI function should collect and analyze real-time workflow data to provide answers to questions such as these (the first is sketched in code after the list):

  • What are the relative volumes of requests (for intelligence) by type?
  • What is the “cost” of each category of intelligence product?
  • What are the relative transaction costs of each stage in the supply chain?
  • What are the trends in usage (by consumers) of all forms of intelligence over the past 12 months? Over the past 6 months? Over the past week?
  • Which single sources of incoming intelligence (e.g., SIGINT, IMINT, and MASINT) have greatest utility in all-source products, by product category?
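The first of these questions reduces to a simple aggregation over workflow records, as the hypothetical sketch below suggests; the request types and consumers are illustrative only.

```python
from collections import Counter

# Hypothetical workflow log of intelligence requests by product type.
requests = [
    {"type": "warning", "consumer": "J2"},
    {"type": "target_folder", "consumer": "J3"},
    {"type": "warning", "consumer": "policy"},
    {"type": "estimate", "consumer": "policy"},
    {"type": "warning", "consumer": "J2"},
]

volumes = Counter(r["type"] for r in requests)
total = sum(volumes.values())
for req_type, count in volumes.most_common():
    print(f"{req_type:<14} {count:>3}  ({100 * count / total:.0f}%)")
```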

Like its commercial counterpart, the intelligence BI function should not only track operational flows; it should also track the history of operational decisions—and their effects.

Both operational and decision-making data should be conveniently navigable and analyzable to provide timely operational insight to senior leadership, who often ask, “What is the cost of a pound of intelligence?”

3.8 Summary

KM provides a strategy and organizational discipline for integrating people, processes, and IT into an effective enterprise.

As noted by Tom Davenport, a leading observer of the discipline:

The first generation of knowledge management within enterprises emphasized the “supply side” of knowledge: acquisition, storage, and dissemination of business operations and customer data. In this phase knowledge was treated much like physical resources and implementation approaches focused on building “warehouses” and “channels” for supply processing and distribution. This phase paid great attention to systems, technology and infrastructure; the focus was on acquiring, accumulating and distributing explicit knowledge in the enterprise [35].

Second-generation KM has turned attention to the demand side of the knowledge economy—seeking to identify value in the collected data to allow the enterprise to add value from the knowledge base, enhance the knowledge spiral, and accelerate innovation. This generation has brought more focus to people (the organization) and the value of tacit knowledge; the issues of sustainable knowledge creation and dissemination throughout the organization are emphasized in this phase. The attention in this generation has moved from understanding knowledge systems to understanding knowledge workers. The third generation to come may be that of KM innovation, in which the knowledge process is viewed as a complete life cycle within the organization and the emphasis turns to revolutionizing the organization and reducing the knowledge cycle time to adapt to an ever-changing world environment.

 

4

The Knowledge-Based Intelligence Organization

National intelligence organizations following World War II were characterized by compartmentalization (insulated specialization for security purposes) that required individual learning, critical analytic thinking, and problem solving by small, specialized teams working in parallel (stovepipes or silos). These stovepipes were organized under hierarchical organizations that exercised central control. The approach was appropriate for the centralized organizations and bipolar security problems of the relatively static Cold War, but the global breadth and rapid dynamics of twenty-first century intelligence problems require more agile networked organizations that apply organization-wide collaboration to replace the compartmentalization of the past. Founded on the virtues of integrity and trust, the disciplines of organizational collaboration, learning, and problem solving must be developed to support distributed intelligence collection, analysis, and production.

This chapter focuses on the most critical factor in organizational knowledge creation—the people, their values, and organizational disciplines. The chapter is structured to proceed from foundational virtues, structures, and communities of practice (Section 4.1) to the four organizational disciplines that support the knowledge creation process: learning, collaboration, problem solving, and best practices—called intelligence tradecraft.

The people perspective of KM presented in this chapter can be contrasted with the process and technology perspectives (Table 4.1) in five ways:

  1. Enterprise focus. The focus is on the values, virtues, and mission shared by the people in the organization.
  2. Knowledge transaction. Socialization, the sharing of tacit knowledge by methods such as story and dialogue, is the essential mode of transaction between people for collective learning, or collaboration to solve problems.
  3. The basis for human collaboration lies in shared purpose, values, and a common trust.
  4. A culture of trust develops communities that share their best practices and experiences; collaborative problem solving enables the growth of the trusting culture.
  5. The greatest barrier to collaboration is the inability of an organization’s culture to transform and embrace the sharing of values, virtues, and disciplines.

The numerous implementation failures of early-generation KM enterprises have most often occurred because organizations have not embraced the new business models introduced, nor have they used the new systems to collaborate. As a result, these KM implementations have failed to deliver the intellectual capital promised. These cases were generally not failures of process, technology, or infrastructure; rather, they were failures of organizational culture change to embrace the new organizational model. In particular, they failed to address the cultural barriers to organizational knowledge sharing, learning, and problem solving.

Numerous texts have examined these implementation challenges, and all have emphasized that organizational transformation must precede KM system implementations.

4.1 Virtues and Disciplines of the Knowledge-Based Organization

At the core of an agile knowledge-based intelligence organization is the ability to sustain the creation of organizational knowledge through learning and collaboration. Underlying effective collaboration are values and virtues that are shared by all. The U.S. IC, recognizing the need for such agility as its threat environment changes, has adopted knowledge-based organizational goals as the first two of five objectives in its Strategic Intent:

  • Unify the community through collaborative processes. This includes the implementation of training and business processes to develop an inter-agency collaborative culture and the deployment of supporting technologies.
  • Invest in people and knowledge. This area includes the assessment of customer needs and the conduct of events (training, exercises, experiments, and conferences/seminars) to develop communities of practice and build expertise in the staff to meet those needs. Supporting infrastructure developments include the integration of collaborative networks and shared knowledge bases.

Clearly identified organizational propositions of values and virtues (e.g., integrity and trust) shared by all enable knowledge sharing—and form the basis for organizational learning, collaboration, problem solving, and best-practices (intelligence tradecraft) development introduced in this chapter. This is a necessary precondition before KM infrastructure and technology are introduced to the organization. The intensely human values, virtues, and disciplines introduced in the following sections are essential and foundational to building an intelligence organization whose business processes are based on the value of shared knowledge.

4.1.1 Establishing Organizational Values and Virtues

The foundation of all organizational discipline (ordered, self-controlled, and structured behavior) is a common purpose and set of values shared by all. For an organization to pursue a common purpose, the individual members must conform to a common standard and a common set of ideals for group conduct.

The knowledge-based intelligence organization is a society that requires virtuous behavior of its members to enable collaboration. Dorothy Leonard-Barton, in Wellsprings of Knowledge, distinguishes two categories of values: those that relate to basic human nature and those that relate to performance of the task. In the first category are big V values (also called moral virtues) that include basic human traits such as personal integrity (consistency, honesty, and reliability), truthfulness, and trustworthiness. For the knowledge worker’s task, the second category (of little v values) includes those values long sought by philosophers to arrive at knowledge or justify true belief. Some epistemologies define intellectual virtue as the foundation of knowledge: Knowledge is a state of belief arising out of intellectual virtue. Intellectual virtues include organizational conformity to a standard of right conduct in the exchange of ideas, in reasoning and in judgment.

Organizational integrity is dependent upon the individual integrity of all contributors—as participants cooperate and collaborate around a central purpose, the virtue of trust (built upon the shared trustworthiness of individuals) opens the doors of sharing and exchange. Essential to this process is the development of networks of conversations that are built on communication transactions (e.g., assertions, declarations, queries, or offers) that are ultimately based in personal commitments. Ultimately, the virtue of organizational wisdom—seeking the highest goal by the best means—must be embraced by the entire organization recognizing a common purpose.

Trust and cooperative knowledge sharing must also be complemented by an objective openness. Groups that place consensus over objectivity become subject to certain dangerous decision-making errors.

4.1.2 Mapping the Structures of Organizational Knowledge

Every organization has a structure and flow of knowledge—a knowledge environment or ecology (emphasizing the self-organizing and balancing characteristics of organizational knowledge networks). The overall process of studying and characterizing this environment is referred to as mapping—explicitly representing the network of nodes (competencies) and links (relationships, knowledge flow paths) within the organization. The fundamental role of KM organizational analysis is the mapping of knowledge within an existing organization.

The knowledge mapping identifies the intangible tacit assets of the organization. The mapping process is conducted by a variety of means: passive observation (where the analyst works within the community), active interviewing, formal questionnaires, and analysis. As an ethnographic research activity, the mapping analyst seeks to understand the unspoken, informal flows and sources of knowledge in the day-to-day operations of the organization. The five stages of mapping (Figure 4.1) must be conducted in partnership with the owners, users, and KM implementers.

The first phase is the definition of the formal organization chart—the formal flows of authority, command, reports, intranet collaboration, and information systems reporting. In this phase, the boundaries, or focus, of the mapping interest are established. The second phase audits (identifies, enumerates, and quantifies as appropriate) the following characteristics of the organization:

  1. Knowledge sources—the people and systems that produce and articulate knowledge in the form of conversation, developed skills, reports, implemented (but perhaps not documented) processes, and databases.
  2. Knowledge flowpaths—the flows of knowledge, tacit and explicit, formal and informal. These paths can be identified by analyzing the transactions between people and systems; the participants in the transactions provide insight into the organizational network structure by which knowledge is created, stored, and applied. The analysis must distinguish between seekers and providers of knowledge and their relationships (e.g., trust, shared understanding, or cultural compatibility) and mutual benefits in the transaction.
  3. Boundaries and constraints—the boundaries and barriers that control, guide, or constrict the creation and flow of knowledge. These may include cultural, political (policy), personal, or electronic system characteristics or incompatibilities.
  4. Knowledge repositories—the means of maintaining organizational knowledge, including tacit repositories (e.g., communities of experts that share experience about a common practice) and explicit storage (e.g., legacy hardcopy reports in library holdings, databases, or data warehouses).

Once the audit is complete, the data is organized in the third phase by clustering the categories of knowledge, nodes (sources and sinks), and links unique to the organization. The structure of this organization, usually a table or a spreadsheet, provides insight into the categories of knowledge, transactions, and flow paths; it provides a format to review with organization members to convey initial results, make corrections, and refine the audit. This phase also provides the foundation for quantifying the intellectual capital of the organization, and the audit categories should follow the categories of the intellectual capital accounting method adopted.

The fourth phase, mapping, transforms the organized data into a structure (often, but not necessarily, graphical) that explicitly identifies the current knowledge network. Explicit and tacit knowledge flows and repositories are distinguished, as well as the social networks that support them. This process of visualizing the structure may also identify clusters of expertise, gaps in the flows, chokepoints, as well as areas of best (and worst) practices within the network.

Once the organization's current structure is understood, the structure can be compared to similar structures in other organizations by benchmarking in the final phase. Benchmarking is the process of identifying, learning, and adapting outstanding practices and processes from any organization, anywhere in the world, to help an organization improve its performance. Benchmarking gathers the tacit knowledge—the know-how, judgments, and enablers—that explicit knowledge often misses. This process allows quantitative performance data and qualitative best-practice knowledge to be shared and compared with similar organizations to explore areas for potential improvement and potential risks.
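To make the mapping product concrete, the following minimal Python sketch represents an audited knowledge network as nodes (competencies, repositories) and links (flow paths), with barriers recorded so that candidate chokepoints can be listed. The field names and toy entries are hypothetical illustrations under the audit categories above, not part of any published mapping method.

from dataclasses import dataclass, field

# Minimal sketch of a knowledge map: nodes are competencies or repositories,
# links are knowledge flow paths identified during the audit phase.

@dataclass
class Node:
    name: str
    kind: str          # e.g., "person", "team", "database", "community"
    knowledge: str     # "tacit" or "explicit"

@dataclass
class Link:
    source: str
    target: str
    flow: str          # e.g., "report", "dialogue", "database query"
    barrier: str = ""  # cultural, policy, or technical constraint, if any

@dataclass
class KnowledgeMap:
    nodes: dict = field(default_factory=dict)
    links: list = field(default_factory=list)

    def add_node(self, node: Node):
        self.nodes[node.name] = node

    def add_link(self, link: Link):
        self.links.append(link)

    def chokepoints(self):
        # Crude indicator: flow paths recorded with a barrier, grouped by source node.
        blocked = {}
        for link in self.links:
            if link.barrier:
                blocked.setdefault(link.source, []).append(link)
        return blocked

# Hypothetical audit entries
kmap = KnowledgeMap()
kmap.add_node(Node("imagery_cell", "team", "tacit"))
kmap.add_node(Node("report_archive", "database", "explicit"))
kmap.add_link(Link("imagery_cell", "report_archive", "report"))
kmap.add_link(Link("imagery_cell", "all_source_team", "dialogue", barrier="compartmentalization"))

print(kmap.chokepoints())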

Because the repository provides a pointer to the originating authors, it also serves as a directory that identifies people within the agency with experience and expertise, indexed by subject.

4.1.3 Identifying Communities of Organizational Practice

A critical result of any mapping analysis is the identification of the clusters of individuals who constitute formal and informal groups that create, share, and maintain tacit knowledge on subjects of common interest.

The functional workgroup benefits from stability, established responsibilities, processes and storage, and high potential for sharing. Functional workgroups provide the high-volume knowledge production of the organization but lack the agility to respond to projects and crises.

Cross-functional project teams are shorter term project groups that can be formed rapidly (and dismissed just as rapidly) to solve special intelligence problems, maintain special surveillance watches, prepare for threats, or respond to crises. These groups include individuals from all appropriate functional disciplines—with the diversity often characteristic of the makeup of the larger organization, but on a small scale—with reach back to expertise in functional departments.

KM researchers have recognized that such organized communities provide a significant contribution to organizational learning by providing a forum for:

  • Sharing current problems and issues;
  • Capturing tacit experience and building repositories of best practices;
  • Linking individuals with similar problems, knowledge, and experience;
  • Mentoring new entrants to the community and other interested parties.

Because participation in communities of practice is based on individual interest, not organizational assignment, these communities may extend beyond the duration of temporary assignments and cut across organizational boundaries.

The activities of working, learning, and innovating have traditionally been treated as independent (and conflicting) activities performed in the office, in the classroom, and in the lab. However, studies by John Seely Brown, chief scientist of Xerox PARC, have indicated that once these activities are unified in communities of practice, they have the potential to significantly enhance knowledge transfer and creation.

4.1.4 Initiating KM Projects

The knowledge mapping and benchmarking process must precede implementation of KM initiatives, forming the understanding of current competencies and processes and the baseline for measuring any benefits of change. KM implementation plans within intelligence organizations generally consider four components, framed by the kind of knowledge being addressed and the areas of investment in KM initiatives:

  1. Organizational competencies. The first area includes assessment of workforce competencies and forms the basis of an intellectual capital audit of human capital. This area also includes the capture of best practices (the intelligence business processes, or tradecraft) and the development of core competencies through training and education.
  2. Social collaboration. Initiatives in this area reinforce established face-to-face communities of practice and develop new communities. These activities enhance the socialization process through meetings and media (e.g., newsletters, reports, and directories).
  3. KM networks. Infrastructure initiatives implement networks (e.g., corporate intranets) and processes (e.g., databases, groupware, applications, and analytic tools) to provide for the capture and exchange of explicit knowledge.
  4. Virtual collaboration. The emphasis in this area is applying technology to create connectivity among and between communities of practice. Intranets and collaboration groupware (discussed in Section 4.3.2) enable collaboration at different times and places for virtual teams—and provide the ability to identify and introduce communities with similar interests that may be unaware of each other.

4.1.5 Communicating Tacit Knowledge by Storytelling

The KM community has recognized the strength of narrative communication—dialogue and storytelling—to communicate the values, emotion (feelings, passion), and sense of immersed experience that make up personalized, tacit knowledge.

 

The introduction of KM initiatives can bring significant organizational change because it may require cultural transitions in several areas:

  • Changes in purpose, values, and collaborative virtues;
  • Construction of new social networks of trust and communication;
  • Organizational structure changes (networks replace hierarchies);
  • Business process agility, resulting in a new culture of continual change (training to adopt new procedures and to create new products).

All of these changes require participation by the workforce and the communication of tacit knowledge across the organization.

Storytelling provides a complement to abstract, analytical thinking and communication, allowing humans to share experience, insight, and issues (e.g., unarticulated concerns about evidence expressed as “negative feelings,” or general “impressions” about repeated events not yet explicitly defined as threat patterns).

The organic school of KM that applies storytelling to cultural transformation emphasizes a human behavioral approach to organizational socialization, accepting the organization as a complex ecology that may be changed in a large way by small effects.

These effects include the use of a powerful, effective story that communicates in a way that spreads credible tacit knowledge across the entire organization.

This school classifies tacit knowledge into artifacts, skills, heuristics, experience, and natural talents (the so-called ASHEN classification of tacit knowledge) and categorizes an organization's tacit knowledge in these classes to understand the flow within informal communities.

Nurturing informal sharing within secure communities of practice and distinguishing such sharing from formal sharing (e.g., shared data, best practices, or eLearning) enables the rich exchange of tacit knowledge when creative ideas are fragile and emergent.

4.2 Organizational Learning

Senge asserted that the fundamental distinction between traditional controlling organizations and adaptive self-learning organizations lies in five key disciplines, including both virtues (commitment to personal and team learning, vision sharing, and organizational trust) and skills (developing holistic thinking, team learning, and tacit mental model sharing). Senge's core disciplines, moving from the individual to organizational disciplines, included:

• Personal mastery. Individuals must be committed to lifelong learning toward the end of personal and organizational growth. The desire to learn must be aimed at clarifying one's personal vision and role within the organization.

• Systems thinking. Senge emphasized holistic thinking, the approach for high-level study of life situations as complex systems. An element of learning is the ability to study interrelationships within complex dynamic systems and explore and learn to recognize high-level patterns of emergent behavior.

• Mental models. Senge recognized the importance of tacit knowledge (mental, rather than explicit, models) and its communication through the process of socialization. The learning organization builds shared mental models by sharing tacit knowledge in the storytelling process and the planning process. Senge emphasized planning as a tacit-knowledge-sharing process that causes individuals to envision, articulate, and share solutions—creating a common understanding of goals, issues, alternatives, and solutions.

• Shared vision. The organization that shares a collective aspiration must learn to link together personal visions without conflicts or competition, creating a shared commitment to a common organizational goal set.

• Team learning. Finally, a learning organization acknowledges and understands the diversity of its makeup—and adapts its behaviors, patterns of interaction, and dialogue to enable growth in personal and organizational knowledge.

It is important, here, to distinguish the kind of transformational learning that Senge was referring to (which brings cultural change across an entire organization) from the smaller-scale group learning that takes place when an intelligence team or cell conducts a long-term study or must rapidly “get up to speed” on a new subject or crisis.

4.2.1 Defining and Measuring Learning

The process of group learning and personal mastery requires the development of both reasoning and emotional skills. The level of learning achievement can be assessed by the degree to which those skills have been acquired.

The taxonomy of cognitive and affective skills can be related to explicit and tacit knowledge categories, respectively, to provide a helpful scale for measuring the level of knowledge achieved by an individual or group on a particular subject.

4.2.2 Organizational Knowledge Maturity Measurement

The goal of organizational learning is the development of maturity at the organizational level—a measure of the state of an organization’s knowledge about its domain of operations and its ability to continuously apply that knowledge to increase corporate value to achieve business goals.

Carnegie Mellon University's Software Engineering Institute has defined the People Capability Maturity Model® (P-CMM®), which distinguishes five levels of organizational maturity that can be measured to assess and quantify the maturity of the workforce and its organizational KM performance. The P-CMM® framework can be applied, for example, to an intelligence organization's analytic unit to measure current maturity and develop a strategy to reach higher levels of performance. The levels are successive plateaus of practice, each building on the preceding foundation.
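As a rough illustration, the five P-CMM® plateaus (commonly labeled Initial, Managed, Defined, Predictable, and Optimizing in the published model) can be treated as an ordered scale: a unit reaches a level only when that level's practices, and every lower level's, are judged to be in place. The Python sketch below assumes that reading; the practice names are abbreviated placeholders, not the SEI's formal assessment instrument.

from enum import IntEnum

class Maturity(IntEnum):
    # Ordered plateaus of the People CMM; names follow the published model.
    INITIAL = 1
    MANAGED = 2
    DEFINED = 3
    PREDICTABLE = 4
    OPTIMIZING = 5

# Hypothetical self-assessment checklist: a level is reached only if all of its
# example practices (and those of every lower level) are judged to be in place.
practices = {
    Maturity.MANAGED:     ["staffing", "training", "performance management"],
    Maturity.DEFINED:     ["competency analysis", "workgroup development"],
    Maturity.PREDICTABLE: ["competency integration", "quantitative performance mgmt"],
    Maturity.OPTIMIZING:  ["continuous capability improvement"],
}

def assess(in_place: set) -> Maturity:
    level = Maturity.INITIAL
    for target in sorted(practices):
        if all(p in in_place for p in practices[target]):
            level = target
        else:
            break
    return level

print(assess({"staffing", "training", "performance management", "competency analysis"}))
# -> Maturity.MANAGED (workgroup development missing, so level 3 is not reached)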

4.2.3 Learning Modes

4.2.3.1 Informal Learning

We gain experience by informal modes of learning on the job: alone, with mentors or team members, or while mentoring others. The methods of informal learning are as broad as the methods of exchanging knowledge introduced in the last chapter. But the essence of the learning organization is the ability to translate what has been learned into changed organizational behavior. David Garvin has identified five fundamental organizational methodologies that are essential to implementing the feedback from learning to change; all have direct application in an intelligence organization.

  1. Systematic problem solving. Organizations require a clearly defined methodology for describing and solving problems, and then for implementing the solutions across the organization. Methods for acquiring and analyzing data, synthesizing hypotheses, and testing new ideas must be understood by all to permit collaborative problem solving. The process must also allow for the communication of lessons learned and best practices developed (the intelligence tradecraft) across the organization.
  2. Experimentation. As the external environment changes, the organization must be enabled to explore changes in the intelligence process. This is done by conducting experiments that take excursions from the normal processes to attack new problems and evaluate alternative tools and methods, data sources, or technologies. A formal policy to encourage experimentation, with the acknowledgment that some experiments will fail, allows new ideas to be tested, adapted, and adopted in the normal course of business, not as special exceptions. Experimentation can be performed within ongoing programs (e.g., use of new analytic tools by an intelligence cell) or in demonstration programs dedicated to exploring entirely new ways of conducting analysis (e.g., the creation of a dedicated Web-based pilot project independent of normal operations and dedicated to a particular intelligence subject domain).
  3. Internal experience. As collaborating teams solve a diversity of intelligence problems, experimenting with new sources and methods, the lessons that are learned must be exchanged and applied across the organization. This process of explicitly codifying lessons learned and making them widely available for others to adopt seems trivial, but in practice requires significant organizational discipline. One of the great values of communities of common practice is their informal exchange of lessons learned; organizations need such communities and must support formal methods that reach beyond these communities. Learning organizations take the time to elicit the lessons from project teams and explicitly record (index and store) them for access and application across the organization. Such databases allow users to locate teams with similar problems and lessons learned from experimentation, such as approaches that succeeded and failed, expected performance levels, and best data sources and methods.
  4. External sources of comparison. While the lessons learned just described apply to self-learning, intelligence organizations must look to external sources (in the commercial world, academia, and other cooperating intelligence organizations) to gain different perspectives and experiences not possible within their own organizations. A wide variety of methods can be employed to secure knowledge from external perspectives, such as making acquisitions (in the business world), establishing strategic relationships, using consultants, and establishing consortia. The process of sharing, then critically comparing, qualitative and quantitative data about processes and performance across organizations (or units within a large organization) enables leaders and process owners to objectively review the relative effectiveness of alternative approaches. Benchmarking is the process of improving performance by continuously identifying, understanding, and adapting outstanding practices and processes found inside and outside the organization [23]. The benchmarking process is an analytic process that requires compared processes to be modeled, quantitatively measured, deeply understood, and objectively evaluated. The insight gained is an understanding of how best performance is achieved; the knowledge is then leveraged to predict the impact of improvements on overall organizational performance.
  5. Transferring knowledge. Finally, an intelligence organization must develop the means to transfer people (tacit transfer of skills, experience, and passion by rotation, mentoring, and integrating process teams) and processes (explicit transfer of data, information, business processes on networks) within the organization. In Working Knowledge [24], Davenport and Prusak point out that spontaneous, unstructured knowledge exchange (e.g., discussions at the water cooler, exchanges among informal communities of interest, and discussions at periodic knowledge fairs) is vital to an organization’s success, and the organization must adopt strategies to encourage such sharing.

4.2.3.2 Formal Learning

In addition to informal learning, formal modes provide the classical introduction to subject-matter knowledge.

Information technologies have enabled four distinct learning modes that are defined by distinguishing both the time and space of interaction between the learner and the instructor:

  1. Residential learning (RL). Traditional residential learning places the students and instructor in the physical classroom at the same time and place. This proximity allows direct interaction between the student and instructor and allows the instructor to tailor the material to the students.
  2. Distance learning remote (DL-remote). Remote distance learning provides live transmission of the instruction to multiple, distributed locations. The mode effectively extends the classroom across space to reach a wider student audience. Two-way audio and video can permit limited interaction between extended classrooms and the instructor.
  3. Distance learning canned (DL-canned). This mode simply packages (or cans) the instruction in some media for later presentation at the student's convenience (e.g., traditional hardcopy texts, recorded audio or video, or softcopy materials on compact discs). DL-canned materials include computer-based training courseware that has built-in features to interact with the student to test comprehension, adaptively present material to meet a student's learning style, and link to supplementary materials on the Internet.
  4. Distance learning collaborative (DL-collaborative). The collaborative mode of learning (often described as e-learning) integrates canned material while allowing on-line asynchronous interaction between the student and the instructor (e.g., via e-mail, chat, or videoconference). Collaboration may also occur between the student and software agents (personal coaches) that monitor progress, offer feedback, and recommend effective paths to on-line knowledge.

4.3 Organizational Collaboration

The knowledge-creation process of socialization occurs as communities (or teams) of people collaborate (commit to communicate, share, and diffuse knowledge) to achieve a common purpose.

Collaboration is a stronger term than cooperation because participants are formed around and committed to a common purpose, and all participate in shared activity to achieve the end. If a problem is parsed into independent pieces (e.g., financial analysis, technology analysis, and political analysis), cooperation may be necessary—but not collaboration. At the heart of collaboration is intimate participation by all in the creation of the whole—not in cooperating to merely contribute individual parts to the whole.

 

Collaboration is widely believed to have the potential to enable teams to perform a wide range of functions together:

  • Coordinate tasking and workflow to meet shared goals;
  • Share information, beliefs, and concepts;
  • Perform cooperative problem-solving analysis and synthesis;
  • Perform cooperative decision making;
  • Author team reports of decisions and rationale.

This process of collaboration requires a team (two or more individuals) that shares a common purpose, enjoys mutual respect and trust, and has an established process to allow the collaboration to take place. Four levels (or degrees) of intelligence collaboration can be distinguished, moving toward increasing degrees of interaction and dependence among team members.

Sociologists have studied the sequence of collaborative groups as they move from inception to decision commitment. Decision emergence theory (DET) defines four stages of collaborative decision making within an individual group: orientation of all members to a common perspective; conflict, during which alternatives are compared and competed; emergence of collaborative alternatives; and finally reinforcement, when members develop consensus and commitment to the group decisions.

4.3.1 Collaborative Culture

First among the means to achieve collaboration is the creation of a collaborating culture—a culture that shares the belief that collaboration (as opposed to competition or other models) is the best approach to achieve a shared goal and that shares a commitment to collaborate to achieve organizational goals.

The collaborative culture must also recognize that teams are heterogeneous in nature. Team members have different tacit (experience, personality style) and cognitive (reasoning style) preferences that influence their unique approach to participating in the collaborative process.

The mix of personalities within a team must be acknowledged and rules of collaborative engagement (and even groupware) must be adapted to allow each member to contribute within the constraints and strengths of their individual styles.

Collaboration facilitators may use Myers-Briggs or other categorization schemes to analyze a particular team's structure to assess the team's strengths, weaknesses, and overall balance.

4.3.2 Collaborative Environments

Collaborative environments describe the physical, temporal, and functional setting within which organizations interact.

4.3.3 Collaborative Intelligence Workflow

The representative team includes:

• Intelligence consumer. The State Department personnel requesting the analysis define high-level requirements and are the ultimate customers for the intelligence product. They specify what information is needed: the scope or breadth of coverage, the level of depth, the accuracy required, and the timeframe necessary for policy making.

• All-source analytic cell. The all-source analysis cell, which may be a distributed virtual team across several different organizations, has the responsibility to produce the intelligence product and certify its accuracy.

• Single-source analysts. Open-source and technical-source analysts (e.g., imagery, signals, or MASINT) are specialists that analyze the raw data collected as a result of special tasking; they deliver reports to the all-source team and certify the conclusions of special analysis.

• Collection managers. The collection managers translate all-source requests for essential information (e.g., surveillance of shipping lines, identification of organizations, or financial data) into specific collection tasks (e.g., schedules, collection parameters, and coordination between different sources). They provide the all-source team with a status of their ability to satisfy the team’s requests.

4.3.3.3 The Collaboration Paths

  1. Problem statement. Interacting with the all-source analytic leader (LDR)—and all-source analysts on the analytic team—the problem is articulated in terms of scope (e.g., area of world, focus nations, and expected depth and accuracy of estimates), needs (e.g., specific questions that must be answered and policy issues), urgency (e.g., time to first results and final products), and expected format of results (e.g., product as emergent results portal or softcopy document).

  2. Problem refinement. The analytic leader (LDR) frames the problem with an explicit description of the consumer requirements and intelligence reporting needs. This description, once approved by the consumer, forms the terms of reference for the activity. The problem statement-refinement loop may be iterated as the situation changes or as intelligence reveals new issues to be studied.
  3. Information requests to collection tasking. Based on the requirements, the analytic team decomposes the problem to deduce specific elements of information needed to model and understand the level of trafficking. (The decomposition process was described earlier in Section 2.4.) The LDR provides these intelligence data requirements to the collection manager (CM) to prepare a collection plan. This planning requires the translation of information needs to a coordinated set of data-collection tasks for humans and technical collection systems. The CM prepares a collection plan that traces planned collection data and means to the analytic team's information requirements.
  4. Collection refinement. The collection plan is fed back to the LDR to allow the analytic team to verify the completeness and sufficiency of the plan—and to allow a review of any constraints (e.g., limits to coverage, depth, or specificity) or the availability of previously collected relevant data. The information request–collection planning and refinement loop iterates as the situation changes and as the intelligence analysis proceeds. The value of different sources, the benefits of coordinated collection, and other factors are learned by the analytic team as the analysis proceeds, causing adjustments to the collection plan to satisfy information needs.
  5. Cross cueing. The single-source analysts acquire data by searching existing archived data and open sources and by receiving data produced by special collections tasked by the CM. Single-source analysts perform source-unique analysis (e.g., imagery analysis; open-source foreign news report, broadcast translation, and analysis; and human report analysis). As the single-source analysts gain an understanding of the timing of event data, and the relationships between data observed across the two domains, they share these temporal and functional relationships. The cross-cueing collaboration includes one analyst cueing the other to search for corroborating evidence in another domain; one analyst cueing the other to a possible correlated event; or both analysts recommending tasking for the CM to coordinate a special collection to obtain time or functionally correlated data on a specific target. It is important to note that this cross-cueing collaboration, shown here at the single-source analysis level, is also performed within the all-source analysis unit (8), where more subtle cross-source relations may be identified.
  6. Single-source analysis reporting. Single-source analysts report the interim results of analysis to the all-source team, describing the emerging picture of the trafficking networks as well as gaps in information. This path provides the all-source team with an awareness of the progress and contribution of collections, and the added value of the analysis that is delivering an emerging trafficking picture.
  7. Single-source analysis refinement. The all-source team can provide direction for the single-source analysts to focus (“Look into that organization in greater depth”), broaden (“Check out the neighboring countries for similar patterns”), or change (“Drop the study of those shipping lines and focus on rail transport”) the emphasis of analysis and collection as the team gains a greater understanding of the subject. This reporting-refinement collaboration (paths 6 and 7, respectively) precedes publication of analyzed data (e.g., annotated images, annotated foreign reports on trafficking, maps of known and suspect trafficking routes, and lists of known and suspect trafficking organizations) into the analysis base.
  8. All-source analysis collaboration. The all-source team may allocate components of the trafficking-analysis task to individuals with areas of subject matter specialties (e.g., topical components might include organized crime, trafficking routes, finances, and methods), but all contribute to the construction of a single picture of illegal trafficking. The team shares raw and analyzed data in the analysis base, as well as the intelligence products in progress in a collaborative workspace. The LDR approves all product components for release onto the digital production system, which places them onto the intelligence portal for the consumer.

In the initial days, the portal is populated with an initial library of related subject matter data (e.g., open source and intelligence reports and data on illegal trafficking in general). As the analysis proceeds, analytic results are posted to the portal.

4.4 Organizational Problem Solving

Intelligence organizations face a wide range of problems that require planning, searching, and explanation to provide solutions. These problems require reactive solution strategies to respond to emergent situations as well as opportunistic (proactive) strategies to identify potential future problems to be solved (e.g., threat assessments, indications, and warnings).

The process of solving these problems collaboratively requires a defined strategy for groups to articulate a problem and then proceed to collectively develop a solution. In the context of intelligence analysis, organizational problem solving focuses on the following kinds of specific problems:

  • Planning. Decomposing intelligence needs into data requirements, developing analysis-synthesis procedures to apply to the collected data to draw conclusions, and scheduling the coordinated collection of data to meet those requirements.
  • Discovery. Searching and identifying previously unknown patterns (of objects, events, behaviors, or relationships) that reveal new understanding about intelligence targets. (The discovery reasoning approach is inductive in nature, creating new, previously unrevealed hypotheses.)
  • Detection. Searching and matching evidence against previously known target hypotheses (templates). (The detection reasoning approach is deductive in nature, testing evidence against known hypotheses.)
  • Explanation. Estimating (providing mathematical proof under uncertainty) and arguing (providing logical proof under uncertainty) are required to provide an explanation of evidence. Inferential strategies require the description of multiple hypotheses (explanations), the confidence in each one, and the rationale for justifying a decision. Problem-solving descriptions may include the explanation of explicit knowledge via technical portrayals (e.g., graphical representations) and tacit knowledge via narrative (e.g., dialogue and story).

To perform organizational (or collaborative) problem solving in each of these areas, the individuals in the organization must share an awareness of the reasoning and solution strategies embraced by the organization. In each of these areas, organizational training, formal methodologies, and procedural templates provide a framework to guide the thinking process across a group. These methodologies also form the basis for structuring collaboration tools to guide the way teams organize shared knowledge, structure problems, and proceed from problem to solution.

Collaborative intelligence analysis is a difficult form of collaborative problem solving, where the solution often requires the analyst to overcome the efforts of a subject of study (the intelligence target) to both deny the analyst information and provide deliberately deceptive information.

4.4.1 Critical, Structured Thinking

Critical, or structured, thinking is rooted in the development of methods of careful, structured thinking, following the legacy of the philosophers and theologians that diligently articulated their basis for reasoning from premises to conclusions.

Critical thinking is based on the application of a systematic method to guide the collection of evidence, reason from evidence to argument, and apply objective decision-making judgment (Table 4.10). The systematic methodology assures completeness (breadth of consideration), objectivity (freedom from bias in sources, evidence, reasoning, or judgment), consistency (repeatability over a wide range of problems), and rationality (consistency with logic). In addition, critical thinking methodology requires the explicit articulation of the reasoning process to allow review and critique by others. These common methodologies form the basis for academic research, peer review, and reporting—as well as for intelligence analysis and synthesis.

Structured methods that move from problem to solution provide a helpful common framework for groups to communicate knowledge and coordinate a process from problem to solution. The TQM initiatives of the 1980s expanded the practice of teaching entire organizations common strategies for articulating problems and moving toward solutions. A number of general problem-solving strategies have been developed and applied to intelligence applications, for example (moving from general to specific):

  • Kepner-Tregoe™. This general problem-solving methodology, introduced in the classic text The Rational Manager [38] and taught to generations of managers in seminars, has been applied to management, engineering, and intelligence-problem domains. This method carefully distinguishes problem analysis (specifying deviations from expectations, hypothesizing causes, and testing for probable causes) and decision analysis (establishing and classifying decision objectives, generating alternative decisions, and comparing consequences).
  • Multiattribute utility analysis (MAUA). This structured approach to decision analysis quantifies a utility function, or value of all decision factors, as a weighted sum of contributing factors for each alternative decision. Relative weights of each factor sum to unity so the overall utility scale (for each decision option) ranges from 0 to 1 (a minimal numeric sketch of this weighting appears after this list).
  • Alternative competing hypotheses (ACH). This methodology develops and organizes alternative hypotheses to explain evidence, evaluates the evidence across multiple criteria, and provides rationale for reasoning to the best explanation.
  • Lockwood analytic method for prediction (LAMP). This methodology exhaustively structures and scores alternative futures hypotheses for complicated intelligence problems with many factors. The process enumerates, then compares the relative likelihood of courses of action (COAs) for all actors (e.g., military or national leaders) and their possible outcomes. The method provides a structure to consider all COAs while attempting to minimize the exponential growth of hypotheses.
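Of these methods, MAUA reduces to simple arithmetic and can be sketched directly: each alternative's utility is the weighted sum of its factor scores, with weights normalized to sum to one so that utilities fall between 0 and 1. The factors, weights, and scores in the following Python sketch are invented purely for illustration.

# Minimal MAUA sketch: utility of each alternative is the weighted sum of its
# factor scores; weights are normalized to sum to 1, so utilities lie in [0, 1].

weights = {"performance": 0.4, "cost": 0.3, "risk": 0.3}   # hypothetical factors
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Factor scores (0 = worst, 1 = best) for each alternative -- illustrative only.
alternatives = {
    "option_A": {"performance": 0.9, "cost": 0.4, "risk": 0.6},
    "option_B": {"performance": 0.6, "cost": 0.8, "risk": 0.7},
}

def utility(scores: dict) -> float:
    return sum(weights[f] * scores[f] for f in weights)

ranked = sorted(alternatives, key=lambda a: utility(alternatives[a]), reverse=True)
for name in ranked:
    print(f"{name}: utility = {utility(alternatives[name]):.2f}")
# option_A: 0.4*0.9 + 0.3*0.4 + 0.3*0.6 = 0.66
# option_B: 0.4*0.6 + 0.3*0.8 + 0.3*0.7 = 0.69 (ranked first)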

A basic problem-solving process flow (Figure 4.7), which encompasses the essence of each of these approaches, includes five fundamental component stages:

  1. Problem assessment. The problem must be clearly defined, and criteria for decision making must be established at the beginning. The problem, as well as boundary conditions, constraints, and the format of the desired solution, is articulated.
  2. Problem decomposition. The problem is broken into components by modeling the “situation” or context of the problem. If the problem is a corporate need to understand and respond to the research and development initiatives of a particular foreign company, for example, a model of that organization's financial operations, facilities, organizational structure (and research and development staffing), and products is constructed. The decomposition (or analysis) of the problem into the need for different kinds of information necessarily requires the composition (or synthesis) of the model. This models the situation of the problem and provides the basis for gathering more data to refine the problem (refine the need for data) and better understand the context.
  3. Alternative analysis. In concert with problem decomposition, alternative solutions (hypotheses) are conceived and synthesized. Conjecture and creativity are necessary in this stage; the set of solutions is categorized to describe the range of the solution space. In the example of the problem of understanding a foreign company's research and development, these solutions must include alternative explanations of what the competitor might be doing and what business responses should be taken to respond if there is a competitive threat. The competitor analyst must explore the wide range of feasible solutions and associated constraints and variables; alternatives may range from no research and development investment to significant but hidden investment in a new, breakthrough product development. Each solution (or explanation, in this case) must be compared to the model, and this process may cause the model to be expanded in scope, refined, and further decomposed into smaller components.
  4. Decision analysis. In this stage the alternative solutions are applied to the model of the situation to determine the consequences of each solution. In the foreign firm example, consequences are related to both the likelihood of the hypothesis being true and the consequences of actions taken. The decision factors, defined in the first stage, are applied to evaluate the performance, effectiveness, cost, and risk associated with each solution. This stage also reveals the sensitivity of the decision factors to the situation model (and its uncertainties) and may send the analyst back to gather more information about the situation to refine the model [42].
  5. Solution evaluation. The final stage, judgment, compares the outcome of decision analysis with the decision criteria established at the outset. Here, the uncertainties (about the problem, the model of the situation, and the effects of the alternative solutions) are considered and other subjective (tacit) factors are weighed to arrive at a solution decision.

This approach forms the basis for traditional analytic intelligence methods because it provides structure, rationale, and formality. But most recognize that the solid tacit knowledge of an experienced analyst provides a complementary basis—an unspoken confidence that underlies final decisions—that is recognized but not articulated as explicitly as the quantified decision data.

4.4.2 Systems Thinking

In contrast with the reductionism of a purely analytic approach, a more holistic approach to understanding complex processes acknowledges the inability to fully decompose many complex problems into a finite and complete set of linear processes and relationships. This approach, referred to as holism, seeks to understand high-level patterns of behavior in dynamic or complex adaptive systems that transcend complete decomposition (e.g., weather, social organizations, or large-scale economies and ecologies). Rather than being analytic, systems approaches tend to be synthetic—that is, these approaches construct explanations at the aggregate or large scale and compare them to real-world systems under study.

Complexity refers to the property of real-world systems that prohibits any formalism from representing or completely describing their behavior. In contrast with simple systems that may be fully described by some formalism (i.e., mathematical equations that fully describe a real-world process to some level of satisfaction for the problem at hand), complex systems lack a fully descriptive formalism that captures all of their properties, especially global behavior.

Systems of subatomic scale, human organizational systems, and large-scale economies, where very large numbers of independent causes interact in a very large number of ways, are characterized by an inability to model global behavior—and a frustrating inability to predict future behavior.

The expert’s judgment is based not on an external and explicit decomposition of the problem, but on an internal matching of high-level patterns of prior experience with the current situation. The experienced detective as well as the experienced analyst applies such high-level comparisons of current behaviors with previous tacit (unarticulated, even unconscious) patterns gained through experience.

It is important to recognize that analytic and systems-thinking approaches, though contrasting, are usually applied in a complementary fashion by individuals and teams alike. The analytic approach provides the structure, record keeping, and method for articulating decision rationale, while the systems approach guides the framing of the problem, provides the synoptic perspective for exploring alternatives, and provides confidence in judgments.

4.4.3 Naturalistic Decision Making

In times of crisis, when time does not permit the careful methodologies described earlier, humans apply more naturalistic methods that, like the systems-thinking mode, rely entirely on the only basis available—prior experience.

“Uncontrolled, [information] will control you and your staffs … and lengthen your decision-cycle times.” (Insightfully, the Admiral also noted, “You can only manage from your Desktop Computer … you cannot lead from it.”)

While long-term intelligence analysis applies the systematic, critical analytic approaches described earlier, crisis intelligence analysis may be forced to the more naturalistic methods, where tacit experience (via informal on-the-job learning, simulation, or formal learning) and confidence are critical.

4.5 Tradecraft: The Best Practices of Intelligence

The capture and sharing of best practices was developed and matured throughout the 1980s when the total quality movement institutionalized the processes of benchmarking and recording lessons learned. Two forms of best practices and lessons capture and recording are often cited:

  1. Explicit process descriptions. The most direct approach is to model and describe the best collection, analytic, and distribution processes, their performance properties, and applications. These may be indexed, linked, and organized for subsequent reuse by a team posed with similar problems and instructors preparing formal curricula.
  2. Tacit learning histories. The methods of storytelling, described earlier in this chapter, are also applied to develop a “jointly told” story by the team developing the best practice. Once formulated, such learning histories provide powerful tools for oral, interactive exchanges within the organization; the written form of the exchanges may be linked to the best-practice description to provide context.

While explicit best-practices databases explain the how, learning histories provide the context to explain the why of particular processes.

The CIA maintains a product evaluation staff to evaluate intelligence products, learn from the wide range of products produced (estimates, forecasts, technical assessments, threat assessments, and warnings), and maintain a database of best practices for training and distribution to the analytic staff.

4.6 Summary

In this chapter, we have introduced the fundamental cultural qualities, in terms of virtues and disciplines, that characterize the knowledge-based intelligence organization. The emphasis has necessarily been on organizational disciplines—learning, collaborating, problem solving—that provide the agility to deliver accurate and timely intelligence products in a changing environment. The virtues and disciplines require support—technology to support collaboration over time and space, to support the capture and retrieval of explicit knowledge, to enable the exchange of tacit knowledge, and to support the cognitive processes in analytic and holistic problem solving.

5

Principles of Intelligence Analysis and Synthesis

At the core of all knowledge creation are the seemingly mysterious reasoning processes that proceed from the known to the assertion of entirely new knowledge about the previously unknown. For the intelligence analyst, this is the process by which evidence [1], that data determined to be relevant to a problem, is used to infer knowledge about a subject of investigation—the intelligence target. The process must deal with evidence that is often inadequate, undersampled in time, ambiguous, and of questionable pedigree.

We refer to this knowledge-creating discipline as intelligence analysis and the practitioner as analyst. But analysis properly includes both the processes of analysis (breaking things down) and synthesis (building things up).

5.1 The Basis of Analysis and Synthesis

The process known as intelligence analysis employs both the functions of analysis and synthesis to produce intelligence products.

In a criminal investigation, this leads from a body of evidence, through feasible explanations, to an assembled case. In intelligence, the process leads from intelligence data, through alternative hypotheses, to an intelligence product. Along this trajectory, the problem solver moves forward and backward iteratively seeking a path that connects the known to the solution (that which was previously unknown).

Intelligence analysis-synthesis is concerned with financial, political, economic, military, and many other evidential relationships that may not be causal but that provide understanding of the structure and behavior of human, organizational, physical, and financial entities.

Descriptions of the analysis-synthesis processes can be traced from their roots in philosophy and problem solving to applications in intelligence assessments.

Philosophers distinguish between propositions as analytic or synthetic based on the direction in which they are developed. Propositions in which the predicate (conclusion) is contained within the subject are called analytic because the predicate can be derived directly by logical reasoning forward from the subject; the subject is said to contain the solution. Synthetic propositions on the other hand have predicates and subjects that are independent. The synthetic proposition affirms a connection between otherwise independent concepts.

The empirical scientific method applies analysis and synthesis to develop and then to test hypotheses:

  • Observation. A phenomenon is observed and recorded as data.
  • Hypothesis creation. Based upon a thorough study of the data, a working hypothesis is created (by the inductive analysis process or by pure inspiration) to explain the observed phenomena.
  • Experiment development. Based on the assumed hypothesis, the expected results (the consequences) of a test of the hypothesis are synthesized (by deduction).
  • Hypothesis testing. The experiment is performed to test the hypothesis against the data.
  • Verification. When the consequences of the test are confirmed, the hypothesis is verified (as a theory or law, depending upon the degree of certainty).

The analyst iteratively applies analysis and synthesis to move forward from evidence and backward from hypothesis to explain the available data (evidence). In the process, the analyst identifies more data to be collected, critical missing data, and new hypotheses to be explored. This iterative analysis-synthesis process provides the necessary traceability from evidence to conclusion that will allow the results (and the rationale) to be explained with clarity and depth when completed.

 

5.2 The Reasoning Processes

Reasoning processes that analyze evidence and synthesize explanations perform inference (i.e., they create, manipulate, evaluate, modify, and assert belief). We can characterize the most fundamental inference processes by their process and products:

  • Process. The direction of the inference process refers to the way in which beliefs are asserted. The process may move from specific (or particular) beliefs toward more general beliefs, or from general beliefs to assert more specific beliefs.
  • Products. The certainty associated with an inference distinguishes two categories of results of inference. The asserted beliefs that result from inference may be infallible (e.g., an analytic conclusion derived from infallible beliefs by infallible logic is certain) or fallible judgments (e.g., a synthesized judgment is asserted with a measure of uncertainty: “probably true,” “true with 0.95 probability,” or “more likely true than false”).

 

5.2.1 Deductive Reasoning

Deduction is the method of inference by which a conclusion is inferred by applying the rules of a logical system to manipulate statements of belief into new, logically consistent statements of belief. This form of inference is infallible, in that the conclusion (belief) is as certain as the premises (beliefs). It is belief preserving in that conclusions reveal no more than what is expressed in the original premises. Deduction can be expressed in a variety of syllogisms, including the more common forms of propositional logic.
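As a toy illustration of deduction's belief-preserving character, consider the minimal forward-chaining sketch below; the facts and rules are hypothetical (forward chaining itself is revisited in Section 5.3).

```python
# Minimal forward-chaining sketch of deductive inference. Facts and rules are
# hypothetical strings; only beliefs entailed by the premises are asserted.

facts = {"radar_A_high_PRF"}
rules = [({"radar_A_high_PRF"}, "radar_A_fire_control"),
         ({"radar_A_fire_control", "canisters_at_A"}, "SAM_battery_at_A")]

changed = True
while changed:                       # apply modus ponens until no new beliefs appear
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)    # the conclusion is as certain as its premises
            changed = True

print(facts)   # SAM_battery_at_A is not asserted: one of its premises is missing
```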

5.2.2 Inductive Reasoning

Induction is the method of inference by which a more general or more abstract belief is developed from a limited set of observations or instances.

Induction moves from specific beliefs about instances to general beliefs about larger and future populations of instances. It is a fallible means of inference.

The form of induction most commonly applied to extend belief from a sample of instances to a larger population is inductive generalization:

By this method, analysts extend the observations about a limited number of targets (e.g., observations of the money laundering tactics of several narcotics rings within a drug cartel) to a larger target population (e.g., the entire drug cartel).

Inductive prediction extends belief from a population to a specific future sample.

By this method, an analyst may use several observations of behavior (e.g., the repeated surveillance behavior of a foreign intelligence unit) to create a general detection template to be used to detect future surveillance activities by that or other such units. The induction presumes future behavior will follow past patterns.

In addition to these forms, induction can provide a means of analogical reasoning (induction on the basis of analogy or similarity) and inference to relate cause and effect. The basic scientific method applies the principles of induction to develop hypotheses and theories that can subsequently be tested by experimentation over a larger population or over future periods of time. The subject of induction is central to the challenge of developing automated systems that generalize and learn by inducing patterns and processes (rules).
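A minimal sketch of inductive generalization, using the money-laundering example above: tactics observed in a small sample of rings are generalized, fallibly, to the larger cartel. The sample data and the frequency threshold are assumptions for illustration only.

```python
# Minimal sketch of inductive generalization: a pattern observed in a sample of
# instances is extended, as a fallible belief, to the larger population.

from collections import Counter

observed_rings = [                      # tactics observed in a few narcotics rings
    {"shell_companies", "cash_couriers"},
    {"shell_companies", "trade_invoicing"},
    {"shell_companies", "cash_couriers"},
]

counts = Counter(t for ring in observed_rings for t in ring)
generalized = {t for t, n in counts.items() if n / len(observed_rings) >= 0.5}
print(generalized)    # {'shell_companies', 'cash_couriers'} -- a belief, not a certainty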

Koestler uses the term bisociation to describe the process of viewing multiple explanations (or multiple associations) of the same data simultaneously. In the example in the figure, the data projected onto a common plane of discernment represent a simple curved line; projected onto an orthogonal plane, the same data can be explained as a sinusoid. Though undersampled, as much intelligence data is, the sinusoid represents a novel explanation that may remain hidden if the analyst does not explore more than the common, immediate, or simple interpretation.

In a similar sense, the inductive discovery by an intelligence analyst (aha!) may take on many different forms, following the simple geometric metaphor. For example:

  • A subtle and unique correlation between the timing of communications (by traffic analysis) and money transfers of a trading firm may lead to the discovery of an organized crime operation.
  • A single anomalous measurement may reveal a pattern of denial and deception to cover the true activities at a manufacturing facility in which many points of evidence are, in fact, deceptive data “fed” by the deceiver. Only a single piece of anomalous evidence (D5 in the figure) is the clue that reveals the existence of the true operations (a new plane in the figure). The discovery of this new plane will cause the analyst to search for additional evidence to support the deception hypothesis.

Each frame of discernment (or plane in Koestler’s metaphor) is a framework for creating a single hypothesis or a family of multiple hypotheses to explain the evidence. The creative analyst is able to entertain multiple frames of discernment, alternately analyzing possible “fits” and constructing new explanations, exploring the many alternative explanations. This is Koestler’s constructive-destructive process of discovery.

Collaborative intelligence analysis (like collaborative scientific discovery) may produce a healthy environment for creative induction or an unhealthy competitive environment that stifles induction and objectivity. The goal of collaborative analysis is to allow alternative hypotheses to be conceived and objectively evaluated against the available evidence and to guide the tasking for evidence to confirm or disconfirm the alternatives.

5.2.3 Abductive Reasoning

Abduction is the informal or pragmatic mode of reasoning that describes how we “reason to the best explanation” in everyday life. Abduction is the practical description of the interactive use of analysis and synthesis to arrive at a solution or explanation by creating and evaluating multiple hypotheses.

Unlike infallible deduction, abduction is fallible because it is subject to errors (there may be other hypotheses not considered or another hypothesis, however unlikely, may be correct). But unlike deduction, it has the ability to extend belief beyond the original premises. Peirce contended that this is the logic of discovery and is a formal model of the process that scientists apply all the time.

Consider a simple intelligence example that implements the basic abductive syllogism. Data has been collected on a foreign trading company, TraderCo, which indicates its reported financial performance is not consistent with (less than) its level of operations. In addition, a number of its executives have subtle ties with organized crime figures.

The operations of the company can be explained by at least three hypotheses:

Hypothesis (H1)—TraderCo is a legitimate but poorly run business; its board is unaware of a few executives with unhealthy business contacts.

Hypothesis (H2)—TraderCo is a legitimate business with a naïve board that is unaware that several executives who gamble are using the business to pay off gambling debts to organized crime.

Hypothesis (H3)—TraderCo is an organized crime front operation that is trading in stolen goods and laundering money through the business, which reports a loss.

Hypothesis H3 best explains the evidence.

∴ Accept Hypothesis H3 as the best explanation.

Of course, the critical stage of abduction left unexplained in this set of hypotheses is the judgment that H3 is the best explanation. The process requires criteria for ranking hypotheses, a method for judging which is best, and a method to assure that the set of candidate hypotheses covers all possible (or feasible) explanations.
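One way to make that judgment explicit is to score each hypothesis against the evidence it explains and the evidence it contradicts, in the spirit of the correspondence criterion discussed below. The sketch below uses the TraderCo hypotheses; the evidence items, the explanation mapping, and the scoring rule are hypothetical simplifications.

```python
# Minimal sketch of the abductive ranking step: score each hypothesis by the
# evidence it explains minus the evidence it contradicts. All data are hypothetical.

evidence = {"low_reported_earnings", "high_operations_level", "exec_crime_ties"}

explains = {   # hypothesis: (evidence explained, evidence contradicted)
    "H1_poorly_run":  ({"low_reported_earnings"}, {"high_operations_level"}),
    "H2_naive_board": ({"low_reported_earnings", "exec_crime_ties"}, set()),
    "H3_crime_front": (evidence, set()),
}

def score(h):
    explained, contradicted = explains[h]
    return len(explained & evidence) - len(contradicted & evidence)

ranked = sorted(explains, key=score, reverse=True)
print(ranked, "->", ranked[0])   # H3 ranks first: it explains all of the evidence
```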

 

5.2.3.1 Creating and Testing Hypotheses

Abduction introduces the competition among multiple hypotheses, each being an attempt to explain the evidence available. These alternative hypotheses can be compared, or competed on the basis of how well they explain (or fit) the evidence. Furthermore, the created alternative hypotheses provide a means of identifying three categories of evidence important to explanation:

  • Positive evidence. This is evidence revealing the presence of an object, or the occurrence of an event, posited by a hypothesis.
  • Missing evidence. Some hypotheses may fit the available evidence, but the hypothesis “predicts” that additional evidence that should exist if the hypothesis were true is “missing.” Subsequent searches and testing for this evidence may confirm or disconfirm the hypothesis.
  • Negative evidence. Evidence of the nonoccurrence of an event (or the nonexistence of an object) may itself confirm a hypothesis.

5.2.3.2 Hypothesis Selection

Abduction also poses the issue of defining which hypothesis provides the best explanation of the evidence. The criteria for comparing hypotheses, at the most fundamental level, can be based on two principal approaches established by philosophers for evaluating truth propositions about objective reality [18]. The correspondence theory holds that a proposition p is true if “p corresponds to the facts.”

For the intelligence analyst this would equate to “hypothesis h corresponds to the evidence”—it explains all of the pieces of evidence, with no expected evidence missing and no contradictory evidence left out. The coherence theory of truth says that a proposition’s truth consists of its fitting into a coherent system of propositions that make up the hypothesis. Both concepts contribute to practical criteria for evaluating competing hypotheses.

5.3 The Integrated Reasoning Process

The analysis-synthesis process combines each of the fundamental modes of reasoning to accumulate, explore, decompose to fundamental elements, and then fit together evidence. The process also creates hypothesized explanations of the evidence and uses these hypotheses to search for more confirming or refuting elements of evidence to affirm or prune the hypotheses, respectively.

This process of proceeding from an evidentiary pool to detections, explanations, or discovery has been called evidence marshaling because the process seeks to marshal (assemble and organize) the evidence into a representation (a model) that:

  • Detects the presence of evidence that matches previously known premises (or patterns of data);
  • Explains underlying processes that gave rise to the evidence;
  • Discovers new patterns in the evidence—patterns of circumstances or behaviors not known before (learning).

The figure illustrates four basic paths that proceed from the pool of evidence, comprising our three fundamental inference modes and a fourth, feedback path:

  1. Deduction. The path of deduction tests the evidence in the pool against previously known patterns (or templates) that represent hypotheses of activities that we seek to detect. When the evidence fits the hypothesis template, we declare a match. When the evidence fits multiple hypotheses simultaneously, the likelihood of each hypothesis (determined by the strength of evidence for each) is assessed and reported. (This likelihood may be computed probabilistically using Bayesian methods, where evidence uncertainty is quantified as a probability and prior probabilities of the hypotheses are known; a minimal sketch of such a computation follows this list.)
  2. Retroduction. This feedback path, recognized and named by C.S. Peirce as yet another process of reasoning, occurs when the analyst conjectures (synthesizes) a new conceptual hypothesis (beyond the current frame of discernment) that causes a return to the evidence to seek evidence to match (or test) this new hypothesis. The insight Peirce provided is that in the testing of hypotheses, we are often inspired to realize new, different hypotheses that might also be tested. In the early implementation of reasoning systems, the forward path of deduction was often referred to as forward chaining by attempting to automatically fit data to previously stored hypothesis templates; the path of retroduction was referred to as backward chaining, where the system searched for data to match hypotheses queried by an inspired human operator.
  3. Abduction. The abduction process, like induction, creates explanatory hypotheses inspired by the pool of evidence and then, like deduction, attempts to fit items of evidence to each hypothesis to seek the best explanation. In this process, the candidate hypotheses are refined and new hypotheses are conjectured. The process leads to comparison and ranking of the hypotheses, and ultimately the best is chosen as the explanation. As a part of the abductive process, the analyst returns to the pool of evidence to seek support for these candidate explanations; this return path is called retroduction.
  4. Induction. The path of induction considers the entire pool of evidence to seek general statements (hypotheses) about the evidence. Rather than seeking point matches to small sets of evidence, the inductive path conjectures new, generalized explanations of clusters of similar evidence; these generalizations may be tested across the evidence to determine the breadth of applicability before being declared as a new discovery.
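The likelihood assessment mentioned in the deduction path (item 1 above) can be illustrated with a minimal Bayesian sketch; the priors and conditional probabilities below are hypothetical.

```python
# Minimal Bayesian sketch of weighing multiple hypotheses against one item of
# evidence: posterior is proportional to prior times likelihood. Numbers are invented.

priors      = {"H1": 0.7, "H2": 0.2, "H3": 0.1}
likelihoods = {"H1": 0.05, "H2": 0.2, "H3": 0.9}   # P(observed evidence | H)

unnorm    = {h: priors[h] * likelihoods[h] for h in priors}
total     = sum(unnorm.values())
posterior = {h: p / total for h, p in unnorm.items()}
print(posterior)   # the evidence shifts belief toward H3 despite its low prior
```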

5.4 Analysis and Synthesis As a Modeling Process

The fundamental reasoning processes are applied to a variety of practical analytic activities performed by the analyst:

  • Explanation and description. Find and link related data to explain entities and events in the real world.
  • Detection. Detect and identify the presence of entities and events based on known signatures. Detect potentially important deviations, including anomaly detection of changes relative to “normal” or “expected” state or change detection of changes or trends over time.
  • Discovery. Detect the presence of previously unknown patterns in data (signatures) that relate to entities and events.
  • Estimation. Estimate the current qualitative or quantitative state of an entity or event.
  • Prediction. Anticipate future events based on detection of known indicators; extrapolate current state forward, project the effects of linear factors forward, or simulate the effects of complex factors to synthesize possible future scenarios to reveal anticipated and unanticipated (emergent) futures.

In each of these cases, we can view the analysis-synthesis process as an evidence-decomposing and model-building process.

The objective of this process is to sort through and organize data (analyze) and then to assemble (synthesize), or marshal, related evidence to create a hypothesis—an instantiated model that constitutes one feasible representation of the intelligence subject (target). The model is used to marshal evidence, evaluate logical argumentation, and provide a tool for explaining how the available evidence best fits the analyst’s conclusion. The model also serves to help the analyst understand what evidence is missing, what strong evidence supports the model, and where negative evidence might be expected. The terminology we use here can be clarified by the following distinctions:

  • A real intelligence target is abstracted and represented by models.
  • A model has descriptive and stated attributes or properties.
  • A particular instance of a model, populated with evidence-derived and conjectured properties, is a hypothesis.

A target may be described by multiple models, each with multiple instances (hypotheses). For example, if our target is the financial condition of a designated company, we might represent the financial condition with a single financial model in the form of a spreadsheet that enumerates many financial attributes. As data is collected, the model is populated with data elements, some reported publicly and others estimated. We might maintain three instances of the model (legitimate company, faltering legitimate company, and illicit front organization), each being a competing explanation (or hypothesis) of the incomplete evidence. These hypotheses help guide the analyst to identify the data required to refine, affirm, or discard existing hypotheses or to create new hypotheses.
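To ground the target, model, and hypothesis distinction, the sketch below instantiates one hypothetical financial model three times, once per competing explanation of the company; all field names and figures are invented for illustration.

```python
# Minimal sketch of the target / model / hypothesis distinction: one financial
# model of the company, instantiated three times as competing hypotheses.

from dataclasses import dataclass, field

@dataclass
class FinancialModel:
    hypothesis: str                     # which explanation this instance represents
    reported_revenue: float             # populated from collected (public) data
    estimated_revenue: float            # analyst-estimated or conjectured
    notes: list = field(default_factory=list)

instances = [
    FinancialModel("legitimate company", 2.1e6, 2.2e6),
    FinancialModel("faltering legitimate company", 2.1e6, 1.4e6),
    FinancialModel("illicit front organization", 2.1e6, 6.0e6,
                   ["estimate implies unreported cash flow"]),
]

for m in instances:
    gap = m.estimated_revenue - m.reported_revenue
    print(f"{m.hypothesis}: revenue gap {gap:+,.0f}")   # which data would discriminate?
```

The gap between reported and estimated values is the kind of attribute that helps identify the data required to refine, affirm, or discard each hypothesis.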

Explicit model representations provide a tool for collaborative construction, marshaling of evidence, decomposition, and critical examination. Mental and explicit modeling are complementary tools of the analyst; judgment must be applied to balance the use of both.

Former U.S. National Intelligence Officer for Warning (1994–1996) Mary McCarthy has emphasized the importance of explicit modeling to analysis:

Rigorous analysis helps overcome mindset, keeps analysts who are immersed in a mountain of new information from raising the bar on what they would consider an alarming threat situation, and allows their minds to expand other possibilities. Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do [19].

 

The model is an abstract representation that serves two functions:

  1. Model as hypothesis. Based on partial data or conjecture alone, a model may be instantiated as a feasible proposition to be assessed, a hypothesis. In a homicide investigation, each conjecture for “who did it” is a hypothesis, and the associated model instance is a feasible explanation for “how they did it.” The model provides a framework around which data is assembled, a mechanism for examining feasibility, and a basis for exploring data to confirm or refute the hypothesis.
  2. Model as explanation. As evidence (relevant data that fits into the model) is assembled on the general model framework to form a hypothesis, different views of the model provide more robust explanations of that hypothesis. Narrative (story), timeline, organization relationships, resources, and other views may be derived from a common model.

 

 

The process of implementing data decomposition (analysis) and model construction-examination (synthesis) can be depicted in three process phases or spaces of operation (Figure 5.6):

  1. Data space. In this space, data (relevant and irrelevant, certain and ambiguous) are indexed and accumulated. Indexing by time (of collection and arrival), source, content topic, and other factors is performed to allow subsequent search and access across many dimensions.
  2. Argumentation space. The data is reviewed; selected elements of potentially relevant data (evidence) are correlated, grouped, and assembled into feasible categories of explanations, forming a set (structure) of high-level hypotheses to explain the observed data. This process applies exhaustive searches of the data space, accepting some data as relevant and discarding the rest. In this phase, patterns are discovered in the data and lead to the creation of hypotheses even when not all of the data in those patterns is present; examination of the data may also lead to creation of hypotheses by pure conjecture, even though no data yet supports them. The hypotheses are examined to determine what data would be required to reinforce or reject each; hypotheses are ranked in terms of likelihood and needed data (to reinforce or refute). The models are tested and various excursions are examined. This space is the court in which the case is made for each hypothesis, and each is judged for completeness, sufficiency, and feasibility. This examination can lead to requests for additional data, refinements of the current hypotheses, and creation of new hypotheses.
  3. Explanation space. Different “views” of the hypothesis model provide explanations that articulate the hypothesis and relate the supporting evidence. The intelligence report can include a single model and explanation that best fits the data (when data is adequate to assert the single answer) or alternative competing models, as well as the supporting evidence for each and an assessment of the implications of each. Figure 5.6 illustrates several of the views often used: timelines of events, organization-relationship diagrams, annotated maps and imagery, and narrative story lines.

For a single target under investigation, we may create and consider (or entertain) several candidate hypotheses, each with a complete set of model views. If, for example, we are trying to determine the true operations of the foreign company introduced earlier, TraderCo, we may hold several hypotheses:

  1. H1—The company is a legal clothing distributor, as advertised.
  2. H2 —The company is a legal clothing distributor, but company executives are diverting business funds for personal interests.
  3. H3—The company is a front operation to cover organized crime, where hypothesis 3 has two sub-hypotheses:
     • H31—The company is a front for drug trafficking.
     • H32—The company is a front for terrorism money laundering.

In this case, H1, H2, H31, and H32 are the four root hypotheses, and the analyst identifies the need to create an organizational model, an operations flow-process model, and a financial model for each of the four hypotheses—creating 4 × 3 = 12 models.
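The bookkeeping implied by that count can be sketched as a simple cross-product; the hypothesis and model-type names follow the example above, and the empty dictionaries stand in for model content.

```python
# Minimal sketch of the model bookkeeping: one model instance per
# (hypothesis, model type) pair, 4 x 3 = 12 in all.

from itertools import product

hypotheses  = ["H1_legal", "H2_diverted_funds", "H31_drug_front", "H32_terror_laundering"]
model_types = ["organizational", "operations_flow", "financial"]

models = {(h, m): {} for h, m in product(hypotheses, model_types)}  # empty model shells
print(len(models))   # 12
```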

 

5.5 Intelligence Targets in Three Domains

We have noted that intelligence targets may be objects, events, or dynamic processes—or combinations of these. The development of information operations has brought a greater emphasis on intelligence targets that exist not only in the physical domain, but in the realms of information (e.g., networked computers and information processes) and human decision making.

Information operations (IO) are those actions taken to affect an adversary’s information and information systems, while defending one’s own information and information systems. The U.S. Joint Vision 2020 describes the Joint Chiefs of Staff view of the ultimate purpose of IO as “to facilitate and protect U.S. decision-making processes, and in a conflict, degrade those of an adversary”.

The JV2020 builds on the earlier JV2010 [26] and retains its fundamental operational concepts, with two significant refinements that emphasize IO. The first is the expansion of the vision to encompass the full range of operations (nontraditional, asymmetric, unconventional ops), while retaining warfighting as the primary focus. The second refinement moves information superiority concepts beyond technology solutions that deliver information to the concept of superiority in decision making. This means that IO will deliver increased information at all levels and increased choices for commanders. Conversely, it will also reduce information to adversary commanders and diminish their decision options. Core to these concepts and challenges is the notion that IO uniquely requires the coordination of intelligence, targeting, and security in three fundamental realms, or domains, of human activities.

 

These are likewise the three fundamental domains of intelligence targets, and each must be modeled:

  1. The physical domain encompasses the material world of mass and energy. Military facilities, vehicles, aircraft, and personnel make up the principal target objects of this domain. The orders of battle that measure military strength, for example, are determined by enumerating objects of the physical world.
  2. The abstract symbolic domain is the realm of information. Words, numbers, and graphics all encode and represent the physical world, storing and transmitting it in electronic formats, such as radio and TV signals, the Internet, and newsprint. This is the domain that is expanding at unprecedented rates, as global ideas, communications, and descriptions of the world are being represented in this domain. The domain includes the cyberspace that has become the principal means by which humans shape their perception of the world. It interfaces the physical to the cognitive domains.
  3. The cognitive domain is the realm of human thought. This is the ultimate locus of all information flows. The individual and collective thoughts of government leaders and populations at large form this realm. Perceptions, conceptions, mental models, and decisions are formed in this cognitive realm. This is the ultimate target of our adversaries: the realm where uncertainties, fears, panic, and terror can coerce and influence our behavior.

Current IO concepts have appropriately emphasized the targeting of the second domain—especially electronic information systems and their information content. The expansion of networked information systems and the reliance on those systems has focused attention on network-centric forms of warfare. Ultimately, though, IO must move toward a focus on the full integration of the cognitive realm with the physical and symbolic realms to target the human mind.

Intelligence must understand and model the complete system or complex of the targets of IO: the interrelated systems of physical behavior, information perceived and exchanged, and the perception and mental states of decision makers.

Of importance to the intelligence analyst is the clear recognition that most intelligence targets exist in all three domains, and models must consider all three aspects.

The intelligence model of such an organization must include linked models of all three domains—to provide an understanding of how the organization perceives, decides, and communicates through a networked organization, as well as where the people and other physical objects are moving in the physical world. The concepts of detection, identification, and dynamic tracking of intelligence targets apply to objects, events, and processes in all three domains.

5.6 Summary

The analysis-synthesis process proceeds from intelligence analysis to operations analysis and then to policy analysis.

The knowledge-based intelligence enterprise requires the capture and explicit representation of such models to permit collaboration among these three disciplines to achieve the greatest effectiveness and sharing of intellectual capital.

6

The Practice of Intelligence Analysis and Synthesis

This chapter moves from high-level functional flow models toward the processes implemented by analysts.

A practical description of the process by one author summarizes the perspective of the intelligence user:

A typical intelligence production consists of all or part of three main elements: descriptions of the situation or event with an eye to identifying its essential characteristics; explanation of the causes of a development as well as its significance and implications; and the prediction of future developments. Each element contains one or both of these components: data, provided by knowledge and incoming information, and assessment, or judgment, which attempts to fill the gaps in the data.

Consumers expect description, explanation, and prediction; as we saw in the last chapter, the process that delivers such intelligence is based on evidence (data), assessment (analysis-synthesis), and judgment (decision).

6.1 Intelligence Consumer Expectations

The U.S. General Accounting Office (GAO) noted the need for greater clarity in the intelligence delivered in U.S. national intelligence estimates (NIEs) in a 1996 report, enumerating five specific standards for analysis from the perspective of policymakers.

Based on a synthesis of the published views of current and former senior intelligence officials, the reports of three independent commissions, and a CIA publication that addressed the issue of national intelligence estimating, an objective NIE should meet the following standards [2]:

  • [G1]: quantify the certainty level of its key judgments by using percentages or bettors’ odds, where feasible, and avoid overstating the certainty of judgments (note: bettors’ odds state the chance as, for example, “one out of three”);
  • [G2]: identify explicitly its assumptions and judgments;
  • [G3]: develop and explore alternative futures: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred;
  • [G4]: allow dissenting views on predictions or interpretations;
  • [G5]: note explicitly what the IC does not know when the information gaps could have significant consequences for the issues under consideration.

 

The Commission would urge that the [IC] adopt as a standard of its methodology that in addition to considering what they know, analysts consider as well what they know they don’t know about a program and set about filling gaps in their knowledge by:

  • [R1] taking into account not only the output measures of a program, but the input measures of technology, expertise and personnel from both internal sources and as a result of foreign assistance. The type and rate of foreign assistance can be a key indicator of both the pace and objective of a program into which the IC otherwise has little insight.
  • [R2] comparing what takes place in one country with what is taking place in others, particularly among the emerging ballistic missile powers. While each may be pursuing a somewhat different development program, all of them are pursuing programs fundamentally different from those pursued by the US, Russia and even China. A more systematic use of comparative methodologies might help to fill the information gaps.
  • [R3] employing the technique of alternative hypotheses. This technique can help make sense of known events and serve as a way to identify and organize indicators relative to a program’s motivation, purpose, pace and direction. By hypothesizing alternative scenarios a more adequate set of indicators and collection priorities can be established. As the indicators begin to align with the known facts, the importance of the information gaps is reduced and the likely outcomes projected with greater confidence. The result is the possibility for earlier warning than if analysts wait for proof of a capability in the form of hard evidence of a test or a deployment. Hypothesis testing can provide a guide to what characteristics to pursue, and a cue to collection sensors as well.
  • [R4] explicitly tasking collection assets to gather information that would disprove a hypothesis or fill a particular gap in a list of indicators. This can prove a wasteful use of scarce assets if not done in a rigorous fashion. But moving from the highly ambiguous absence of evidence to the collection of specific evidence of absence can be as important as finding the actual evidence [3].

 

 

 

Intelligence consumers want more than estimates or judgments; they expect concise explanations of the evidence and reasoning processes behind judgments with substantiation that multiple perspectives, hypotheses, and consequences have been objectively considered.

They expect a depth of analysis-synthesis that explicitly distinguishes assumptions, evidence, alternatives, and consequences—with a means of quantifying each contribution to the outcomes (judgments).

6.2 Analysis-Synthesis in the Intelligence Workflow

Analysis-synthesis is one process within the intelligence cycle… It represents a process that is practically implemented as a continuum rather than a cycle, with all phases being implemented concurrently and addressing a multitude of different intelligence problems or targets.

The stimulus-hypothesis-option-response (SHOR) model, described by Joseph Wohl in 1986, emphasizes the consideration of multiple perception hypotheses to explain sensed data and assess options for response.

The observe-orient-decide-act (OODA) loop, developed by Col. John Boyd, is a high-level abstraction of the military command and control loop that considers the human decision-making role and its dependence on observation and orientation—the process of placing observations in a perceptual framework for decision making.

The tasking, processing, exploitation, dissemination (TPED) model used by U.S. technical collectors and processors [e.g., the U.S. National Reconnaissance Office (NRO), the National Imagery and Mapping Agency (NIMA), and the National Security Agency (NSA)] distinguishes between the processing elements of the national technical-means intelligence channels (SIGINT, IMINT, and MASINT) and the all-source analytic exploitation roles of the CIA and DIA.

The DoD Joint Directors of Laboratories (JDL) data fusion model is a more detailed technical model that considers the use of multiple sources to produce a common operating picture of individual objects, situations (the aggregate of objects and their behaviors), and the consequences or impact of those situations. The model includes a hierarchy of data correlation and combination processes at four levels (level 0: signal refinement; level 1: object refinement; level 2: situation refinement; level 3: impact refinement) and a corresponding feedback control process (level 4: process refinement) [10]. The JDL model is a functional representation that accommodates automated processes and human processes and provides detail within both the processing and analysis steps. The model is well suited to organize the structure of automated processing stages for technical sensors (e.g., imagery, signals, and radar).

  • Level 0: signal refinement automated processing correlates and combines raw signals (e.g., imagery pixels or radar signals intercepted from multiple locations) to detect objects and derive their location, dynamics, or identity.
  • Level 1: object refinement processing detects individual objects and correlates and combines these objects across multiple sources to further refine location, dynamics, or identity information.
  • Level 2: situation refinement analysis correlates and combines the detected objects across all sources within the background context to produce estimates of the situation—explaining the aggregate of static objects and their behaviors in context to derive an explanation of activities with estimated status, plans, and intents.
  • Level 3: impact refinement analysis estimates the consequences of alternative courses of action.
  • The level 4 process refinement flows are not shown in the figure, though all forward processing levels can provide inputs to refine the process to: focus collection or processing on high-value targets, refine processing parameters to filter unwanted content, adjust database indexing of intermediate data, or improve overall efficiency of the production process. The level 4 process effectively performs the KM business intelligence functions introduced in Section 3.7.
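As a minimal illustration of level 1 object refinement only, the sketch below correlates detections from two hypothetical sources by spatial proximity so that multi-source confirmation can be flagged; the detection data and the correlation gate are invented.

```python
# Minimal sketch of JDL level 1 object refinement: detections from two sources are
# correlated by proximity. Detections and the gate value are hypothetical.

from math import dist

source_a = [("vehicle", (34.010, 44.320)), ("vehicle", (34.550, 44.900))]
source_b = [("vehicle", (34.012, 44.321)), ("radar",   (35.100, 45.000))]

CORRELATION_GATE = 0.01     # crude gate in degrees, for illustration only

for kind, pos in source_a:
    confirmed = any(dist(pos, p) < CORRELATION_GATE for _, p in source_b)
    print(kind, pos, "confirmed by second source" if confirmed else "single source")
```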

The analysis stage employs semiautomated detection and discovery tools to access the data in large databases produced by the processing stage. In general, the processing stage can be viewed as a factory of processors, while the analysis stage is a lower volume shop staffed by craftsmen—the analytic team.

6.3 Applying Automation

Automated processing has been widely applied to level 1 object detection (e.g., statistical pattern recognition) and to a lesser degree to level 2 situation recognition problems (e.g., symbolic artificial intelligence systems) for intelligence applications.

Viewing the problem along two dimensions, the number of nodes (causes) and the number of interactions (which determine the scale of effects) in a dynamic system, the problem space depicts the complexity of the situation being analyzed:

  • Causal diversity. The first dimension relates to the number of causal factors, or actors, that influence the situation behavior.
  • Scale of effects. The second dimension relates to the degree of interaction between actors, or the degree to which causal factors influence the behavior of the situation.

As both dimensions increase, the potential for nonlinear behavior increases, making it more difficult to model the situation being analyzed.

Problems in the simpler categories include the detection of straightforward objects in images, content patterns in text, and emitted-signal matching. More difficult problems in this range include dynamic situations with moderately higher numbers of actors and scales of effects that require qualitative (propositional logic) or quantitative (statistical modeling) reasoning processes.

The most difficult category 3 problems, intractable to fully automated analysis, are those complex situations characterized by high numbers of actors with large-scale interactions that give rise to emergent behaviors.

6.4 The Role of the Human Analyst

The analyst applies tacit knowledge to search through explicit information to create tacit knowledge in the form of mental models and explicit intelligence reports for consumers.

The analysis process requires the analyst to integrate the cognitive reasoning and more emotional sensemaking processes with large bodies of explicit information to produce explicit intelligence products for consumers. To effectively train and equip analysts to perform this process, we must recognize and account for these cognitive and emotional components of comprehension. The complete process includes the automated workflow, which processes explicit information, and the analyst’s internal mental workflow, which integrates the cognitive and emotional modes.

 

Complementary logical and emotional frameworks are based on the current mental model of beliefs and feelings; new information is compared to these frameworks, and the differences have the potential to affirm the model (agreement), refine it through learning (acceptance and model adjustment), or lead to rejection of the new information. Judgment integrates feelings about consequences and values (based on experience) with reasoned alternative consequences and courses of action that construct the meaning of the incoming stimulus. Decision making makes an intellectual-emotional commitment to the impact of the new information on the mental model (acceptance, affirmation, refinement, or rejection).

6.5 Addressing Cognitive Shortcomings

The intelligence analyst is not only confronted with ambiguous information about complex subjects, but is often placed under time pressures and expectations to deliver accurate, complete, and predictive intelligence. Consumer expectations often approach infallibility and omniscience.

In this situation, the analyst must be keenly aware of the vulnerabilities of human cognitive shortcomings and take measures to mitigate the consequences of these deficiencies. The natural limitations in cognition (perception, attention span, short- and long-term memory recall, and reasoning capacity) constrain the objectivity of our reasoning processes, producing errors in our analysis.

In “Combatting Mind-Set,” respected analyst Jack Davis has noted that analysts must recognize the subtle influence of mind-set, the cumulative mental model that distills analysts’ beliefs about a complex subject, and “find strategies that simultaneously harness its impressive energy and limit the potential damage.”

Davis recommends two complementary strategies:

  1. Enhancing mind-set. Creating an explicit representation of the mind-set (externalizing the mental model) allows broader collaboration, evaluation from multiple perspectives, and discovery of subtle biases.
  2. Insuring against mind-set. Maintaining multiple explicit explanations, projections, and opportunity analyses provides insurance against single-point judgments and prepares the analyst to switch to alternatives when discontinuities occur.

Davis has also cautioned analysts to beware the paradox of expertise, a phenomenon that can distract attention from the purpose of an analysis. This error occurs when discordant evidence is present and subject experts, distracted by it, focus on situation analysis (resolving the discordance to understand the subject situation) rather than addressing the impact of the discrepancy on the analysis. In such cases, the analyst must focus on providing value added by addressing what action alternatives exist and what their consequences are in cost-benefit terms.

Heuer emphasized the importance of supporting tools and techniques to overcome natural analytic limitations [20]: “Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.”

6.6 Marshaling Evidence and Structuring Argumentation

Instinctive analysis focuses on a single or limited range of alternatives, moves on a path to satisfy minimum needs (satisficing, or finding an acceptable explanation), and is performed implicitly using tacit mental models. Structured analysis follows the principles of critical thinking introduced in Chapter 4, organizing the problem to consider all reasonable alternatives, systematically and explicitly representing the alternative solutions to comprehensively analyze all factors.

6.6.1 Structuring Hypotheses

6.6.2 Marshaling Evidence and Structuring Arguments

There exist a number of classical approaches to representing hypotheses, marshaling evidence to them, and arguing for their validity. Argumentation structures propositions to move from premises to conclusions. Three perspectives or disciplines of thought have developed the most fundamental approaches to this process.

Each discipline has contributed methods to represent knowledge and to provide a structure for reasoning to infer from data to relevant evidence, through intermediate hypotheses to conclusion. The term knowledge representation refers to the structure used to represent data and show its relevance as evidence, the representation of rules of inference, and the asserted conclusions.

6.6.3 Structured Inferential Argumentation

Philosophers, rhetoricians, and lawyers have long sought accurate means of structuring and then communicating, in natural language, the lines of reasoning that lead from complicated sets of evidence to conclusions. Lawyers and intelligence analysts alike seek to provide a clear and compelling case for their conclusions, reasoned from a mass of evidence about a complex subject.

We first consider the classical forms of argumentation described as informal logic, whereby the argument connects premises to conclusions. The common forms include:

  1. Multiple premises, when taken together, lead to but one conclusion. For example: The radar at location A emits at a high pulse repetition frequency (PRF); when it emits at high PRF, it emits on frequency (F) → the radar at A is a fire control radar.
  2. Multiple premises independently lead to the same conclusion. For example: The radar at A is a fire control radar. Also, location A stores canisters for missiles. → A surface-to-air missile (SAM) battery must be at location A.
  3. A single premise leads to but one conclusion. For example: A SAM battery is located at A → the battery at A must be linked to a command and control (C2) center.
  4. A single premise can support more than one conclusion. For example: The SAM battery could be controlled by the C2 center at golf, or the SAM battery could be controlled by the C2 center at hotel.

 

These four basic forms may be combined to create complex sets of argumentation, as in the simple sequential combination and simplification of these examples:

  • The radar at A emits at a high PRF; when it emits at high PRF, it emits on frequency F, so it must be a fire control radar. Also, location A stores canisters for missiles, so there must be a SAM battery there. The battery at A must be linked to a C2 center. It could be controlled by the C2 centers at golf or at hotel.

The structure of this argument can be depicted as a chain of reasoning or argumentation (Figure 6.7) using the four premise structures in sequence.

Toulmin distinguished six elements of all arguments [24]:

  1. Data (D), at the beginning point of the argument, are the explicit elements of data (relevant data, or evidence) that are observed in the external world.
  2. Claim (C) is the assertion of the argument.
  3. Qualifier (Q) imposes any qualifications on the claim.
  4. Rebuttals (R) are any conditions that may refute the claim.
  5. Warrants (W) are the implicit propositions (rules, principles) that permit inference from data to claim.
  6. Backing (B) are assurances that provide authority and currency to the warrants.

Applying Toulmin’s argumentation scheme requires the analyst to distinguish each of the six elements of argument and to fit them into a standard structure of reasoning—see Figure 6.8(a)—which leads from datum (D) to claim (C). The scheme separates the domain-independent structure from the warrants and backing, which are dependent upon the field in which we are working (e.g., legal cases, logical arguments, or morals).

The general structure, described in natural language, then proceeds from datum (D) to claim (C) as follows:

  • The datum (D), supported by the warrant (W), which is founded upon the backing (B), leads directly to the claim (C), qualified to the degree (Q), with the caveat that rebuttal (R) is present.

 

 

Such a structure requires the analyst to identify all of the key components of the argument—and explicitly report if any components are missing (e.g., if rebuttals or contradicting evidence do not exist).

The benefit of this scheme is the potential for using automation to aid analysts in the acquisition, examination, and evaluation of natural-language arguments. As an organizing tool, the Toulmin scheme distinguishes data (evidence) from the warrants (the universal premises of logic) and their backing (the basis for those premises).
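As a sketch of how such automation might capture an argument, the following represents Toulmin's six elements as an explicit structure and renders the natural-language template given above. The example content reuses the radar argument from earlier in this section and is otherwise hypothetical.

```python
# Minimal sketch of a Toulmin argument as an explicit data structure that can be
# rendered in natural language. The example content is hypothetical.

from dataclasses import dataclass

@dataclass
class ToulminArgument:
    data: str        # D: the evidence
    warrant: str     # W: the rule permitting inference from D to C
    backing: str     # B: authority for the warrant
    claim: str       # C: the assertion
    qualifier: str   # Q: degree of qualification
    rebuttal: str    # R: conditions that would refute the claim

    def render(self) -> str:
        return (f"The datum ({self.data}), supported by the warrant ({self.warrant}), "
                f"which is founded upon the backing ({self.backing}), leads to the "
                f"claim ({self.claim}), qualified as '{self.qualifier}', with the "
                f"caveat that the rebuttal ({self.rebuttal}) may refute it.")

arg = ToulminArgument(
    data="the radar at A emits at high PRF on frequency F",
    warrant="radars with that signature are fire control radars",
    backing="signature handbook entries",
    claim="the radar at A is a fire control radar",
    qualifier="probably",
    rebuttal="the emitter is a training simulator",
)
print(arg.render())
```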

It must be noted that formal logicians have criticized Toulmin’s scheme for its lack of logical rigor and its inability to address probabilistic arguments. Yet it has contributed greater insight and formality to developing structured natural-language argumentation.

6.6.4 Inferential Networks

Moving beyond Toulmin’s structure, we must consider the approaches to create network structures to represent complex chains of inferential reasoning.

The use of graph theory to describe complex arguments allows the analyst to represent two crucial aspects of an argument:

  • Argument structure. The directed graph represents evidence (E), events, or intermediate hypotheses inferred by the evidence (i), and the ultimate, or final, hypotheses (H) as graph nodes. The graph is directed because the lines connecting nodes include a single arrow indicating the single direction of inference. The lines move from a source element of evidence (E) through a series of inferences (i1, i2, i3, … in) toward a terminal hypothesis (H). The graph is acyclic because the directions of all arrows move from evidence, through intermediate inferences to hypothesis, but not back again: there are no closed-loop cycles.
  • Force of evidence and propagation. In common terms we refer to the force, strength, or weight of evidence to describe the relative degree of contribution of evidence to support an intermediate inference (in) or the ultimate hypothesis (H). The graph structure provides a means of describing supporting and refuting evidence and, if evidence is quantified (e.g., probabilities, fuzzy variables, or other belief functions), a means of propagating the accumulated weight of evidence in an argument.

Like a vector, evidence includes a direction (toward certain hypotheses) and a magnitude (the inferential force). The basic categories of argument can be structured to describe four basic categories of evidence combination (illustrated in Figure 6.9):

Direct. The most basic serial chain of inference moves from evidence (E) that the event E occurred, to the inference (i1) that E did in fact occur. This inference expresses belief in the evidence (i.e., belief in the veracity and objectivity of human testimony). The chain may go on serially to further inferences because of the belief in E.

Consonance. Multiple items of evidence may be synergistic, with one item enhancing the force of another; their joint contribution provides more inferential force than their individual contributions. Two items of evidence may provide collaborative consonance; the figure illustrates the case where ancillary evidence (E2) is favorable to the credibility of the source of evidence (E1), thereby increasing the force of E1. Evidence may also be convergent when E1 and E2 provide evidence of the occurrence of different events, but those events, together, favor a common subsequent inference. The enhancing contribution of (i1) to (i2) is indicated by the dashed arrow.

Redundant. Multiple items of evidence (E1, E2) that redundantly lead to a common inference (i1) can also diminish the force of each other in two basic ways. Corroborative redundancy occurs when two or more sources supply identical evidence of a common event inference (i1). If one source is perfectly credible, the redundant source does not contribute inferential force; if both have imperfect credibility, one may diminish the force of the other to avoid double counting the force of the redundant evidence. Cumulative redundancy occurs when multiple items of evidence (E1, E2), though inferring different intermediate hypotheses (i1,i2), respectively, lead to a common hypothesis (i3) farther up the reasoning chain. This redundant contribution to (i3), indicated by the dashed arrow, necessarily reduces the contribution of inferential force from E2.

Dissonance. Dissonant evidence may be contradictory when items of evidence E1 and E2 report, mutually exclusively, that the event E did occur and did not occur, respectively. Conflicting evidence, on the other hand, occurs when E1 and E2 report two separate events i1 and i2 (both of which may have occurred, but not jointly), but these events favor mutually exclusive hypotheses at i3.

The graph moves from bottom to top in the following sequence:

  1. Direct evidence at the bottom;
  2. Evidence credibility inferences are the first row above evidence, inferring the veracity, objectivity, and sensitivity of the source of evidence;
  3. Relevance inferences move from credibility-conditioned evidence through a chain of inferences toward final hypothesis;
  4. The final hypothesis is at the top.

Some may wonder why such rigor is employed for such a simple argument. This relatively simple example illustrates the level of inferential detail required to formally model even the simplest of arguments. It also illustrates the real problem faced by the analyst in dealing with the nuances of redundant and conflicting evidence. Most significantly, the example illustrates the degree of care required to accurately represent arguments to permit machine-automated reasoning about all-source analytic problems.

We can see how this simple model demands the explicit representation of often-hidden assumptions, every item of evidence, the entire sequence of inferences, and the structure of relationships that leads to our conclusion that H1 is true.

Inferential networks provide a logical structure upon which quantified calculations may be performed to compute values of inferential force of evidence and the combined contribution of all evidence toward the final hypothesis.
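A minimal sketch of such a computation follows: the network is a small directed acyclic graph, and the propagation rule (multiplying link strengths along each chain and capping the accumulated sum) is a deliberate simplification of the probabilistic and belief-function methods mentioned above. The structure and weights are hypothetical.

```python
# Minimal sketch of an inferential network as a directed acyclic graph with a
# deliberately simple force-propagation rule. Structure and strengths are invented.

edges = {                      # child: [(parent, link_strength), ...]
    "i1": [("E1", 0.9)],       # belief that the event reported by E1 occurred
    "i2": [("E2", 0.7)],
    "i3": [("i1", 0.8), ("i2", 0.6)],
    "H":  [("i3", 0.9)],
}

def force(node):
    """Accumulated inferential force reaching a node from the evidence below it."""
    if node.startswith("E"):
        return 1.0                                   # evidence items are the sources
    return min(1.0, sum(w * force(p) for p, w in edges[node]))

print(round(force("H"), 3))    # combined contribution of E1 and E2 toward H
```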

6.7 Evaluating Competing Hypotheses

Heuer’s research indicated that the single most important technique to overcome cognitive shortcomings is to apply a systematic analytic process that allows objective comparison of alternative hypotheses.

“The simultaneous evaluation of multiple, competing hypotheses entails far greater cognitive strain than examining a single, most-likely hypothesis”

Inferential networks are useful at the detail level, where evidence is rich; the alternative competing hypotheses (ACH) approach is useful at higher levels of abstraction and where evidence is sparse. Networks are valuable for automated computation; ACH is valuable for collaborative analytic reasoning, presentation, and explanation. The ACH approach provides a methodology for the concurrent competition of multiple explanations, rather than a focus on the currently most plausible.

The structured ACH approach described by Heuer uses a matrix to organize and describe the relationship between evidence and alternative hypotheses. The sequence of the analysis-synthesis process (Figure 6.11) includes:

  1. Hypothesis synthesis. A multidisciplinary team of analysts creates a set of feasible hypotheses, derived from imaginative consideration of all possibilities before constructing a complete set that merits detailed consideration.
  2. Evidence analysis. Available data is reviewed to locate relevant evidence and inferences that can be assigned to support or refute the hypotheses. Explicitly identify the assumptions regarding evidence and the arguments of inference. Following the processes described in the last chapter, list the evidence-argument pairs (or chains of inference) and identify, for each, the intrinsic value of its contribution and the potential for being subject to denial or deception (D&D).
  3. Matrix synthesis. Construct an ACH matrix that relates evidence- inference to the hypotheses defined in step 1.
  4. Matrix analysis. Assess the diagnosticity (the significance or diagnostic value of the contribution of each component of evidence and related inferences) of each evidence-inference component to each hypothesis. This process proceeds for each item of evidence-inference across the rows, considering how each item may contribute to each hypothesis. An entry may be supporting (consistent with), refuting (inconsistent with), or irrelevant (not applicable) to a hypothesis; a contribution notation (e.g., +, –, or N/A, respectively) is marked within the cell. Where possible, annotate the likelihood (or probability) that this evidence would be observed if the hypothesis is true. Note that the diagnostic significance of an item of evidence is reduced as it is consistent with multiple hypotheses; it has no diagnostic contribution when it supports, to any degree, all hypotheses.
  5. Matrix synthesis (refinement). Evidence assignments are refined, eliminating evidence and inferences that have no diagnostic value.
  6. Hypotheses analysis. The analyst now proceeds to evaluate the likelihood of each hypothesis, by evaluating entries down the columns. The likelihood of each hypothesis is estimated by the characteristics of supporting and refuting evidence (as described in the last chapter). Inconsistencies and gaps in expected evidence provide a basis for retasking; a small but high-confidence item that refutes the preponderance of expected evidence may be a significant indicator of deception. The analyst also assesses the sensitivity of the likely hypothesis to contributing assumptions, evidence, and the inferences; this sensitivity must be reported with conclusions and the consequences if any of these items are in error. This process may lead to retasking of collectors to acquire more data to support or refute hypotheses and to reduce the sensitivity of a conclusion.
  7. Decision synthesis (judgment). Reporting the analytic judgment requires the description of all of the alternatives (not just the most likely), the assumptions, evidence, and inferential chains. The report must also describe the gaps, inconsistencies, and their consequences on judgments. The analyst must also specify what should be done to provide an update on the situation and what indicators might point to significant changes in current judgments.

 

Notice that the ACH approach deliberately focuses the analyst’s attention on the contribution, significance, and relationships of evidence to hypotheses, rather than on building a case for any one hypothesis. The analytic emphasis is, first, on evidence and inference across the rows, before evaluating hypotheses, down the columns.
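A minimal sketch of the matrix itself, and of steps 4 through 6, is shown below; the evidence items, hypotheses, and cell entries are hypothetical, and the simple counting heuristic stands in for the fuller likelihood judgments described above.

```python
# Minimal sketch of an ACH matrix: rows are evidence-inference items, columns are
# hypotheses, cells hold "+", "-", or "N/A". All entries are hypothetical.

matrix = {
    #  evidence item            H1     H2     H3
    "low_reported_earnings": ["+",   "+",   "+"  ],   # consistent with all hypotheses
    "high_operations_level": ["-",   "N/A", "+"  ],
    "exec_crime_ties":       ["-",   "+",   "+"  ],
}
hypotheses = ["H1", "H2", "H3"]

# Steps 4-5: flag rows with no diagnostic contribution (consistent with every hypothesis).
for item, row in matrix.items():
    if all(cell == "+" for cell in row):
        print(f"{item}: no diagnostic value, candidate for removal")

# Step 6: evaluate hypotheses down the columns by counting inconsistent evidence;
# Heuer's emphasis on disconfirmation favors the hypothesis with the least evidence against it.
against = {h: sum(1 for row in matrix.values() if row[i] == "-") for i, h in enumerate(hypotheses)}
print(against)   # {'H1': 2, 'H2': 0, 'H3': 0}
```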

The stages of the structured analysis-synthesis methodology (Figure 6.12) are summarized in the following list:

  • Organize. A data mining tool (described in Chapter 8, Section 8.2.2) automatically clusters related data sets by identifying linkages (relationships) across the different data types. These linked clusters are displayed using link-visualization tools that allow the analyst to consider the meaningfulness of data links and discover potentially relevant relationships in the real world.
  • Conceptualize. The linked data is translated from the abstract relationship space to diagrams in the temporal and spatial domains to assess real-world implications of the relationships. These temporal and spatial models allow the analyst to conceptualize alternative explanations that will become working hypotheses. Analysis in the time domain considers the implications of sequence, frequency, and causality, while the spatial domain considers the relative location of entities and events.
  • Hypothesize. The analyst synthesizes hypotheses, structuring evidence and inferences into alternative arguments that can be evaluated using the method of alternative competing hypotheses. In the course of this process, the analyst may return to explore the database and linkage diagrams further to support or refute the working hypotheses.

 

6.8 Countering Denial and Deception

Because the targets of intelligence are usually high-value subjects (e.g., intentions, plans, personnel, weapons or products, facilities, or processes), they are generally protected by some level of secrecy to prevent observation. The means of providing this secrecy generally includes two components:

  1. Denial. Information about the existence, characteristics, or state of a target is denied to the observer by methods of concealment. Camouflage of military vehicles, emission control (EMCON), operational security (OPSEC), and encryption of e-mail messages are common examples of denial, also referred to as dissimulation (hiding the real).
  2. Deception. Deception is the insertion of false information, or simulation (showing the false), with the intent to distort the perception of the observer. The deception can include misdirection (m-type) deception, which reduces ambiguity and directs the observer to a simulation—away from the truth—or ambiguity (a-type) deception, which simulates effects to increase the observer’s ambiguity or uncertainty about the truth.

D&D methods are used independently or in concert to distract or disrupt the intelligence analyst, introducing distortions in the collection channels, ambiguity in the analytic process, errors in the resulting intelligence product, and misjudgment in decisions based on the product. Ultimately, this will lead to distrust of the intelligence product by the decision maker or consumer. Strategic D&D poses an increasing threat to the analyst, as an increasing number of channels for D&D are available to deceivers. Six distinct categories of strategic D&D operations have different target audiences, means of implementation, and objectives.

Propaganda or psychological operations (PSYOP) target a general population using several approaches. White propaganda openly acknowledges the source of the information; gray propaganda uses undeclared sources. Black propaganda purports to originate from a source other than its actual sponsor, protecting the true source (e.g., clandestine radio and Internet broadcasts, independent organizations, or agents of influence). Coordinated white, gray, and black propaganda efforts were strategically conducted by the Soviet Union throughout the Cold War as active measures of disinformation.

Leadership deception targets leadership or intelligence consumers, attempting to bypass the intelligence process by appealing directly to the intelligence consumer via other channels. Commercial news channels, untrustworthy diplomatic channels, suborned media, and personal relationships can be exploited to deliver deception messages to leadership (before intelligence can offer D&D cautions) in an effort to establish mindsets in decision makers.

Intelligence deception specifically targets intelligence collectors (technical sensors, communications interceptors, and humans) and subsequently analysts by combining denial of the target data and by introducing false data to disrupt, distract, or deceive the collection or analysis processes (or both processes). The objective is to direct the attention of the sensor or the analyst away from a correct knowledge of a specific target.

Denial operations by means of OPSEC seek to deny access to true intentions and capabilities by minimizing the signatures of entities and activities.

Two primary categories of countermeasures for intelligence deception must be orchestrated to counter either the simple deception of a parlor magician or the complex intelligence deception program of a rogue nation-state. Both collection and analysis measures are required to provide the careful observation and critical thinking necessary to avoid deception. Improvements in collection can provide broader and more accurate coverage, even limited penetration of some covers.

The problem of mitigating intelligence surprise, therefore, must be addressed by considering both large numbers of models or hypotheses (analysis) and large sets of data (collection, storage, and analysis).

In his classic treatise, Stratagem, Barton Whaley exhaustively studied over 100 historical D&D efforts and concluded, “Indeed, this is the general finding of my study—that is, the deceiver is almost always successful regardless of the sophistication of his victim in the same art. On the face of it, this seems an intolerable conclusion, one offending common sense. Yet it is the irrefutable conclusion of historical evidence.”

 

The components of a rigorous counter D&D methodology, then, include the estimate of the adversary’s D&D plan as an intelligence subject (target) and the analysis of specific D&D hypotheses as alternatives. Incorporating this process within the ACH process described earlier amounts to assuring that reasonable and feasible D&D hypotheses (for which there may be no evidence to induce a hypothesis) are explicitly considered as alternatives.

Countering D&D therefore requires two active searches for evidence to support, refute, or refine the D&D hypotheses [44]:

  1. Reconstructive inference. This deductive process seeks to detect the presence of spurious signals (Harris calls these sprignals) that are indicators of D&D—the faint evidence predicted by conjectured D&D plans. Such sprignals can be strong evidence confirming hypothesis A (the simulation), weak contradictory evidence of hypothesis C (leakage from the adversary’s dissimulation effort), or missing evidence that should be present if hypothesis A were true.
  2. Incongruity testing. This process searches for inconsistencies in the data and inductively generates alternative explanations that attribute the incongruities to D&D (i.e., D&D explains the incongruity of evidence for more than one reality in simultaneous existence).

These processes should be a part of any rigorous alternative hypothesis process, developing evidence for potential D&D hypotheses while refining the estimate of the adversaries’ D&D intents, plans, and capabilities. The processes also focus attention on special collection tasking to support, refute, or refine current D&D hypotheses being entertained.

6.9 Summary

Central to the intelligence cycle, analysis-synthesis requires the integration of human skills and automation to provide description, explanation, and prediction with explicit and quantified judgments that include alternatives, missing evidence, and dissenting views carefully explained. The challenge of discovering the hidden, forecasting the future, and warning of the unexpected cannot be performed with infallibility, yet expectations remain high for the analytic community.

The practical implementation of collaborative analysis-synthesis requires a range of tools to coordinate the process within the larger intelligence cycle, augment the analytic team with reasoning and sensemaking support, overcome human cognitive shortcomings, and counter adversarial D&D.

 

7

Knowledge Internalization and Externalization

The process of conducting knowledge transactions between humans and computing machines occurs at the intersection between tacit and explicit knowledge, between human reasoning and sensemaking, and the explicit computation of automation. The processes of externalization (tacit-to-explicit transactions) and internalization (explicit-to-tacit transactions) of knowledge, however, are not just interfaces between humans and machines; more properly, the intersection is between human thought, symbolic representations of thought, and the observed world.

7.1 Externalization and Internalization in the Intelligence Workflow

The knowledge-creating spiral described in Chapter 3 introduced the four phases of knowledge creation.

Externalization

Following social interactions with collaborating analysts, an analyst begins to explicitly frame the problem. The process includes the decomposition of the intelligence problem into component parts (as described in Section 2.2) and explicit articulation of essential elements of information required to solve the problem. The tacit-to-explicit transfer includes the explicit listing of these essential elements of information needed, candidate sources of data, the creation of searches for relevant SMEs, and the initiation of queries for relevant knowledge within current holdings and collected all-source data. The primary tools to interact with all-source holdings are query and retrieval tools that search and retrieve information for assessment of relevance by the analyst.

Combination

This explicit-explicit transfer process correlates and combines the collected data in two ways:

  1. Interactive analytic tools. The analyst uses a wide variety of analytic tools to compare and combine data elements to identify relationships and marshal evidence against hypotheses.
  2. Automated data fusion and mining services. Automated data combination services also process high-volume data to bring detections of known patterns and discoveries of “interesting” patterns to the attention of the analyst.

Internalization

The analyst integrates the results of combination in two domains. Externally, hypotheses (explicit models and simulations) and decision models (like the alternative competing hypotheses decision model introduced in the last chapter) are formed to explicitly structure the rationale between hypotheses; internally, the analyst develops tacit experience with the structured evidence, hypotheses, and decision alternatives.

Services in the data tier capture incoming data from processing pipelines (e.g., imagery and signals producers), reporting sources (news services, intelligence reporting sources), and open Internet sources being monitored. Content appropriate for immediate processing and production, such as news alerts, indications and warning events, and critical change data, is routed to operational storage for immediate processing. All data are indexed, transformed, and loaded into the long-term data warehouse or into specialized data stores (e.g., imagery, video, or technical databases); a minimal sketch of this routing follows the list of service categories below. The intelligence services tier includes six basic service categories:

  1. Operational processing. Information filtered for near-real-time criticality is processed to extract and tag content, correlate and combine it with related content, and provide updates to operational watch officers. This path applies the automated processes of data fusion and data mining to provide near-real-time indicators, tracks, metrics, and situation summaries.
  2. Indexing, query, and retrieval. Analysts use these services to access the cumulating holdings by both automated subscriptions for topics of interest to be pushed to the user upon receipt and interactive query and retrieval of holdings.
  3. Cognitive (analytic) services. The analysis-synthesis and decision- making processes described in Chapters 5 and 6 are supported by cognitive services (thinking-support tools).
  4. Collaboration services. These services, described in Chapter 4, allow synchronous and asynchronous collaboration between analytic team members.
  5. Digital production services. Analyst-generated and automatically created dynamic products are produced and distributed to consumers based on their specified preferences.
  6. Workflow management. The workflow is managed across all tiers to monitor the flow from data to product, to monitor resource utilization, to assess satisfaction of current priority intelligence requirements, and to manage collaborating workgroups.
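As a minimal illustration of the routing described at the start of this subsection, the following Python sketch sends time-critical content to an operational store and loads every item into the long-term warehouse or a specialized store. The record fields, content kinds, and store names are hypothetical and stand in for a real data tier.

    # Hypothetical data-tier routing sketch: critical content is pushed to the
    # operational store; all content is indexed and loaded into long-term storage.

    from dataclasses import dataclass, field

    @dataclass
    class ContentItem:
        source: str
        kind: str            # e.g., "news_alert", "imagery", "cable"
        critical: bool       # flagged for near-real-time processing
        body: str

    @dataclass
    class DataTier:
        operational: list = field(default_factory=list)    # immediate processing queue
        warehouse: list = field(default_factory=list)       # long-term, indexed holdings
        special_stores: dict = field(default_factory=dict)  # e.g., imagery, video

        def ingest(self, item: ContentItem):
            if item.critical:
                self.operational.append(item)                # route for watch-officer updates
            # every item is also indexed and loaded for later retrieval
            if item.kind in ("imagery", "video"):
                self.special_stores.setdefault(item.kind, []).append(item)
            else:
                self.warehouse.append(item)

    tier = DataTier()
    tier.ingest(ContentItem("newswire", "news_alert", critical=True, body="..."))
    tier.ingest(ContentItem("sensor-A", "imagery", critical=False, body="..."))
    print(len(tier.operational), len(tier.warehouse), len(tier.special_stores["imagery"]))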

7.2 Storage, Query, and Retrieval Services

At the center of the enterprise is the knowledge base, which stores explicit knowledge and provides the means to access that knowledge to create new knowledge.

7.2.1 Data Storage

Intelligence organizations receive a continuous stream of data from their own tasked technical sensors and human sources, as well as from tasked collections of data from open sources. One example might be Web spiders that are tasked to monitor Internet sites for new content (e.g., foreign news services), then to collect, analyze, and index the data for storage. The storage issues posed by the continual collection of high-volume data are numerous:

  • Diversity. All-source intelligence systems require large numbers of independent data stores for imagery, text, video, geospatial, and special technical data types. These data types are served by an equally high number of specialized applications (e.g., image and geospatial analysis and signal analysis).
  • Legacy. Storage system designers are confronted with the integration of existing (legacy) and new storage systems; this requires the integration of diverse logical and physical data types.
  • Federated retrieval and analysis. The analyst needs retrieval, application, and analysis capabilities that span the entire storage system.

7.2.2 Information Retrieval

Information retrieval (IR) is formally defined as “… [the] actions, methods and procedures for recovering stored data to provide information on a given subject” [2]. Two approaches to query and retrieve stored data or text are required in most intelligence applications:

  1. Data query and retrieval is performed on structured data stored in relational database applications. Imagery, signals, and MASINT data are generally structured and stored in formats that employ structured query language (SQL) and SQL extensions for a wide variety of databases (e.g., Access, IBM DB2 and Informix, Microsoft SQL Server, Oracle, and Sybase). SQL allows the user to retrieve data by context (e.g., by location in data tables, such as date of occurrence) or by content (e.g., retrieve all records with a defined set of values).
  2. Text query and retrieval is performed on both structured and unstructured text in multiple languages by a variety of natural language search engines to locate text containing specific words, phrases, or general concepts within a specified context.

Data query methods are employed within the technical data processing pipelines (IMINT, SIGINT, and MASINT). The results of these analyses are then described by analysts in structured or unstructured text in an analytic database for subsequent retrieval by text query methods.
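The following sketch illustrates retrieval by context and by content on structured holdings, using SQLite as a stand-in for the relational products listed above; the table, fields, and records are hypothetical and purely illustrative.

    # Sketch of data query and retrieval on structured holdings.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("""CREATE TABLE sigint_reports (
        report_id TEXT, collected TEXT, emitter_type TEXT, frequency_mhz REAL)""")
    con.executemany(
        "INSERT INTO sigint_reports VALUES (?, ?, ?, ?)",
        [("R1", "2003-04-01", "air-search radar", 1250.0),
         ("R2", "2003-04-02", "comms", 243.0),
         ("R3", "2003-04-05", "air-search radar", 1310.0)])

    # Retrieval by context: records within a date range.
    by_context = con.execute(
        "SELECT report_id FROM sigint_reports WHERE collected BETWEEN ? AND ?",
        ("2003-04-01", "2003-04-03")).fetchall()

    # Retrieval by content: records with a defined set of attribute values.
    by_content = con.execute(
        "SELECT report_id FROM sigint_reports WHERE emitter_type = ? AND frequency_mhz > ?",
        ("air-search radar", 1200.0)).fetchall()

    print(by_context, by_content)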

Moldovan and Harabagiu have defined a five-level taxonomy of Q&A systems (Table 7.1) that range from the common keyword search engine that searches for relevant content (class 1) to reasoning systems that solve complex natural language problems (class 5) [3]. Each level requires increasing scope of knowledge, depth of linguistic understanding, and sophistication of reasoning to translate relevant knowledge to an answer or solution.

 

The first two levels of current search capabilities locate and return relevant content based on keywords (content) or the relationships between clusters of words in the text (concept).

While class 1 capabilities only match and return content that matches the query, class 2 capabilities integrate the relevant data into a simple response to the question.

Class 3 capabilities require the retrieval of relevant knowledge and reasoning about that knowledge to deduce answers to queries, even when the specific answer is not explicitly stated in the knowledge base. This capability requires the ability to both reason from general knowledge to specific answers and provide rationale for those answers to the user.

Class 4 and 5 capabilities represent advanced capabilities, which require robust knowledge bases that contain sophisticated knowledge representation (assertions and axioms) and reasoning (mathematical calculation, logical inference, and temporal reasoning).

7.3 Cognitive (Analytic Tool) Services

Cognitive services support the analyst in the process of interactively analyzing data, synthesizing hypotheses, and making decisions (choosing among alternatives). These interactive services support the analysis-synthesis activities described in Chapters 5 and 6. Alternatively called thinking tools, analytics, knowledge discovery, or analytic tools, these services enable the human to transform and view data, create and model hypotheses, and compare alternative hypotheses and consequences of decisions.

  • Exploration tools allow the analyst to interact with raw or processed multi- media (text, numerical data, imagery, video, or audio) to locate and organize content relevant to an intelligence problem. These tools provide the ability to search and navigate large volumes of source data; they also provide automated taxonomies of clustered data and summaries of individual documents. The information retrieval functions described in the last subsection are within this category. The product of exploration is generally a relevant set of data/text organized and metadata tagged for subsequent analysis. The analyst may drill down to detail from the lists and summaries to view the full content of all items identified as relevant.
  • Reasoning tools support the analyst in the process of correlating, comparing, and combining data across all of the relevant sources. These tools support a wide variety of specific intelligence target analyses:
  • Temporal analysis. This is the creation of timelines of events, dynamic relationships, event sequences, and temporal transactions (e.g., electronic, financial, or communication).
  • Link analysis. This involves automated exploration of relationships among large numbers of different types of objects (entities and events).
  • Spatial analysis. This is the registration and layering of 3D data sets and creation of 3D static and dynamic models from all-source evidence. These capabilities are often met by commercial geospatial information system and computer-aided design (CAD) software.
  • Functional analysis. This is the analysis of processes and expected observables (e.g., manufacturing, business, and military operations, social networks and organizational analysis, and traffic analysis).

These tools aid the analyst in five key analytic tasks:

  1. Correlation: detection and structuring of relationships or linkages between different entities or events in time, space, function, or interaction; association of different reports or content related to a common entity or event;
  2. Combination: logical, functional, or mathematical joining of related evidence to synthesize a structured argument, process, or quantitative estimate;
  3. Anomaly detection: detection of differences between observed and expected (or modeled) characteristics of a target (see the sketch after this list);
  4. Change detection: detection of changes in a target over time—the changes may include spectral, spatial, or other phenomenological changes;
  5. Construction: synthesis of a model or simulation of entities or events and their interactions based upon evidence and conjecture.
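As a minimal illustration of the anomaly-detection and change-detection tasks (items 3 and 4 above), the following sketch compares a new observation against both a modeled expectation and a prior observation; the target attributes and values are hypothetical.

    # Minimal sketch of anomaly and change detection over target attributes.
    EXPECTED  = {"vehicles_present": 12, "thermal_level": "low",  "fence_intact": True}
    PRIOR_OBS = {"vehicles_present": 12, "thermal_level": "low",  "fence_intact": True}
    NEW_OBS   = {"vehicles_present": 31, "thermal_level": "high", "fence_intact": True}

    def differences(reference, observation):
        """Return attributes whose observed value differs from the reference."""
        return {k: (reference[k], observation.get(k))
                for k in reference if observation.get(k) != reference[k]}

    anomalies = differences(EXPECTED, NEW_OBS)   # observed vs. modeled expectation
    changes = differences(PRIOR_OBS, NEW_OBS)    # observed vs. prior observation
    print("anomalies:", anomalies)
    print("changes over time:", changes)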

Sensemaking tools support the exploration, evaluation, and refinement of alternative hypotheses and explanations of the data. Argumentation structuring, modeling, and simulation tools in this category allow analysts to be immersed in their hypotheses and share explicit representations with other collaborators. This immersion process allows the analytic team to create shared meaning as they experience the alternative explanations.

Decision support (judgment) tools assist analytic decision making by explicitly estimating and comparing the consequences and relative merits of alternative decisions.

These tools include models and simulations that permit the analyst to create and evaluate alternative COAs and weigh the decision alternatives against objective decision criteria. Decision support systems (DSSs) apply the principles of probability to express uncertainty and decision theory to create and assess attributes of decision alternatives and quantify the relative utility of alternatives. Normative, or decision-analytic, DSSs aid the analyst in structuring the decision problem and in computing the many factors that lead from alternatives to quantifiable attributes and resulting utilities. These tools often relate attributes to utility by influence diagrams and compute utilities (and associated uncertainties) using Bayes networks.
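As a simplified decision-analytic illustration (a flat expected-utility calculation rather than a full influence-diagram or Bayes-network tool), the following sketch ranks hypothetical courses of action by expected utility, given probabilities assigned to competing hypotheses; all numbers are invented.

    # Expected utility of courses of action (COAs) given hypothesis probabilities.
    P_HYPOTHESIS = {"H1": 0.6, "H2": 0.3, "H3": 0.1}

    # Utility of each COA if a given hypothesis turns out to be true (hypothetical).
    UTILITY = {
        "COA-A (act now)":      {"H1": 0.9,  "H2": -0.4, "H3": -0.2},
        "COA-B (collect more)": {"H1": 0.3,  "H2": 0.4,  "H3": 0.3},
        "COA-C (do nothing)":   {"H1": -0.8, "H2": 0.2,  "H3": 0.1},
    }

    def expected_utility(utilities, p_hyp):
        return sum(p_hyp[h] * u for h, u in utilities.items())

    ranked = sorted(UTILITY.items(),
                    key=lambda kv: expected_utility(kv[1], P_HYPOTHESIS),
                    reverse=True)
    for coa, u in ranked:
        print(f"{coa}: EU = {expected_utility(u, P_HYPOTHESIS):+.2f}")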

The tools progressively move from data as the object of analysis (for exploration) to clusters of related information, to hypotheses, and finally on to decisions, or analytic judgments.

Intelligence workflow management software can provide a means to organize the process by providing the following functions:

  • Requirements and progress tracking: maintains list of current intelligence requirements, monitors tasking to meet the requirements, links evidence and hypotheses to those requirements, tracks progress toward meeting requirements, and audits results;
  • Relevant data linking: maintains ontology of subjects relevant to the intelligence requirements and their relationships and maintains a database of all relevant data (evidence);
  • Collaboration directory: automatically locates and updates a directory of relevant subject matter experts as the problem topic develops.

In this example, an intelligence consumer has requested specific intelligence on a drug cartel named “Zehga” to support counter-drug activities in a foreign country. The sequence of one analyst’s use of tools in the example includes:

  1. The process begins with synchronous collaboration with other analysts to discuss the intelligence target (Zehga) and the intelligence requirements to understand the cartel organization structure, operations, and finances. The analyst creates a peer-to-peer collaborative workspace that contains requirements, essential elements of information (EEIs) needed, current intelligence, and a directory of team members before inviting additional counter-drug subject matter experts to the shared space.
  2. The analyst opens a workflow management tool to record requirements, key concepts and keywords, and team members; the analyst will link results to the tool to track progress in delivering finished intelligence. The tool is also used to request special tasking from technical collectors (e.g., wiretaps) and field offices.
  3. Once the problem has been externalized in terms of requirements and EEIs needed, the sources and databases to be searched are selected (e.g., country cables, COMINT, and foreign news feeds and archives). Key concepts and keywords are entered into IR tools; these tools search current holdings and external sources, retrieving relevant multi- media content. The analyst also sets up monitor parameters to continually check certain sources (e.g., field office cables and foreign news sites) for changes or detections of relevant topics; when detected, the analyst will be alerted to the availability of new information.
  4. The IR tools also create a taxonomy of the collected data sets, structuring the catch into five major categories: Zehga organization (personnel), events, finances, locations, and activities. The taxonomy breaks each category into subcategories of clusters of related content. Documents located in open-source foreign news reports are translated into English, and all documents are summarized into 55-word abstracts.
  5. The analyst views the taxonomy and drills down to summaries, then views the full content of the items most critical to the investigation. Selected items (or hyperlinks) are saved to the shared knowledge base as a local repository relevant to the investigation.
  6. The retrieved catch is analyzed with text mining tools that discover and list the multidimensional associations (linkages or relationships) between entities (people, phone numbers, bank account numbers, and addresses) and events (meetings, deliveries, and crimes).
  7. The linked lists are displayed on a link-analysis tool to allow the analyst to manipulate and view the complex web of relationships between people, communications, finances, and the time sequence of activities (a minimal link-analysis sketch follows this list). From these network visuals, the analyst begins discovering the Zehga organizational structure, relationships to other drug cartels and financial institutions, and the timeline of explosive growth of the cartel’s influence.
  8. The analyst internalizes these discoveries by synthesizing a Zehga organization structure and associated financial model, filling in the gaps with conjectures that result in three competing hypotheses: a centralized model, a federated model, and a loose network model. These models are created using a standard financial spreadsheet and a network relationship visualization tool. The process of creating these hypotheses causes the analyst to frequently return to the knowledge base to review retrieved data, to issue refined queries to fill in the gaps, and to further review the results of link analyses. The model synthesis process causes the analyst to internalize impressions of confidence, uncertainty, and ambiguity in the evidence, and the implications of potential missing or negative evidence. Here, the analyst ponders the potential for denial and deception tactics and the expected subtle “sprignals” that might appear in the data.
  9. An ACH matrix is created to compare the accrued evidence and argumentation structures supporting each of the competing models. At any time, this matrix and the associated organizational-financial models summarize the status of the intelligence process; these may be posted on the collaboration space and used to identify progress on the workflow management tool.
  10. The analyst further internalizes the situation by applying a decision support tool to consider the consequences or implications of each model on counter-drug policy courses of action relative to the Zehga cartel.
  11. Once the analyst has reached a level of confidence to make objective analytic judgments about hypotheses, results can be digitally published to the requesting consumers and to the collaborative workgroup to begin socialization—and another cycle to further refine the results. (The next section describes the digital publication process.)
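As a minimal illustration of the link-analysis step (step 7 above), the following sketch builds a small relationship graph with the open-source networkx library (assumed available) and surfaces the most central entities and shared artifacts; all entities and relationships are fictitious, in keeping with the hypothetical Zehga example.

    # Minimal link-analysis sketch for the hypothetical Zehga example.
    import networkx as nx

    g = nx.Graph()
    edges = [
        ("Zehga leader", "Lieutenant A", "directs"),
        ("Zehga leader", "Lieutenant B", "directs"),
        ("Lieutenant A", "Front company 1", "finances"),
        ("Lieutenant B", "Front company 1", "finances"),
        ("Front company 1", "Offshore account 9", "transfers"),
        ("Lieutenant B", "Phone +00-555-0100", "uses"),
        ("Courier C", "Phone +00-555-0100", "uses"),
    ]
    for a, b, rel in edges:
        g.add_edge(a, b, relation=rel)

    # Centrality highlights entities that bridge communications, finances, and people;
    # high scores are candidates for the organizational core of the cartel model.
    for entity, score in sorted(nx.betweenness_centrality(g).items(),
                                key=lambda kv: kv[1], reverse=True)[:3]:
        print(f"{entity}: {score:.2f}")

    # Shared artifacts (e.g., a common phone) expose links between otherwise
    # unconnected people, the kind of relationship the analyst drills into.
    print(list(nx.common_neighbors(g, "Lieutenant B", "Courier C")))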

 

Commercial tool suites such as Wincite’s eWincite, Wisdom Builder’s Wisdombuilder, and Cipher’s Knowledge.Works similarly integrate text-based tools to support competitive intelligence analysis.

Tacit capture and collaborative filtering monitors the activities of all users on the network and uses statistical clustering methods to identify the emergent clusters of interest that indicate communities of common practice. Such filtering could identify and alert these two analysts to other analysts that are converging on a common suspect from other directions (e.g., money laundering and drug trafficking).

7.4 Intelligence Production, Dissemination, and Portals

The externalization-to-internalization workflow results in the production of digital intelligence content suitable for socialization (collaboration) across users and consumers. This production and dissemination of intelligence from KM enterprises has transitioned from static, hardcopy reports to dynamically linked digital softcopy products presented on portals.

Digital production processes employ content technologies that index, structure, and integrate fragmented components of content into deliverable products. In the intelligence context, content includes:

  1. Structured numerical data (imagery, relational database queries) and text [e.g., extensible markup language (XML)-formatted documents] as well as unstructured information (e.g., audio, video, text, and HTML content from external sources);
  2. Internally or externally created information;
  3. Formally created information (e.g., cables, reports, and imagery or signals analyses) as well as informal or ad hoc information (e.g., e-mail, and collaboration exchanges);
  4. Static or active (e.g., dynamic video or even interactive applets) content.

The key to dynamic assembly is the creation and translation of all content to a form that is understood by the KM system. While most intelligence data is transactional and structured (e.g., imagery, signals, MASINT), intelligence and open-source documents are unstructured. While the volume of open-source content available on the Internet and closed-source intelligence content grows exponentially, the content remains largely unstructured.

Content technology provides the capability to transform all-source content to a common structure for dynamic integration and personalized publication. XML offers a method of embedding content descriptions by tagging each component with descriptive information that allows automated assembly and distribution of multimedia content.

Intelligence standards being developed include an intelligence community markup language (ICML) specification for intelligence reporting and metadata standards for security, specifying digital signatures (XML-DSig), security/encryption (XML-Sec), key management (XML-KMS), and information security marking (XML-ISM) [12]. Such tagging makes the content interoperable; it can be reused and automatically integrated in numerous ways:

  • Numerical data may be correlated and combined.
  • Text may be assembled into a complete report (e.g., target abstract, targetpart1, targetpart2, …, related targets, most recent photo, threat summary, assessment).
  • Various formats may be constructed from a single collection of contents to suit unique consumer needs (e.g., portal target summary format, personal digital assistant format, or pilot’s cockpit target folder format).

A document object model (DOM) tree can be created from the integrated result to transform it into a variety of formats (e.g., HTML or PDF) for digital publication.
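As a minimal illustration, the following sketch assembles XML-tagged components into a composite product and renders one HTML style from the element tree; the tag names and content are invented and do not represent an ICML or other intelligence community schema.

    # Sketch of XML-tagged content assembly and transformation to HTML.
    import xml.etree.ElementTree as ET

    # Assemble a composite product from tagged components.
    product = ET.Element("targetReport", id="T-0042")
    ET.SubElement(product, "abstract").text = "Summary of target status."
    ET.SubElement(product, "threatSummary").text = "No change in posture observed."
    ET.SubElement(product, "assessment", confidence="moderate").text = (
        "Activity consistent with routine maintenance.")

    # Walk the element tree and render one output style (HTML).
    def to_html(elem):
        parts = [f"<h2>Target report {elem.get('id')}</h2>"]
        for child in elem:
            parts.append(f"<h3>{child.tag}</h3><p>{child.text}</p>")
        return "\n".join(parts)

    print(to_html(product))
    print(ET.tostring(product, encoding="unicode"))   # the reusable XML form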

The analysis and single-source publishing architecture adopted by the U.S. Navy Command 21 K-Web (Figure 7.7) illustrates a highly automated digital production process for intelligence and command applications [14]. The production workflow in the figure includes the processing, analysis, and dissemination steps of the intelligence cycle:

  1. Content collection and creation (processing and analysis). Both quantitative technical data and unstructured text are received, and content is extracted and tagged for subsequent processing. This process is applied to legacy data (e.g., IMINT and SIGINT reports), structured intelligence message traffic, and unstructured sources (e.g., news reports and intelligence e-mail). Domain experts may support the process by creating metadata in a predefined XML metadata format to append to audio, video, or other nontext sources. Metadata includes source, pedigree, time of collection, and format information. New content created by analysts is entered in standard XML DTD templates.
  2. Content applications. XML-tagged content is entered in the data mart, where data applications recognize, correlate, consolidate, and summarize content across the incoming components. A correlation agent may, for example, correlate all content relative to a new event or entity and pass the content on to a consolidation agent to index the components for subsequent integration into an event or target report. The data (and text) fusion and mining functions described in the next chapter are performed here.
  3. Content management-product creation (production). Product templates dictate the aggregation of content into standard intelligence products: warnings, current intelligence, situation updates, and target status. These composite XML-tagged products are returned to the data mart.
  4. Content publication and distribution. Intelligence products are personalized in terms of both style (presentation formats) and distribution (to users with an interest in the product). Users may explicitly define their areas of interest, or the automated system may monitor user activities (through queries, collaborative discussion topics, or folder names maintained) to implicitly estimate areas of interest and create a user’s personal profile. Presentation agents choose from the style library and user profiles to create distribution lists for content to be delivered via e-mail, pushed to users’ custom portals, or stored in the data mart for subsequent retrieval. The process of content syndication applies an information and content exchange (ICE) standard to allow a single product to be delivered in multiple styles and to provide automatic content update across all users. (A minimal distribution sketch follows this list.)
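As a minimal illustration of the publication and distribution step (step 4 above), the following sketch matches a product's topic tags against hypothetical user interest profiles to build a distribution list; the users, interests, and channels are invented and do not reflect the Command 21 K-Web implementation.

    # Minimal sketch of profile-based content distribution.
    USER_PROFILES = {
        "watch_officer_1": {"interests": {"warnings", "maritime"}, "channel": "portal"},
        "analyst_7":       {"interests": {"counter-drug", "finance"}, "channel": "e-mail"},
        "consumer_3":      {"interests": {"counter-drug"}, "channel": "portal"},
    }

    def distribution_list(product_topics, profiles):
        """Match a product's topic tags against each user's declared interests."""
        return [(user, p["channel"])
                for user, p in profiles.items()
                if p["interests"] & set(product_topics)]

    product = {"title": "Cartel finance update", "topics": ["counter-drug", "finance"]}
    for user, channel in distribution_list(product["topics"], USER_PROFILES):
        print(f"deliver '{product['title']}' to {user} via {channel}")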

The user’s single entry point is a personalized portal (or Web portal) that provides an organized entry into the information available on the intelligence enterprise.

7.5 Human-Machine Information Transactions and Interfaces

In all of the services and tools described in the previous sections, the intelligence analyst interacts with explicitly collected data, applying his or her own tacit knowledge about the domain of interest to create estimates, descriptions, explanations, and predictions based on collected data. This interaction between the analyst and KM systems requires efficient interfaces to conduct the transaction between the analyst and machine.

7.5.1 Information Visualization

Edward Tufte introduced his widely read text Envisioning Information with the prescient observation that, “Even though we navigate daily through a perceptual world of three dimensions and reason occasionally about higher dimensional arena with mathematical ease, the world portrayed on our information displays is caught up in the two-dimensionality of the flatlands of paper and video screen”. Indeed, intelligence organizations are continually seeking technologies that will allow analysts to escape from this flatland.

The essence of visualization is to provide multidimensional information to the analyst in a form that allows immediate understanding by this visual form of thinking.

A wide range of visualization methods are employed in analysis (Table 7.6) to allow the user to:

  • Perceive patterns and rapidly grasp the essence of large complex (multi-dimensional) information spaces, then navigate or rapidly browse through the space to explore its structure and contents;
  • Manipulate the information and visual dimensions to identify clusters of associated data, patterns of linkages and relationships, trends (temporal behavior), and outlying data;
  • Combine the information by registering, mathematically or logically joining (fusing), or overlaying it.

 

7.5.2 Analyst-Agent Interaction

Intelligent software agents tailored to support knowledge workers are being developed to provide autonomous automated support in the information retrieval and exploration tasks introduced throughout this chapter. These collaborative information agents, operating in multiagent networks, provide the potential to amplify the analyst’s exploration of large bodies of data, as they search, organize, structure, and reason about findings before reporting results. Information agents are being developed to perform a wide variety of functions, as an autonomous collaborating community under the direction of a human analyst, including:

  • Personal information agents (PIMs) coordinate an analyst’s searches and organize bookmarks to relevant information; like a team of librarians, the PIMs collect, filter, and recommend relevant materials for the analyst.
  • Brokering agents mediate the flow of information between users and sources (databases, external sources, collection processors); they can also act as sentinels to monitor sources and alert users to changes or the availability of new information (a minimal sentinel sketch follows this list).
  • Planning agents accept requirements and create plans to coordinate agents and task resources to meet user goals.
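The following sketch illustrates the sentinel role of a brokering agent: it polls a source and alerts the analyst only when the content changes. The monitored source is simulated with a short sequence of snapshots, and the polling logic is an illustrative assumption rather than any particular agent framework.

    # Minimal sketch of a brokering agent acting as a sentinel.
    import hashlib

    class SentinelAgent:
        def __init__(self, source_name, fetch):
            self.source_name = source_name
            self.fetch = fetch            # callable returning current source content
            self._last_digest = None

        def poll(self):
            content = self.fetch()
            digest = hashlib.sha256(content.encode("utf-8")).hexdigest()
            if self._last_digest is None:          # first poll establishes the baseline
                self._last_digest = digest
                return None
            if digest != self._last_digest:
                self._last_digest = digest
                return f"ALERT [{self.source_name}]: source content has changed"
            return None

    # Simulated source that changes between polls.
    snapshots = iter(["old bulletin", "old bulletin", "new bulletin posted"])
    agent = SentinelAgent("foreign news site", fetch=lambda: next(snapshots))
    for _ in range(3):
        alert = agent.poll()
        if alert:
            print(alert)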

Agents also offer the promise of a means of interaction with the analyst that emulates face-to-face conversation and will ultimately allow information agents to collaborate as (near) peers with individuals and teams of human analysts. These interactive agents (or avatars) will track the analyst’s (or analytic team’s) activities and needs to conduct dialogue with the analysts—in terms of the semantic concepts familiar to the topic of interest—to contribute the following kinds of functions:

  • Agent conversationalists that carry on dialogue to provide high-bandwidth interactions that include multimodal input from the analyst (e.g., spoken natural language, keyboard entries, and gestures and gaze) and multimodal replies (e.g., text, speech, and graphics). Such conversationalists will increase “discussions” about concepts, relevant data, and possible hypotheses [23].
  • Agent observers that monitor analyst activity, attention, intention, and task progress to converse about suggested alternatives, potentials for denial and deception, or warnings that the analyst’s actions imply cognitive shortcomings (discussed in Chapter 6) may be influencing the analysis process.
  • Agent contributors that will enter into collaborative discussions to interject alternatives, suggestions, or relevant data.

The integration of collaborating information agents and information visualization technologies holds the promise of more efficient means of helping analysts find and focus on relevant information, but these technologies require greater maturity to manage uncertainty, dynamically adapt to the changing analytic context, and understand the analyst’s intentions.

7.6 Summary

The analytic workflow requires a constant interaction between the cognitive and visual-perceptive processes in the analyst’s mind and the explicit representations of knowledge in the intelligence enterprise.

 

8

Explicit Knowledge Capture and Combination

In the last chapter, we introduced analytic tools that allow the intelligence analyst to interactively correlate, compare, and combine numerical data and text to discover clusters and relationships among events and entities within large databases. These interactive combination tools are considered to be goal-driven processes: the analyst is driven by a goal to seek solutions within the database, and the reasoning process is interactive with the analyst and machine in a common reasoning loop. This chapter focuses on the largely automated combination processes that tend to be data driven: as data continuously arrives from intelligence sources, the incoming data drives a largely automated process that continually detects, identifies, and tracks emerging events of interest to the user. These parallel goal-driven and data-driven processes were depicted as complementary combination processes in the last chapter.

In all cases, the combination processes help sources to cross-cue each other, locate and identify target events and entities, detect anomalies and changes, and track dynamic targets.

8.1 Explicit Capture, Representation, and Automated Reasoning

The term combination introduced by Nonaka and Takeuchi in the knowledge-creation spiral is an abstraction to describe the many functions that are performed to create knowledge, such as correlation, association, reasoning, inference, and decision (judgment). This process requires the explicit representation of knowledge; in the intelligence application this includes knowledge about the world (e.g., incoming source information), knowledge of the intelligence domain (e.g., characteristics of specific weapons of mass destruction and their production and deployment processes), and the more general procedural knowledge about reasoning.

 

The DARPA Rapid Knowledge Formation (RKF) project and its predecessor, the High-Performance Knowledge Base project, represent ambitious research aimed at providing a robust explicit knowledge capture, representation, and combination (reasoning) capability targeted toward the intelligence analysis application [1]. The projects focused on developing the tools to create and manage shared, reusable knowledge bases on specific intelligence domains (e.g., biological weapons subjects); the goal is to enable creation of over one million axioms of knowledge per year by collaborating teams of domain experts. Such a knowledge base requires a computational ontology—an explicit specification that defines a shared conceptualization of reality that can be used across all processes.

The challenge is to encode knowledge through the instantiation and assembly of generic knowledge components that can be readily entered and understood by domain experts (appropriate semantics) and provide sufficient coverage to encompass an expert level of understanding of the domain. The knowledge base must have fundamental knowledge of entities (things that are), events (things that happen), states (descriptions of stable event characteristics), and roles (entities in the context of events). It must also describe knowledge of the relationships between them (e.g., cause, object of, part of, purpose of, or result of) and the properties (e.g., color, shape, capability, and speed) of each of these.
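As a frame-style illustration of these generic components (and not the RKF project's actual formalism), the following sketch represents an entity, an event with roles, and a relationship axiom with simple Python data classes; the names and properties are hypothetical.

    # Frame-style sketch of generic knowledge components: entities, events,
    # roles, and relationships with properties.
    from dataclasses import dataclass, field

    @dataclass
    class Entity:
        name: str
        properties: dict = field(default_factory=dict)   # e.g., color, capability

    @dataclass
    class Event:
        name: str
        roles: dict = field(default_factory=dict)         # entities in the context of the event

    @dataclass
    class Relationship:
        kind: str            # e.g., "cause", "part-of", "result-of"
        subject: str
        obj: str

    fermenter = Entity("fermenter-unit", {"capability": "batch culture", "capacity_l": 500})
    acquisition = Event("equipment-acquisition",
                        roles={"agent": "front company", "object": fermenter.name})
    axiom = Relationship("purpose-of", subject=acquisition.name, obj="production process")
    print(fermenter, acquisition, axiom, sep="\n")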

8.2 Automated Combination

Two primary categories of the combination processes can be distinguished, based on their approach to inference; each is essential to intelligence processing and analysis.

The inductive process of data mining discovers previously unrecognized patterns in data (new knowledge about characteristics of an unknown pattern class) by searching for patterns (relationships in data) that are in some sense “interesting.” The discovered candidates are usually presented to human users for analysis and validation before being adopted as general cases [3].

The deductive process, data fusion, detects the presence of previously known patterns in many sources of data (new knowledge about the existence of a known pattern in the data). This is performed by searching for specific pattern templates in sensor data streams or databases to detect entities, events, and complex situations comprised of interconnected entities and events.

The data sets used by these processes for knowledge creation are incomplete and dynamic and contain data contaminated by noise. These factors give both processes the following characteristics:

  • Pattern descriptions. Data mining seeks to induce general pattern descriptions (reference patterns, templates, or matched filters) that characterize the data observed, while data fusion applies those descriptions to detect the presence of patterns in new data.
  • Uncertainty in inferred knowledge. The data and reference patterns are uncertain, leading to uncertain beliefs or knowledge.
  • Dynamic state of inferred knowledge. The process is sequential and inferred knowledge is dynamic, being refined as new data arrives.
  • Use of domain knowledge. Knowledge about the domain (e.g., constraints, context) may be used in addition to collected raw intelligence data.

8.2.1 Data Fusion

Data fusion is an adaptive knowledge creation process in which diverse elements of similar or dissimilar observations (data) are aligned, correlated, and combined into organized and indexed sets (information), which are further assessed to model, understand, and explain (knowledge) the makeup and behavior of a domain under observation.

The data-fusion process seeks to explain an adversary (or uncooperative) intelligence target by abstracting the target and its observable phenomena into a causal or relationship model, then applying all-source observation to detect entities and events to estimate the properties of the model. Consider the levels of representation in the simple target-observer processes in Figure 8.2 [6]. The adversary leadership holds to goals and values that create motives; these motives, combined with beliefs (created by perception of the current situation), lead to intentions. These intentions lead to plans and responses to the current situation; from alternative plans, decisions are made that lead to commands for action. In a hierarchical military, or a networked terrorist organization, these commands flow to activities (communication, logistics, surveillance, and movements). Using the three domains of reality terminology introduced in Chapter 5, the motive-to-decision events occur in the adversary’s cognitive domain with no observable phenomena.

The data-fusion process uses observable evidence from both the symbolic and physical domains to infer the operations, communications, and even the intentions of the adversary.

The emerging concept of effects-based military operations (EBO) requires intelligence products that provide planners with the ability to model the various effects influencing a target that make up a complex system. Planners and operators require intelligence products that integrate models of the adversary physical infrastructure, information networks, and leadership and decision making.

The U.S. DoD JDL has established a formal process model of data fusion that decomposes the process into five basic levels of information-refining processes (based upon the concept of levels of information abstraction) [8]:

  • Level 0: Data (or subobject) refinement. This is the correlation across signals or data (e.g., pixels and pulses) to recognize components of an object and the correlation of those components to recognize an object.
  • Level 1: Object refinement. This is the correlation of all data to refine individual objects within the domain of observation. (The JDL model uses the term object to refer to real-world entities; however, the subject of interest may be a transient event in time as well.)
  • Level 2: Situation refinement. This is the correlation of all objects (information) within the domain to assess the current situation.
  • Level 3: Impact refinement. This is the correlation of the current situation with environmental and other constraints to project the meaning of the situation (knowledge). The meaning of the situation refers to its implications to the user: threat, opportunity, change, or consequence.
  • Level 4: Process refinement. This is the continual adaptation of the fusion process to optimize the delivery of knowledge against a defined mission objective.

 

8.2.1.1 Level 0: Data Refinement

Raw data from sensors may be calibrated, corrected for bias and gain errors, limited (thresholded), and filtered to remove systematic noise sources. Object detection may occur at this point—in individual sensors or across multiple sensors (so-called predetection fusion). The object-detection process forms observation reports that contain data elements such as observation identifier, time of measurement, measurement or decision data, decision, and uncertainty data.

8.2.1.2 Level 1: Object Refinement

Sensor and source reports are first aligned to a common spatial reference (e.g., a geographic coordinate system) and temporal reference (e.g., samples are propagated forward or backward to a common time.) These alignment transformations place the observations in a common time-space coordinate system to allow an association process to determine which observations from different sensors have their source in a common object. The association process uses a quantitative correlation metric to measure the relative similarity between observations. The typical correlation metric, C, takes on the following form:

C = \sum_{i=1}^{n} w_i x_i

where w_i is the weighting coefficient for attribute x_i, and x_i is the ith correlation attribute metric.

The correlation metric may be used to make a hard decision (an association), choosing the most likely pairing of observations, or a deferred decision, assigning more than one hypothetical pairing and deferring a hard decision until more observations arrive; a minimal sketch of this choice follows the list below. Once observations have been associated, two functions are performed on each associated set of measurements for a common object:

  1. Tracking. For dynamic targets (vehicles or aircraft), the current state of the object is correlated with previously known targets to determine if the observation can update an existing model (track). If the newly associated observations are determined to be updates to an existing track, the state estimation model for the track (e.g., a Kalman filter) is updated; otherwise, a new track is initiated.
  2. Identification. All associated observations are used to determine if the object identity can be classified to any one of several levels (e.g., friend/foe, vehicle class, vehicle type or model, or vehicle status or intent).
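As a minimal illustration of the association step, the following sketch evaluates the correlation metric C for two candidate observation-to-track pairings and makes either a hard or a deferred association decision; the attribute similarities, weights, and decision margin are hypothetical.

    # Weighted correlation metric C = sum(w_i * x_i) over attribute similarity metrics.
    WEIGHTS = {"position": 0.5, "velocity": 0.3, "emitter": 0.2}

    def correlation(attribute_metrics, weights=WEIGHTS):
        """C = sum of weighted attribute similarity metrics (each in [0, 1])."""
        return sum(weights[a] * x for a, x in attribute_metrics.items())

    # Similarity of one new observation to two existing tracks (hypothetical values).
    candidates = {
        "track-07": {"position": 0.90, "velocity": 0.8, "emitter": 1.0},
        "track-12": {"position": 0.85, "velocity": 0.8, "emitter": 1.0},
    }
    scores = {t: correlation(m) for t, m in candidates.items()}
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    HARD_MARGIN = 0.1   # required separation for a hard decision
    best, runner_up = ranked[0], ranked[1]
    if best[1] - runner_up[1] >= HARD_MARGIN:
        print(f"hard association: observation -> {best[0]} (C = {best[1]:.2f})")
    else:
        print(f"deferred decision: keep hypotheses {best[0]} and {runner_up[0]}")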

8.2.1.3 Level 2: Situation Refinement

All objects placed in space-time context in an information base are analyzed to detect relationships based on spatial or temporal characteristics. Aggregate sets of objects are detected by their coordinated behavior, dependencies, proximity, common point of origin, or other characteristics using correlation metrics with high-level attributes (e.g., spatial geometries or coordinated behavior). The synoptic understanding of all objects, in their space-time context, provides situation knowledge, or awareness.

8.2.1.4 Level 3: Impact (or Threat) Refinement

Situation knowledge is used to model and analyze feasible future behaviors of objects, groups, and environmental constraints to determine future possible outcomes. These outcomes, when compared with user objectives, provide an assessment of the implications of the current situation. Consider, for example, a simple counter-terrorism intelligence situation that is analyzed in the sequence in Figure 8.4.

8.2.1.5 Level 4: Process Refinement

This process provides feedback control of the collection and processing activities to achieve the intelligence requirements. At the top level, current knowledge (about the situation) is compared to the intelligence requirements required to achieve operational objectives to determine knowledge shortfalls. These shortfalls are parsed, downward, into information, then data needs, which direct the future acquisition of data (sensor management) and the control of internal processes. Processes may be refined, for example, to focus on certain areas of interest, object types, or groups. This forms the feedback loop of the data-fusion process.

8.2.2 Data Mining

Data mining is the process by which large sets of data (or text in the specific case of text mining) are cleansed and transformed into organized and indexed sets (information), which are then analyzed to discover hidden and implicit, but previously undefined, patterns. These patterns are reviewed by domain experts to determine if they reveal new understandings of the general structure and relationships (knowledge) in the data of a domain under observation.

The object of discovery is a pattern, which is defined as a statement in some language, L, that describes relationships in subset Fs of a set of data, F, such that:

  1. The statement holds with some certainty, c;
  2. The statement is simpler (in some sense) than the enumeration of all facts in Fs [13].

This is the inductive generalization process described in Chapter 5. Mined knowledge, then, is formally defined as a pattern that is interesting, according to some user-defined criterion, and certain to a user-defined measure of degree.
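A minimal sketch of this criterion follows, with invented candidate patterns, a user-defined certainty threshold, and a crude length-based proxy for "simpler than enumerating the covered facts."

    # Keep a candidate pattern only if its certainty c meets a threshold and its
    # description is simpler (shorter) than enumerating the facts it covers.
    CANDIDATES = [
        {"statement": "cells A, B, C travel to the same city within 7 days of each call",
         "certainty": 0.82, "facts_covered": 46},
        {"statement": "cell A uses phone X", "certainty": 0.99, "facts_covered": 1},
    ]

    def interesting(pattern, min_certainty=0.7, avg_fact_len=40):
        # simpler than enumerating the covered facts (single facts never qualify)
        simpler = (pattern["facts_covered"] > 1 and
                   len(pattern["statement"]) < pattern["facts_covered"] * avg_fact_len)
        return pattern["certainty"] >= min_certainty and simpler

    for p in CANDIDATES:
        print(p["statement"], "->", "keep" if interesting(p) else "discard")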

In application, the mining process is extended from explanations of limited data sets to more general applications (induction). In this example, a relationship pattern between three terrorist cells may be discovered that includes intercommunication, periodic travel to common cities, and correlated statements posted on the Internet.

Data mining (also called knowledge discovery) is distinguished from data fusion by two key characteristics:

  1. Inference method. Data fusion employs known patterns and deductive reasoning, while data mining searches for hidden patterns using inductive reasoning.
  2. Temporal perspective. The focus of data fusion is retrospective (determining current state based on past data), while data mining is both retrospective and prospective—focused on locating hidden patterns that may reveal predictive knowledge.

Beginning with sensors and sources, the data warehouse is populated with data, and successive functions move the data toward learned knowledge at the top. The sources, queries, and mining processes may be refined, similar to data fusion. The functional stages in the figure are described next.

  • Data warehouse. Data from many sources are collected and indexed in the warehouse, initially in the native format of the source. One of the chief issues facing many mining operations is the reconciliation of diverse databases that have different formats (e.g., field and record sizes and parameter scales), incompatible data definitions, and other differences. The warehouse collection process (flow in) may mediate between these input sources to transform the data before storing in common form [20].
  • Data cleansing. The warehoused data must be inspected and cleansed to identify and correct or remove conflicts, incomplete sets, and incompatibilities common to combined databases. Cleansing may include several categories of checks (two of which are sketched after this list):
  1. Uniformity checks verify the ranges of data, determine if sets exceed limits, and verify that format versions are compatible.
  2. Completeness checks evaluate the internal consistency of data sets to ensure, for example, that aggregate values are consistent with individual data components (e.g., “verify that total sales is equal to sum of all sales regions, and that data for all sales regions is present”).
  3. Conformity checks exhaustively verify that each index and reference exists.
  4. Genealogy checks generate and check audit trails to primitive data to permit analysts to drill down from high-level information.
  • Data selection and transformation. The types of data that will be used for mining are selected on the basis of relevance. For large operations, initial mining may be performed on a small set, then extended to larger sets to check for the validity of abducted patterns. The selected data may then be transformed to organize all data into common dimensions and to add derived dimensions as necessary for analysis.
  • Data mining operations. Mining operations may be performed in a supervised manner in which the analyst presents the operator with a selected set of training data, in which the analyst has manually determined the existence of pattern classes. Alternatively, the operation may proceed without supervision, performing an automated search for patterns. A number of techniques are available (Table 8.4), depending upon the type of data and search objectives (interesting pattern types).
  • Discovery modeling. Prediction or classification models are synthesized to fit the data patterns detected. This is the predictive aspect of mining: modeling the historical data in the database (the past) to provide a model to predict the future. The model attempts to abduct a generalized description that explains discovered patterns of interest and, using statistical inference from larger volumes of data, seeks to induct generally applicable models. Simple extrapolation, time-series trends, complex linked relationships, and causal mathematical models are examples of the models created.
  • Visualization. The analyst uses visualization tools that allow discovery of interesting patterns in the data. The automated mining operations cue the operator to discovered patterns of interest (candidates), and the analyst then visualizes the pattern and verifies if, indeed, it contains new and useful knowledge. OLAP refers to the manual visualization process in which a data manipulation engine allows the analyst to create data “views” from the human perspective and to perform the following categories of functions:
  1. Multidimensional analysis of the data across dimensions, through relationships (e.g., command hierarchies and transaction networks) and in perspectives natural to the analyst (rather than inherent in the data);
  2. Transformation of the viewing dimensions or slicing of the multidimensional array to view a subset of interest;
  3. Drill down into the data from high levels of aggregation, downward into successively deeper levels of information;
  4. Reach through from information levels to the underlying raw data, including reaching beyond the information base, back to raw data by the audit trail generated in genealogy checking;
  5. Modeling of hypothetical explanations of the data, in terms of trend analysis and extrapolation.
  • Refinement feedback. The analyst may refine the process, by adjusting the parameters that control the lower level processes, as well as requesting more or different data on which to focus the mining operations.
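As a minimal illustration of the cleansing checks listed above, the following sketch applies a uniformity check on value ranges and formats and a completeness check that an aggregate matches its components; the field names, limits, and records are hypothetical.

    # Sketch of two cleansing checks: uniformity (ranges/formats) and
    # completeness (aggregate consistency with components).
    RECORDS = [
        {"region": "north", "sales": 120, "date": "2003-06-01"},
        {"region": "south", "sales": 95,  "date": "2003-06-01"},
        {"region": "west",  "sales": -4,  "date": "06/01/2003"},   # bad value and format
    ]
    REPORTED_TOTAL = 220   # aggregate value received with the feed

    def uniformity_errors(records):
        errors = []
        for i, r in enumerate(records):
            if r["sales"] < 0:
                errors.append((i, "sales out of range"))
            if len(r["date"].split("-")) != 3:
                errors.append((i, "date not in ISO format"))
        return errors

    def completeness_error(records, reported_total):
        computed = sum(r["sales"] for r in records)
        return None if computed == reported_total else (
            f"aggregate mismatch: components sum to {computed}, feed reports {reported_total}")

    print(uniformity_errors(RECORDS))
    print(completeness_error(RECORDS, REPORTED_TOTAL))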

 

 

8.2.3 Integrated Data Fusion and Mining

In a practical intelligence application, the full reasoning process integrates the discovery processes of data mining with the detection processes of data fusion. This integration helps the analyst to coordinate learning about new signatures and patterns and apply that new knowledge, in the form of templates, to detect other cases of the situation. A general application of these integrated tools can support the search for nonliteral target signatures and the use of those learned and validated signatures to detect new targets [21]. (Nonliteral target signatures refer to those signatures that extend across many diverse observation domains and are not intuitive or apparent to analysts, but may be discovered only by deeper analysis of multidimensional data.)

The mining component searches the accumulated database of sensor data, with discovery processes focused on relationships that may have relevance to the nonliteral target sets. Discovered models (templates) of target objects or processes are then tested, refined, and verified using the data-fusion process. Finally, the data-fusion process applies the models deductively for knowledge detection in incoming sensor data streams.

8.3 Intelligence Modeling and Simulation

Modeling activities take place in externalization (as explicit models are formed to describe mental models), combination (as evidence is combined and compared with the model), and in internalization (as the analyst ponders the matches, mismatches, and incongruities between evidence and model).

While we have used the general term model to describe any abstract representation, we now distinguish between two implementations made by the modeling and simulation (M&S) community. Models refer to physical, mathematical, or otherwise logical representations of systems, entities, phenomena, or processes, while simulations refer to methods that implement models over time (i.e., a simulation is a time-dynamic model).
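
A minimal illustration of that distinction, using an invented growth relationship: the model is a static functional description, and the simulation is the same model stepped through time.

```python
# Model vs. simulation (hypothetical growth relationship, purely illustrative).
def model(population, growth_rate=0.02):
    """Static relationship between current and next-period population."""
    return population * (1 + growth_rate)

def simulate(initial, steps):
    """The same model executed over time, i.e., a time-dynamic model."""
    state, trajectory = initial, [initial]
    for _ in range(steps):
        state = model(state)
        trajectory.append(state)
    return trajectory

print(simulate(1000.0, 5))
```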

Models and simulations are inherently collaborative; their explicit representations (versus mental models) allow analytic teams to collectively assemble and explore the accumulating knowledge that they represent. They support the analysis-synthesis process in multiple ways:

  • Evidence marshaling. As described in Chapter 5, models and simulations provide the framework in which evidence and inferences are assembled; they provide an audit trail of reasoning.
  • Exploration. Models and simulations also provide a means for analysts to be immersed in the modeled situation, its structure, and dynamics. They are tools for experimentation and exploration that provide the deeper understanding needed to determine necessary confirming or falsifying evidence, to evaluate potential sensing measures, and to examine potential denial and deception effects.
  • Dynamic process tracking. Simulations model the time-dynamic behavior of targets to forecast future behavior, compare with observations, and refine the behavior model over time. Dynamic models provide the potential for estimation, anticipation, forecasting, and even prediction (these words imply increasing accuracy and precision in their estimates of future behavior).
  • Explanation. Finally, the models and simulations provide a tool for presenting alternative hypotheses, final judgments, and rationale.

Chance favors the prepared prototype: models and simulations can and should be media to create and capture surprise and serendipity.

Table 8.5 illustrates independent models and simulations in all three domains; however, these domains can be coupled to create a robust model to explore how an adversary thinks (cognitive domain), transacts (symbolic domain; e.g., finances, command, and intelligence flows), and acts (physical domain).

A recent study of the advanced methods required to support counter-terrorism analysis recommended the creation of scenarios using top-down synthesis (manual creation by domain experts and large-scale simulation) to create synthetic evidence for comparison with real evidence discovered by bottom-up data mining.

8.3.1 M&S for I&W

The challenge of I&W demands predictive analysis, where “the analyst is looking at something entirely new, a discontinuous phenomenon, an outcome that he or she has never seen before. Furthermore, the analyst only sees this new pattern emerge in bits and pieces.”

The tools monitor world events to track the state and time-sequence of state transitions for comparison with indicators of stress. These analytic tools apply three methods to provide indicators to analysts (a code sketch of the first method follows the list):

  1. Structural indicator matching. Previously identified crisis patterns (statistical models) are matched to current conditions to seek indications in background conditions and long-term trends.
  2. Sequential tracking models. Simulations track the dynamics of events to compare temporal behavior with statistical conflict accelerators in current situations that indicate imminent crises.
  3. Complex behavior analysis. Simulations are used to support inductive exploration of the current situation, so the analyst can examine possible future scenarios to locate potential triggering events that may cause instability (events not captured in prior indicator models).
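
A sketch of the first method (structural indicator matching), assuming hypothetical indicator profiles and cosine similarity as the matching measure; both are illustrative choices, not the tools' actual algorithms.

```python
# Structural indicator matching sketch (indicator profiles are invented).
import numpy as np

# Previously identified crisis patterns: indicator profiles from past crises.
crisis_patterns = {
    "ethnic_conflict":   np.array([0.9, 0.7, 0.2, 0.8]),
    "economic_collapse": np.array([0.3, 0.9, 0.9, 0.4]),
}
# Current background conditions for a monitored region (same indicator order).
current = np.array([0.8, 0.6, 0.3, 0.7])

def similarity(pattern, conditions):
    """Cosine similarity between a stored crisis pattern and current conditions."""
    return float(pattern @ conditions /
                 (np.linalg.norm(pattern) * np.linalg.norm(conditions)))

matches = {name: similarity(p, current) for name, p in crisis_patterns.items()}
print(max(matches, key=matches.get), matches)  # best-matching crisis pattern
```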

A general I&W system architecture (Figure 8.7), organized following the JDL data-fusion structure, accepts incoming news feed text reports of current situations and encodes the events into a common format (by human or automated coding). The event data is encoded into time-tagged actions (assault, kidnap, flee, assassinate), proclamations (threaten, appeal, comment) and other pertinent events from relevant actors (governments, NGOs, terror groups). The level 1 fusion process correlates and combines similar reports to produce a single set of current events organized in time series for structural analysis of background conditions and sequential analysis of behavioral trends by groups and interactions between groups. This statistical analysis is an automatic target-recognition process, comparing current state and trends with known clusters of unstable behaviors. The level 2 process correlates and aggregates individual events into larger patterns of behavior (situations). A dynamic simulation tracks the current situation (and is refined by the tracking loop shown) to enable the analyst to explore future excursions from the present condition. By analysis of the dynamics of the situation, the analyst can explore a wide range of feasible futures, including those that may reveal surprising behavior that is not intuitive—increasing the analyst’s awareness of unstable regions of behavior or the potential of subtle but potent triggering events.
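
A small sketch of the ingest-and-aggregate path just described, with an assumed coded-event format; the records, groups, and event categories are invented for illustration.

```python
# Hypothetical coded-event ingest and level 1 aggregation sketch.
from collections import Counter

coded_events = [  # (week, actor, event_type) -- illustrative records only
    (1, "group_x", "threaten"), (1, "group_x", "assault"),
    (2, "group_x", "assault"),  (2, "group_y", "appeal"),
    (3, "group_x", "assault"),  (3, "group_x", "assault"),  # duplicate reports combine
    (3, "group_x", "kidnap"),
]

# Level 1: correlate and combine similar reports into per-week event counts.
series = Counter((week, actor, event) for week, actor, event in coded_events)

# Behavioral trend: weekly violent-event counts for one group, as a time series.
violent = {"assault", "kidnap", "assassinate"}
trend = [sum(c for (w, a, e), c in series.items()
             if w == week and a == "group_x" and e in violent)
         for week in (1, 2, 3)]
print(trend)  # a rising series would be compared against conflict accelerators
```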

8.3.2 Modeling Complex Situations and Human Behavior

The complex behavior noted in the prior example may result from random events, human free will, or the nonlinearity introduced by the interactions of many actors. The most advanced applications of M&S are those that seek to model environments (introduced in Section 4.4.2) that exhibit complex behaviors—emergent behaviors (surprises) that are not predictable from the individual contributing actors within the system. Complexity is the property of a system that prohibits the description of its overall behavior even when all of the components are described completely. Complex environments include social behaviors of significant interest to intelligence organizations: populations of nation states, terrorist organizations, military commands, and foreign leaders [32]. Perhaps the grand challenge of intelligence analysis is to understand an adversary’s cognitive behavior to provide both warning and insight into the effects of alternative preemptive actions that may avert threats.

Nonlinear mathematical solutions are intractable for most practical problems, and the research community has applied dynamic systems modeling and agent-based simulation (ABS) to represent systems that exhibit complex behavior [34]. ABS research is being applied to the simulation of a wide range of organizations to assess intent, decision making and planning (cognitive), command and finances (symbolic), and actions (physical). The applications of these simulations include national policies [35], military C2 [36], and terrorist organizations [37].
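
A toy agent-based simulation in the spirit of the ABS work cited above: each agent follows a trivial local imitation rule, yet the aggregate level of unrest can cascade in ways that are not obvious from any single rule. All rules and numbers are invented.

```python
# Toy agent-based simulation (rules and numbers are invented for illustration).
import random

random.seed(1)
N, STEPS = 100, 20
agitated = [random.random() < 0.05 for _ in range(N)]  # sparse initial unrest

for _ in range(STEPS):
    nxt = agitated[:]
    for i in range(N):
        # Local rule: an agent may become agitated if a neighbor already is.
        left, right = agitated[i - 1], agitated[(i + 1) % N]
        if (left or right) and not nxt[i]:
            nxt[i] = random.random() < 0.3
    agitated = nxt
    print(sum(agitated), end=" ")  # aggregate unrest can cascade nonlinearly
```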

9
The Intelligence Enterprise Architecture

The processing, analysis, and production components of intelligence operations are implemented by enterprises—complex networks of people and their business processes, integrated information and communication systems and technology components organized around the intelligence mission. As we have emphasized throughout this text, an effective intelligence enterprise requires more than just these components; the people require a collaborative culture, integrated electronic networks require content and contextual compatibility, and the implementing components must constantly adapt to technology trends to remain competitive. The effective implementation of KM in such enterprises requires a comprehensive requirements analysis and enterprise design (synthesis) approach to translate high-level mission statements into detailed business processes, networked systems, and technology implementations.

9.1 Intelligence Enterprise Operations

In the early 1990s the community implemented Intelink, a communitywide network to allow the exchange of intelligence between agencies that maintained internal compartmented networks [2]. The DCI vision for “a unified IC optimized to provide a decisive information advantage…” in the mid-1990s led the IC CIO to establish an IC Operational Network (ICON) office to perform enterprise architecture analysis and engineering to define the system and communication architectures in order to integrate the many agency networks within the IC [3]. This architecture is required to provide the ability to collaborate securely and synchronously from the users’ desktops across the IC and with customers (e.g., federal government intelligence consumers), partners (component agencies of the IC), and suppliers (intelligence data providers within and external to the IC).

The undertaking illustrates the challenge of implementing a mammoth intelligence enterprise that comprises four components:

  1. Policies. These are the strategic vision and derivative policies that explicitly define objectives and the approaches to achieve the vision.
  2. Operational processes. These are collaborative and operationally secure processes to enable people to share knowledge and assets securely and freely across large, diverse, and in some cases necessarily compartmented organizations. This requires processes for dynamic modification of security controls, public key infrastructure, standardized intelligence product markup, the availability of common services, and enterprisewide search, collaboration, and application sharing.
  3. System (network). This is an IC system for information sharing (ICSIS) that includes an agreed set of databases and applications hosted within shared virtual spaces within agencies and across the IC. The system architecture (Figure 9.1) defines three virtual collaboration spaces, one internal to each organization and a second that is accessible across the community (an intranet and extranet, respectively). The internal space provides collaboration at the Sensitive Compartmented Information (SCI) level within the organization; owners tightly control their data holdings (that are organizationally sensitive). The community space enables IC-wide collaboration at the SCI level; resource protection and control are provided by a central security policy. A separate collateral community space provides a space for data shared with DoD and other federal agencies.
  4. Technology. The enterprise requires the integration of large installed bases of legacy components and systems with new technologies. The integration requires definition of standards (e.g., metadata, markup languages, protocols, and data schemas) and the plans for incremental technology transitions.

9.2 Describing the Enterprise Architecture

Two major approaches to architecture design that are immediately applicable to the intelligence enterprise have been applied by the U.S. DoD and IC for intelligence and related applications. Both approaches provide an organizing methodology to assure that all aspects of the enterprise are explicitly defined, analyzed, and described to assure compatibility, completeness, and traceability back to the mission objectives. The approaches provide guidance to develop a comprehensive abstract model to describe the enterprise; the model may be understood from different views in which the model is observed from a particular perspective (i.e., the perspectives of the user or developer) and described by specific products that make up the viewpoint.

The first methodology is the Zachman Architecture Framework™, developed by John Zachman in the late 1980s while at IBM. Zachman pioneered a concept of multiple perspectives (views) and descriptions (viewpoints) to completely define the information architecture [6]. This framework is organized as a matrix of 30 perspective products, defined by the cross product of two dimensions:

  1. Rows of the matrix represent the viewpoints of architecture stakeholders: the owner, planner, designer, builder (e.g., prime contractor), and subcontractor. The rows progress from higher level (greater degree of abstraction) descriptions by the owner toward lower level (details of implementation) by the subcontractor.
  2. Columns represent the descriptive aspects of the system across the dimensions of data handled, functions performed, network, people involved, time sequence of operations, and motivation of each stakeholder.

Each cell in the framework matrix represents a descriptive product required to describe an aspect of the architecture.

 

This framework identifies a single descriptive product per view, but permits a wide range of specific descriptive approaches to implement the products in each cell of the framework:

  • Mission needs statements, value propositions, balanced scorecard, and organizational model methods are suitable to structure and define the owner’s high-level view.
  • Business process modeling, the object-oriented Unified Modeling Language (UML), or functional decomposition using Integrated Definition Models (IDEF) explicitly describe entities and attributes, data, functions, and relationships. These methods also support enterprise functional simulation at the owner and designer level to permit evaluation of expected enterprise performance.
  • Detailed functional standards (e.g., IEEE and DoD standards specification guidelines) provide guidance to structure detailed builder- and subcontractor-level descriptions that define component designs.

The second descriptive methodology is the U.S. DoD Architecture Framework (formerly the C4ISR Architecture Framework), which defines three interrelated perspectives or architectural views, each with a number of defined products [7]. The three interrelated views (Figure 9.2) are as follows:

    1. Operational architecture is a description (often graphical) of the operational elements, intelligence business processes, assigned tasks, workflows, and information flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and what tasks are supported by these information exchanges.
    2. Systems architecture is a description, including graphics, of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users and specifies system and component performance parameters. It is constructed to satisfy operational architecture requirements per standards defined in the technical architecture. This architecture view shows how multiple systems within a subject area link and interoperate and may describe the internal construction or operations of particular systems within the architecture.
    3. Technical architecture is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. The technical architecture identifies the services, interfaces, standards, and their relationships. It provides the technical guidelines for implementation of systems upon which engineering specifications are based, common building blocks are built, and product lines are developed.

 

 

Both approaches provide a framework to decompose the enterprise into a comprehensive set of perspectives that must be defined before building; following either approach introduces the necessary discipline to structure the enterprise architecture design process.

The emerging foundation for enterprise architecting using framework models is distinguished from the traditional systems engineering approach, which focuses on optimization, completeness, and a build-from-scratch originality [11]. Enterprise (or system) architecting recognizes that most enterprises will be constructed from a combination of existing and new integrating components:

  • Policies, based on the enterprise strategic vision;
  • People, including current cultures that must change to adopt new and changing value propositions and business processes;
  • Systems, including legacy data structures and processes that must work with new structures and processes until retirement;
  • IT, including legacy hardware and software that must be integrated with new technology and scheduled for planned retirement.

The architecture framework models and system architecting methodologies are developed in greater detail in a number of foundational papers and texts [12].

9.3 Architecture Design Case Study: A Small Competitive Intelligence Enterprise

The enterprise architecture design principles can be best illustrated by developing the architecture description for a fictional small-scale intelligence enterprise: a typical CI unit for a Fortune 500 business (referred to in what follows as FaxTech). This simple example defines the introduction of a new CI unit, deliberately avoiding the challenges of introducing significant culture change across an existing organization and integrating numerous legacy systems.

The CI unit provides legal and ethical development of descriptive and inferential intelligence products for top management to assess the state of competitors’ businesses and estimate their future actions within the current marketplace. The unit is not the traditional marketing function (which addresses the marketplace of customers) but focuses specifically on the competitive environment, especially competitors’ operations, their business options, and likely decision-making actions.

The enterprise architect recognizes the assignment as a corporate KM project that should be evaluated against O’Dell and Grayson’s four-question checklist for KM projects [14]:

  1. Select projects to advance your business performance. This project will enhance competitiveness and allow FaxTech to position and adapt its product and services (e.g., reduce cycle time and enhance product development to remain competitive).
  2. Select projects that have a high success probability. This project is small, does not confront integration with legacy systems, and has a high probability of technical success. The contribution of KM can be articulated (to deliver competitive intelligence for executive decision making), there is a champion on the board (the CIO), and the business case (to deliver decisive competitor knowledge) is strong. The small CI unit implementation does not require culture change in the larger FaxTech organization—and it may set an example of the benefits of collaborative knowledge creation to set the stage for a larger organization-wide transformation.
  3. Select projects appropriate for exploring emerging technologies. The project is an ideal opportunity to implement a small KM enterprise in FaxTech that can demonstrate intelligence product delivery to top management and can support critical decision making.
  4. Select projects with significant potential to build KM culture and discipline within the organization. The CI enterprise will develop reusable processes and tools that can be scaled up to support the larger organization; the lessons learned in implementation will be invaluable in planning for an organization-wide KM enterprise.

9.3.1 The Value Proposition

The CI value proposition must define the value of competitive intelligence.

The quantitative measures may be difficult to define; the financial return on CI investment measure, for example, requires a careful consideration of how the derived intelligence couples with strategy and impacts revenue gains. Kilmetz and Bridge define a top-level CI return on investment (ROI) metric that considers the time frame of the payback period (t, usually updated quarterly and accumulated to measure the long-term return on strategic decisions) and applies the traditional ROI formula, which subtracts the cost of the CI investment (C_CI+I, the initial implementation cost plus accumulating quarterly operations costs, in net present value) from the revenue gain [17]:

ROI_CI = Σ_t [ (P × Q) − C_CI+I ]_t

The expected revenue gain is estimated by the increase in sales (units sold, Q, multiplied by price, P) that is attributable to CI-induced decisions. Of course, the difficulty in defining such quantities is the issue of assuring that the gains are uniquely attributable to decisions made possible only by CI information [18].
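
A small worked example of the ROI_CI accumulation, with invented quarterly figures purely to show how the formula is applied.

```python
# Worked example of the ROI_CI formula (all figures are invented).
quarters = [
    # (units_sold_Q, price_P, cost_C_CI_plus_I) per quarter, in net present value
    (120, 1000.0, 150000.0),  # quarter 1 carries the implementation cost
    (150, 1000.0, 40000.0),
    (180, 1000.0, 40000.0),
]

roi_ci = sum((q * p) - c for q, p, c in quarters)
print(roi_ci)  # cumulative return attributable to CI-induced decisions
```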

In building the scorecard, the enterprise architect should seek the lessons learned from others, using sources such as the Society of Competitive Intelligence Professionals or the American Productivity and Quality Center.

9.3.2 The CI Business Process

The Society of Competitive Intelligence Professionals has defined a CI business cycle that corresponds to the intelligence cycle; the cycle differs by distinguishing primary and published source information, while eliminating the automated processing of technical intelligence sources. The five stages, or business processes, of this high-level business model include:

  1. Planning and direction. The cycle begins with the specific identification of management needs for competitive intelligence. Management defines the specific categories of competitors (companies, alliances) and threats (new products or services, mergers, market shifts, technology discontinuities) for focus and the specific issues to be addressed. The priorities of intelligence needed, routine reporting expectations, and schedules for team reporting enable the CI unit manager to plan specific tasks for analysts, establish collection and reporting schedules, and direct day-to-day operations.
  2. Published source collection. The collection of articles, reports, and financial data from open sources (Internet, news feeds, clipping services, commercial content providers) includes both manual searches by analysts and active, automated searches by software agents that explore (crawl) the networks and cue analysts to rank-ordered findings (a minimal ranking sketch in code follows this list). This collection provides broad, background knowledge of CI targets; the results of these searches provide cues to support deeper, more focused primary source collection.
  3. Primary source collection. The primary sources of deep competitor information are humans with expert knowledge; ethical collection process includes the identification, contact, and interview of these individuals. Such collections range from phone interviews, formal meetings, and consulting assignments to brief discussions with competitor sales representatives at trade shows. The results of all primary collections are recorded on standard format reports (date, source, qualifications, response to task requirement, results, further sources suggested, references learned) for subsequent analysis.
  4. Analysis and production. Once indexed and organized, the corpus of data is analyzed to answer the questions posed by the initial tasks. Collected information is placed in a framework that includes organizational, financial, and product-service models that allow analysts to estimate the performance and operations of the competitor and predict likely strategies and planned activities. This process relies on a synoptic view of the organized information, experience, and judgment. SMEs may be called in from within FaxTech or from the outside (consultants) to support the analysis of data and synthesis of models.
  5. Reporting. Once approved by the CI unit manager, these quantitative models and more qualitative estimative judgments of competitor strategies are published for presentation in a secure portal or for formal presentation to management. As a result of this reporting, management provides further refining direction and the cycle repeats.
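
A minimal sketch of the automated-search cueing in stage 2, assuming a hypothetical set of collected articles and a crude keyword-overlap score; a production crawler or agent would of course use richer relevance measures.

```python
# Rank-ordering collected open-source articles (hypothetical articles and task).
articles = {
    "competitor_q3_results.html": "Competitor posts record Q3 revenue growth",
    "trade_show_notes.txt":       "New product line announced at trade show",
    "unrelated_story.html":       "Local sports team wins championship",
}
task_keywords = {"competitor", "product", "revenue"}

def score(text):
    """Crude relevance score: count of tasked keywords present in the text."""
    return sum(word in text.lower() for word in task_keywords)

ranked = sorted(articles, key=lambda name: score(articles[name]), reverse=True)
print(ranked)  # the analyst is cued to the highest-scoring findings first
```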

9.3.4 The CI Unit Organizational Structure and Relationships

The CI unit manager accepts tasking from executive management, issues detailed tasks to the analytic team, and then reviews and approves results before release to management. The manager also manages the budget, secures consultants for collection or analysis support, manages special collections, and coordinates team training and special briefings by SMEs.

9.3.5 A Typical Operational Scenario

For each of the five processes, a number of use cases may be developed to describe specific actions that actors (CI team members or system components) perform to complete the process. In object-oriented design processes, the development of such use cases drives the design process by first describing the many ways in which actors interact to perform the business process [22]. A scenario or process thread provides a view of one completed sequence through a single or numerous use case(s) to complete an enterprise task. A typical crisis response scenario is summarized in Table 9.3 to illustrate the sequence of interactions between the actors (management, CI manager, deputy, knowledge-base manager and analysts, system, portal, and sources) to complete a quick response thread. The scenario can be further modeled by an activity diagram [23] that models the behavior between objects.

The development of the operational scenario also raises nonfunctional performance issues that are identified and defined, generally in parametric terms, for example (a sketch capturing such parameters in code follows this list):

  • Rate and volume of data ingested daily;
  • Total storage capacity of the online and offline archived holdings;
  • Access time for online and offline holdings;
  • Number of concurrent analysts, searches, and portal users;
  • Information assurance requirements (access, confidentiality, and attack rejection).
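
One way such parameters might be captured explicitly for the architecture description; every value below is an invented placeholder, not a requirement from the text.

```python
# Illustrative capture of nonfunctional parameters (every value is a placeholder).
nonfunctional_requirements = {
    "daily_ingest_volume_gb":       5,
    "online_storage_capacity_tb":   2,
    "offline_archive_capacity_tb":  20,
    "online_access_time_seconds":   2,
    "offline_access_time_minutes":  30,
    "max_concurrent_analysts":      10,
    "max_concurrent_searches":      50,
    "max_concurrent_portal_users":  200,
    "information_assurance": {
        "access_control":   "role-based",
        "confidentiality":  "encryption at rest and in transit",
        "attack_rejection": "intrusion detection on the portal",
    },
}
print(nonfunctional_requirements["max_concurrent_analysts"])
```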

9.3.6 CI System Abstraction

The purpose of use cases and narrative scenarios is to capture enterprise behavior and then to identify the classes of object-oriented design. The italicized text in the scenario identifies the actors, and the remaining nouns are candidates for objects (instantiated software classes). From these use cases, software designers can identify the objects of design, their attributes, and interactions. Based upon the use cases, object-oriented design proceeds to develop sequence diagrams that model messages passing between objects, state diagrams that model the dynamic behavior within each object, and object diagrams that model the static description of objects. The object encapsulates state attributes and provides services to manipulate the internal attributes.

 

Based on the scenario of the last section, the enterprise designer defines the class diagram (Figure 9.7) that relates the objects carrying the input CI requirements through the entire CI process to a summary of finished intelligence. This diagram does not include all objects; the objects presented illustrate those that acquire data related to specific competitors, and they are only a subset of the classes required to meet the full enterprise requirements defined earlier. (The objects in this diagram are included in the analysis package described in the next section.) The requirement object accepts new CI requirements for a defined competitor; requirements are specified in terms of essential elements of information (EEI), financial data, SWOT characteristics, and organization structure. In this object, key intelligence topics may be selected from predefined templates to specify intelligence requirements for a competitor or for a marketplace event [24]. The analyst translates the requirements to tasks in the task object; the task object generates search and collect objects that specify the terms for automated search and human collection from primary sources, respectively. The results of these activities generate data objects that organize and present accumulated evidence related to the corresponding search and collect objects.

The analyst reviews the acquired data, creating text reports and completing analysis templates (SWOT, EEI, financial) in the analysis object. Analysis entries are linked to the appropriate competitor in the competitor list and to the supporting evidence in data objects. As results are accumulated in the templates, the status (e.g., percentage of required information in template completed) is computed and reported by the status object. Summary of current intelligence and status are rolled up in the summary object, which may be used to drive the CI portal.
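
A skeletal rendering of the object relationships described above, using Python dataclasses; the attribute names and the status roll-up are simplified placeholders for the EEI/SWOT/financial templates mentioned in the text.

```python
# Skeletal sketch of the CI class relationships (attribute names are placeholders).
from dataclasses import dataclass, field
from typing import List

@dataclass
class Data:        # accumulated evidence tied to a search or collect activity
    source: str
    content: str

@dataclass
class Search:      # terms for automated open-source search
    terms: List[str]
    results: List[Data] = field(default_factory=list)

@dataclass
class Collect:     # tasking for human collection from primary sources
    contact_plan: str
    results: List[Data] = field(default_factory=list)

@dataclass
class Task:        # analyst translation of a requirement into activities
    description: str
    searches: List[Search] = field(default_factory=list)
    collects: List[Collect] = field(default_factory=list)

@dataclass
class Requirement:  # EEI, financial, SWOT, and organizational needs per competitor
    competitor: str
    eei: List[str]
    tasks: List[Task] = field(default_factory=list)

    def status(self) -> float:
        """Rough roll-up: share of tasks with any evidence attached."""
        done = [t for t in self.tasks
                if any(s.results for s in t.searches)
                or any(c.results for c in t.collects)]
        return len(done) / len(self.tasks) if self.tasks else 0.0

req = Requirement("CompetitorCo", ["pricing strategy"],
                  [Task("baseline open-source sweep",
                        searches=[Search(["CompetitorCo pricing"])])])
print(req.status())
```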

9.3.7 System and Technical Architecture Descriptions

The abstractions that describe functions and data form the basis for partitioning packages of software services and the system hardware configuration. The system architecture description includes a network hardware view (Figure 9.8, top) and a comparable view of the packaged software objects (Figure 9.8, bottom).

The enterprise technical architecture is described by the standards for commercial and custom software packages (e.g., the commercial and developed software components with versions, as illustrated in Table 9.4) to meet the requirements developed in the system model row of the matrix. Fuld & Company has published periodic reviews of software tools to support the CI process; these reviews provide a helpful evaluation of available commercial packages to support the CI enterprise [25]. The technical architecture is also described by the standards imposed on the implementing components—both software and hardware. These standards include general implementation standards [e.g., American National Standards Institute (ANSI), International Standards Organization (ISO), and Institute of Electrical and Electronics Engineers (IEEE)] and federal standards regulating workplace environments and protocols. The applicable standards are listed to identify applicability to various functions within the enterprise.

A technology roadmap should also be developed to project future transitions as new components are scheduled to be integrated and old components are retired. It is particularly important to plan for integration of new software releases and products to assure sustained functionality and compatibility across the enterprise.

10
Knowledge Management Technologies

IT has enabled the growth of organizational KM in business and government; it will continue to be the predominant influence on the progress in creating knowledge and foreknowledge within intelligence organizations.

10.1 Role of IT in KM

When we refer to technology, the application of science by the use of engineering principles to solve a practical problem, it is essential that we distinguish between three categories of technologies that all contribute to our ability to create and disseminate knowledge (Table 10.1). We may view these as three technology layers, with the basic computing materials sciences providing the foundation for technology applications of increasing complexity and scale in communications and computing.

10.4.1 Explicit Knowledge Combination Technologies

Future explicit knowledge combination technologies include those that transform explicit knowledge into useable forms and those that perform combination processes to create new knowledge.

  • Multimedia content-context tagged knowledge bases. Knowledge-base technology will support the storage of multimedia data (structured and unstructured) with tagging of both content and context to allow comprehensive searches for knowledge across heterogeneous sources.
  • Multilingual natural language. Global natural language technologies will allow accurate indexing, tagging, search, linking, and reasoning about multilingual text (and recognized human speech) at both the content level and the concept level. This technology will allow analysts to conduct multilingual searches by topic and concept at a global scale.
  • Integrated deductive-inductive reasoning. Data-fusion and data-mining technologies will become integrated to allow interactive deductive and inductive reasoning for structured and unstructured (text) data sources. Data-fusion technology will develop level 2 (situation) and level 3 (impact, or explanation) capabilities using simulations to represent complex and dynamic situations for comparison with observed situations.
  • Purposeful deductive-inductive reasoning. Agent-based intelligence will coordinate inductive (learning and generalization) and deductive (decision and detection) reasoning processes (as well as abductive explanatory reasoning) across unstructured multilingual natural language, common sense, and structured knowledge bases. This reasoning will be goal-directed based upon agent awareness of purpose, values, goals, and beliefs.
  • Automated ontology creation. Agent-based intelligence will learn the structure of content and context, automatically populating knowledge bases under configuration management by humans.

 

10.4.3 Knowledge-Based Organization Technologies

Technologies that support the socialization processes of tacit knowledge exchange will enhance the performance and effectiveness of organizations; these technologies will increasingly integrate intelligence agents into the organization as aids, mentors, and ultimately as collaborating peers.

  • Tailored naturalistic collaboration. Collaboration technologies will provide environments with automated capabilities that will track the context of activities (speech, text, graphics) and manage the activity toward defined goals. These environments will also recognize and adapt to individual personality styles, tailoring the collaborative process (and the mix of agents-humans) to the diversity of the human-team composition.
  • Intimate tacit simulations. Simulation and game technologies will enable human analysts to be immersed in the virtual physical, symbolic, and cognitive environments they are tasked to understand. These technologies will allow users to explore data, information, and complex situations in all three domains of reality to gain tacit experience and to be able to share the experience with others.
  • Human-like agent partners. Multiagent system technologies will enable the formation of agent communities of practice and teams—and the creation of human-agent organizations. Such hybrid organizations will enable new analytic cultures and communities of problem-solving.
  • Combined human-agent learning. Personal agent tutors, mentors, and models will shadow their human partners, share experiences and observations, and show what they are learning. These agents will learn to monitor subtle human cues about the capture and use of tacit knowledge in collaborative analytic processes.
  • Direct brain tacit knowledge. Direct brain biological-to-machine connections will allow monitors to provide awareness, tracking, articulation, and capture of tacit experiences to augment human cognitive performance.

10.5 Summary

KM technologies are built upon materials and ITs that enable the complex social (organizational) and cognitive processes of collaborative knowledge creation and dissemination to occur over large organizations, over massive scales of knowledge. Technologists, analysts, and developers of intelligence enterprises must monitor these fast-paced technology developments to continually reinvent the enterprise to remain competitive in the global competition for knowledge. This continual reinvention process requires a wise application of technology in three modes. The first mode is the direct adoption of technologies by upgrade and integration of COTS and GOTS products. This process requires the continual monitoring of industry standards, technologies, and the marketplace to project the lifecycle of products and forecast adoption transitions. The second application mode is adaptation, in which a commercial product component may be adapted for use by wrapping, modifying, and integrating with commercial or custom components to achieve a desired capability. The final mode is custom development of a technology unique to the intelligence application. Often, such technologies may be classified to protect the unique investment in, the capability of, and in some cases even the existence of the technology.

Technology is enabling, but it is not sufficient; intelligence organizations must also have the vision to apply these technologies while transforming the intelligence business in a rapidly changing world.

 

Notes on The Threat Closer to Home: Hugo Chavez and the War Against America

Michael Rowan is the author of The Threat Closer to Home: Hugo Chavez and the War Against America and is a political consultant for U.S. and Latin American leaders. He has advised former Bolivian president Jaime Paz Zamora and Costa Rican president Oscar Arias. Mr. Rowan has also counseled winning Democratic candidates in 30 U.S. states. He is a former president of the International Association of Political Consultants.

(2)

Hugo Chavez, the president of Venezuela, is a much more dangerous individual than the famously elusive leader of al-Qaeda. He has made the United States his sworn enemy, and the sad truth is that few people are really listening.

“I’m still a subversive,” Chavez has admitted. “I think the entire world should be subverted.”

 

Hugo Chavez to Jan James of the Associated Press, September 23, 2007

 

 

(4)

 

One cannot discount how much Castro’s aura has shaped Chavez’s thoughts and actions.

 

(5)

 

There are many who harbor bad intentions towards the United States, but only a few who possess the capability to do anything about it. Chavez is one of these few because:

 

His de facto dictatorship gives him absolute control over Venezuela’s military, oil production, and treasury.

He harbors oil reserves second only to those of Saudi Arabia; Venezuela’s annual windfall profits exceed the net worth of Bill Gates.

He has a strategic military and oil alliance with a major American foe and terrorism sponsor, the Islamic Republic of Iran.

He has more soldiers on active and reserve duty and more modern weapons – mostly from Russia and China – than any other nation in Latin America.

Fulfilling Castro’s dream, he has funded a Communist insurgency against the United States, effectively annexing Bolivia, Nicaragua, Dominica, and Ecuador as surrogate states, and is developing cells in dozens of countries to create new fronts in this struggle.

He is allied with the narcotics-financed guerrillas against the government of Colombia, which the United States supports in its war against drug trafficking.

He has numerous associations with terrorists, money launderers, kidnappers, and drug traffickers.

He has more hard assets (the Citgo oil company) and soft assets (Hollywood stars, politicians, lobbyists, and media connections) than any other foreign power.

 

 

(6)

 

Chavez longs for the era when there will be no liberal international order to constrain his dream of a worldwide “socialist” revolution: no World Bank, no International Monetary Fund, no Organization for Economic Cooperation and Development, no World Trade Organization, no international law, no economic necessity for modernization and globalization. And perhaps more important, he longs for the day when the United States no longer polices the world’s playing fields. Chavez has spent more than $100 billion trying to minimize the impact of each international institution on Latin America. He is clearly opposed to international cooperation that does not endorse the Cuba-Venezuela government philosophy.

 

(10)

 

According to reports from among its 2,400 former members, the FARC resembles a mafia crime gang more than a Communist guerrilla army, but Chavez disagrees, calling the FARC, “insurgent forces that have a political project.” They “are not terrorists, they are true armies… they must be recognized.”

 

(11)

 

Chavez’s goals in life are to complete Simon Bolivar’s dream to unite Latin America and Castro’s dream to communize it.

 

(13)

 

Since he was elected, Chavez’s public relations machinery has spent close to a billion dollars in the United States to convince Americans that he alone is telling the true story.

 

(14)

 

There are a number of influential Americans who have been attracted by Chavez’s money. These include the 1996 Republican vice-presidential candidate Jack Kemp, who has reaped large fees trying to sell Chavez’s oil to the U.S. government; Tom Boggs, one of the most powerful lobbyists in Washington D.C.; Giuliani Partners, the lobbying arm of the former New York mayor and presidential hopeful (principal lobbyists for Chavez’s CITGO oil company in Texas); former Massachusetts governor Mitt Romney’s Bain Associates, which prospered by handling Chavez’s oil and bond interests; and Joseph P. Kennedy II of Massachusetts, who advertises Chavez’s oil discounts to low-income Americans, a program that reaches more than a million American families (Kennedy and Chavez cast this program as nonpolitical philanthropy).

 

(19)

 

Chavez’s schoolteacher parents could not afford to raise all of their six children at home, so the two older boys, Adan and Hugo, were sent to live with their grandmother, Rosa Ines. Several distinguished Chavez-watchers, including Alvaro Vargas Llosa, have theorized that his being locked in closets at home and then sent away by his parents to grow up elsewhere constituted a seminal rejection that gave rise to what Vargas Llosa called Chavez’s “messianic inferiority complex” – his overarching yearning to be loved and his irrepressible need to act out.

(26)

Chavez began living the life of a Communist double agent. “During the day I’m a career military officer who does his job,” he told his lover Herma Marksman, “but at night I work on achieving the transformations this country needs.” His nights were filled with secret meetings of Communist subversives and co-conspirators, often in disguises, planning the armed overthrow of the government.

 

(27)

 

In 1979, he was transferred to Caracas to teach at his former military academy. It was the perfect perch from which to build a network of officers sympathetic to his revolutionary cause.

Chavez also expanded the circle of his ideological mentors. By far the most important of these was Douglas Bravo, an unreconstructed communist who, after détente, disobeyed Moscow’s orders to give up the armed struggle against the United States. Bravo was the leader of the Party of the Venezuelan Revolution (PVR) and the Armed Forces of National Liberation. Chavez actively recruited his military friends to the PVR, couching it in the rhetoric of Bolivarianism to make it more palatable to their sensibilities.

 

(32)

 

From 1981 to 1984, a determined Chavez began secretly converting his students at the military academy to co-conspirators; ironically his day job was to teach Venezuelan military history with an emphasis on promoting military professionalism and noninvolvement in politics.

 

(45)

 

Chavez emerged from jail in 1994 a hero to Venezuela’s poor. He had also, while imprisoned, assiduously courted the international left, who helped him build an impressive war-chest – including, it was recently revealed, $150,000 from the FARC guerrillas of Colombia.

 

(46)

 

John Maisto, the U.S. ambassador to Venezuela, at one point called Chavez a “terrorist” because of his coup attempt and denied him a visa to visit the United States. In reply, Chavez mocked Maisto by taking his Visa credit card from his wallet and waving it about, saying, “I already have a Visa!”

 

(48)

 

Corruption made a good campaign issue for Chavez, but when it came time to do something about it, he balked. Chavez initially appointed Jesus Urdaneta – one of the four saman tree oath takers – as anticorruption czar. But Urdaneta was too energetic and effective for the president: within five months he had identified forty cases of corruption within Chavez’s own administration. Chavez refused to back his czar, who was eventually pushed out of office by the very people he was investigating. Chavez did nothing to save him.

 

In 1999 Chavez started a giveaway project called “Plan Bolivar 2000.” Implemented by Chavez loyalists organized in groups known as Bolivarian Circles, the project was modeled after the Communist bloc committees in Castro’s Cuba. The plan was basically a social welfare program that mirrored the populist ethic…. In eighteen months, Bolivar 2000 had become so corrupt that it had to be disbanded.

 

(49)

 

Independent studies estimate that the amounts taken from Venezuelan poverty and development funds by middlemen, brokers, and subcontractors – all of whom charge an “administrative” cost for passing on the funds – range as high as 80 percent to 90 percent. By contrast, the U.S. government, the World Bank, nongovernmental organizations, and international charities limit their administrative costs to 20 percent of project funds; the Nobel Peace Prize-winning Doctors Without Borders, for example, spends only 16 percent on administration.

 

(52)

 

Between 1999 and 2009, Chavez spent some 20,000 hours on television.

 

(69)

 

Hugo Chavez is implementing a sophisticated oil war against the United States. To understand this you have to look back to 1999, when he asked the Venezuelan Congress for emergency executive powers and got them, whereupon he consolidated government power to his advantage. His big move was to take full control over the national oil company PDVSA. Chavez replaced PDVSA’s directors and managers with military or political loyalists, many of whom knew little to nothing about the oil business. This action rankled the company’s professional and technical employees – some 50,000 of them – who enjoyed the only true meritocracy in the country. Citgo… later received similar treatment.

 

Chavez in effect demodernized and de-Americanized PDVSA, which had adopted organizational efficiency cultures similar to those of its predecessors ExxonMobil and Shell, by claiming that they were ideologically incorrect. Chavez compared this to Haiti’s elimination of French culture under Toussaint L’Ouverture in the early 1800s.

 

The president’s effort to dumb down the business was evident early on. In 1999 Chavez fired Science Applications International Corporation (known as SAIC), an enormous U.S.-based global information technology firm that had served as PDVSA’s back office since 1995 (as it had for British Petroleum and other energy companies).

 

SAIC appealed to an international court and got a judgement against Chavez for stealing SAIC’s knowledge without compensation. Chavez ignored the judgement, refusing to pay “one penny”.

 

Stripped of SAIC technology and thousands of oil professionals who quit out of frustration, PDVSA steadily lost operational capacity from 1999 to 2001. Well maintenance suffered; production investment was slashed; oil productivity declined; environmental standards were ignored; and safety accidents proliferated. After the 2002 strike that led to Chavez’s brief removal from power, PDVSA sacked some 18,000 more of its knowledge workers. Its production fell to 2.4 million barrels per day.

 

(68)

 

After Venezuela’s 2006 presidential election, Chavez…told three American oil companies – ExxonMobil, ConocoPhillips, and Chevron – to turn over 60 percent of their heavy oil exploration [which they had spent a decade and nearly $20 billion developing] or leave Venezuela.

 

(72)

 

Oil has caused a massive shift in the wealth of nations. All told, $12 trillion has been transferred from the oil consumers to the oil producers since 2002. This is a very large figure – it is comparable to the 2006 GDP of the United States – and it has contributed greatly to our unprecedented trade deficit; a weakening of the dollar; and the weakness of the U.S. financial system in surviving the housing mortgage crisis.

 

Two decades ago, private companies controlled half the world’s oil reserves, but today they only control 13 percent… While many Americans believe that big oil is behind the high prices at the gas pump, the fact is that the national oil companies controlled by Chavez of Venezuela, Ahmadinejad of Iran, and Putin of Russia are the real culprits.

 

(73)

 

When Chavez’s plane first landed in Havana in 1994, Fidel Castro greeted him at the airport. What made Hugo Chavez important to Castro then was the same thing that makes him important to the United States now: oil. Castro’s plan to weaken America – which he had to shelve when the Soviet Union collapsed and Cuba lost its USSR oil and financial subsidy – was dusted off.

The Chavez-Castro condominium was a two-way street. Chavez soon began delivering from 50,000 to 90,000 barrels of oil per day to Castro, a subsidy eventually worth $3 billion to $4 billion per year, which far exceeded the sugar subsidy Castro once received from the Soviet Union until Gorbachev ended it around 1990. Castro used the huge infusion of Chavez’s cash to solidify his absolute control in Cuba and to crack down on political dissidents.

 

 

(79)

 

Chavez’s predatory, undemocratic, and destabilizing actions are not limited to Venezuela.

 

Chavez is striving to remake Latin America in his own image, and for his own purposes – purposes that mirror Fidel Castro’s half-aborted but never abandoned plans for hemispheric revolution hatched half a century ago.

 

(81)

 

Hugo Chavez sees himself as leading the revolutionary charge that Fidel Castro always wanted to mount but was never able to spread beyond the shores of the island prison he created in the Caribbean. Yet four decades after taking power, Castro found a surrogate, a right arm who could carry on the work that he could not.

 

(82)

 

[Chavez] routinely uses oil to bribe Latin American states into lining up against the United States, either by subsidizing oil in the surrogate state or by using oil to interfere in other countries’ elections.

 

For instance, in 1999 Chavez created Petrocaribe, a company that provides oil discounts with delayed payments to thirteen Caribbean nations. It was so successful at fulfilling its real purpose – buying influence and loyalty – that two years later Chavez created PetroSur, which does the same for twenty Central and South American nations, at an annual cost to Venezuela’s treasury of an estimated $1 billion.

 

(83)

 

From 2005 to 2007 alone, Chavez gave away a total of $39 billion in oil and cash: $9.9 billion to Argentina, $7.5 billion to Cuba, $4.9 billion to Ecuador, and $4.9 billion to Nicaragua were the largest sums Chavez gave…

 

At a time when U.S. influence is waning – in part owing to Washington’s preoccupation with Iraq and the Middle East – Chavez has filled the void. The United States provides less than $1 billion in foreign economic aid to the entire region, a figure that rises to only $1.6 billion… Chavez, meanwhile, spends nearly $9 billion in the region every single year. And his money is always welcome because it comes with no strings. The World Bank and IMF, by contrast, require concomitant reforms – for instance, efforts to fight corruption, drug trafficking, and money laundering – in return for grants and loans.

Consequently, over the course of a handful of years, virtually all the Latin American countries have wound up dependent on Venezuela’s oil or money or both. These include not just resource-poor nations; in Latin America only Mexico and Peru are fully independent of Chavez’s money.

One consequence: at the Organization of American States (OAS), which serves as a mini-United Nations for Latin America, Venezuela has assumed the position of the “veto” vote that once belonged to the United States.

 

(84)

Since Chavez has been president of Venezuela, the OAS has not passed one substantive resolution supported by the United States when Chavez was on the opposite side.

In all, since coming to power in 1999, Chavez has spent or committed an estimated $110 billion – some say twice the amount needed to eliminate poverty in Venezuela forever – in more than thirty countries to advance his anti-American agenda. Since 2005, Chavez’s total foreign aid budget for Latin America has been more than $50 billion – much more than the amount of U.S. foreign aid for the region over the same period.

Many of these expenditures have been hidden from the Venezuelan public in secret off-budget slush funds. The result is that Chavez is now, by any measure, the most powerful figure in Latin America.

(85)

During Morales’s first year in office, 2006, Chavez contributed a whopping $1 billion in aid to Bolivia (equivalent to 12 percent of the country’s GDP). He also provided access to one of Venezuela’s presidential jets, sent a forty-soldier personal guard to accompany Morales at all times, subsidized the pay of Bolivia’s military, and paid to send thousands of Cuban doctors to Bolivia’s barrio health clinics.

(86)

After his political success in Bolivia, Chavez has aggressively supported every anti-American presidential candidate in the region. U.S. policymakers console themselves by claiming that Chavez’s favorites have mostly been defeated by pro-American centrists. The truth is more complex. Chavez came close to winning every one of those contests, and lost only when he overplayed his hand. More troubling, U.S. influence and prestige in Latin America are at perhaps their lowest ebb ever; today, being considered America’s ally is the political kiss of death.

 

(91)

 

Since turning unabashedly criminal, the FARC has imported arms, exported drugs, recruited minors, kidnapped thousands for ransom, executed hostages, hijacked planes, planted land mines, operated an extortion and protection racket in peasant communities, committed atrocities against innocent civilians, and massacred farmers as traitors…

 

A long-held ambition of the FARC’s leadership is to have the group officially recognized as a belligerent force, a legitimate army in rebellion. Such a designation – conferred by individual nations and under international law – would give the FARC rights normally accorded only to sovereign powers.

(93)

Uribe, a calm and soft-spoken attorney, set out methodically to finish what Pastrana had begun.

 

To Chavez, any friend of the United States is his enemy, and any enemy of a friend of the United States is his friend – even a terrorist organization working to destabilize one of his country’s most important neighbors.

 

(94)

The relationship [between Chavez and the FARC] began more than a decade and a half ago, in the wake of Chavez’s failed coup. In 1992, the FARC gave a jailed Chavez $150,000, money that launched him to the presidency.

(95)

Perhaps the most sinister aspect of Chavez’s relationship with the FARC is the help he has provided to maximize its cocaine sales to the United States and Europe. British journalist John Carlin, who writes for The Guardian, a newspaper generally supportive of Chavez, secured interviews with several of the 2,400 FARC guerrillas who deserted the group in 2007. One of his subjects told him that “the guerrillas have a non-aggression pact with the Venezuelan military. The Venezuelan government lets FARC operate freely because they share the same left-wing, Bolivarian ideals, and because FARC bribes their people.” Without cocaine revenues, the FARC would disappear, its former members assert. “If it were not for cocaine, the fuel that feeds the Colombian war, FARC would long ago have disbanded.”

(104)

Iran and Venezuela are working together to drive up the price of oil in hopes of crippling the American economy and enhancing their hegemonies in the Middle East and Latin America. They are using their windfall petro-revenues to finance a simmering war – sometimes cold, sometimes hot, sometimes covert, sometimes overt – against the United States.

(105)

As Chavez told Venezuelans repeatedly, Saddam’s fate was also what he feared for himself.

 

(119)

Hugo Chavez’s first reaction after the attack on the camp of narcoterrorist Raul Reyes was to accuse Colombia of behaving like Israel. “We’re not going to allow an Israel in the region,” he said.

 

Actually the parallel is not far off. Like Colombia, Israel is a state that wishes to live in peace with its neighbors, but they insist on destroying it. Israel’s fondest wish would be for the Palestinians to be capable of building a peaceful and prosperous nation with which Israel could establish normal relations.

 

(123)

American officials have also submitted some 130 written requests for basic biographical or immigration-related information, such as entry and exit dates into and out of Venezuela, for suspected terrorists. Not one of the requests has generated a substantive response.

(126)

***

 

Michael Rowan talked about the book he co-wrote, The Threat Closer to Home: Hugo Chavez and the War Against America, on C-SPAN. Former U.S. Ambassador to Venezuela Otto Reich joined him to comment on the book. Ray Walser moderated. Discussion topics included the global geopolitical impact of Venezuela’s decreasing economic and personal freedoms and what the U.S. can do. Then both men responded to questions from members of the audience.

Notes on Intelligence Analysis: A Target-Centric Approach

A major contribution of the 9/11 Commission and the Iraqi WMD Commission was their focus on a failed process, specifically on that part of the process where intelligence analysts interact with their policy customers.

“Thus, this book has two objectives:

The first objective is to redefine the intelligence process to help make all parts of what is commonly referred to as the “intelligence cycle” run smoothly and effectively, with special emphasis on both the analyst-collector and the analyst-customer relationships.

The second goal is to describe some methodologies that make for better predictive analysis.”

 

“An intelligence process should accomplish three basic tasks. First, it should make it easy for customers to ask questions. Second, it should use the existing base of intelligence information to provide immediate responses to the customer. Third, it should manage the expeditious creation of new information to answer remaining questions. To do these things, intelligence must be collaborative and predictive: collaborative to engage all participants while making it easy for customers to get answers; predictive because intelligence customers above all else want to know what will happen next.”

“the target-centric process outlines a collaborative approach for intelligence collectors, analysts, and customers to operate cohesively against increasingly complex opponents. We cannot simply provide more intelligence to customers; they already have more information than they can process, and information overload encourages intelligence failures. The community must provide what is called “actionable intelligence”—intelligence that is relevant to customer needs, is accepted, and is used in forming policy and in conducting operations.”

“The second objective is to clarify and refine the analysis process by drawing on existing prediction methodologies. These include the analytic tools used in organizational planning and problem solving, science and engineering, law, and economics. In many cases, these are tools and techniques that have endured despite dramatic changes in information technology over the past fifty years. All can be useful in making intelligence predictions, even in seemingly unrelated fields.”

“This book, rather, is a general guide, with references to lead the reader to more in-depth studies and reports on specific topics or techniques. The book offers insights that intelligence customers and analysts alike need in order to become more proactive in the changing world of intelligence and to extract more useful intelligence.”

“The common theme of these and many other intelligence failures discussed in this book is not the failure to collect intelligence. In each of these cases, the intelligence had been collected. Three themes are common in intelligence failures: failure to share information, failure to analyze collected material objectively, and failure of the customer to act on intelligence.”

 

“though progress has been made in the past decade, the root causes for the failure to share remain, in the U.S. intelligence community as well as in almost all intelligence services worldwide:

  • Sharing requires openness. But any organization that requires secrecy to perform its duties will struggle with and often reject openness. Most governmental intelligence organizations, including the U.S. intelligence community, place more emphasis on secrecy than on effectiveness. The penalty for producing poor intelligence usually is modest. The penalty for improperly handling classified information can be career-ending. There are legitimate reasons not to share; the U.S. intelligence community has lost many collection assets because details about them were too widely shared. So it comes down to a balancing act between protecting assets and acting effectively in the world.

  • Experts on any subject have an information advantage, and they tend to use that advantage to serve their own agendas. Collectors and analysts are no different. At lower levels in the organization, hoarding information may have job security benefits. At senior levels, unique knowledge may help protect the organizational budget.

  • Finally, both collectors of intelligence and analysts find it easy to be insular. They are disinclined to draw on resources outside their own organizations. Communication takes time and effort. It has long-term payoffs in access to intelligence from other sources, but few short-term benefits.”

 

Failure to Analyze Collected Material Objectively

“In each of the cases cited at the beginning of this introduction, intelligence analysts or national leaders were locked into a mindset—a consistent thread in analytic failures. Falling into the trap that Louis Pasteur warned about in the observation that begins this chapter, they believed because, consciously or unconsciously, they wished it to be so.”

 

 

 

  • Ethnocentric bias involves projecting one’s own cultural beliefs and expectations on others. It leads to the creation of a “mirror-image” model, which looks at others as one looks at oneself, and to the assumption that others will act “rationally” as rationality is defined in one’s own culture.
  • Wishful thinking involves excessive optimism or avoiding unpleasant choices in analysis.
  • Parochial interests cause organizational loyalties or personal agendas to affect the analysis process.
  • Status quo biases cause analysts to assume that events will proceed along a straight line. The safest weather prediction, after all, is that tomorrow’s weather will be like today’s.
  • Premature closure results when analysts make early judgments about the answer to a question and then, often because of ego, defend the initial judgments tenaciously. This can lead the analyst to select (usually without conscious awareness) subsequent evidence that supports the favored answer and to reject (or dismiss as unimportant) evidence that conflicts with it.

 

Summary

 

Intelligence, when supporting policy or operations, is always concerned with a target. Traditionally, intelligence has been described as a cycle: a process starting from requirements, to planning or direction, collection, processing, analysis and production, dissemination, and then back to requirements. That traditional view has several shortcomings. It separates the customer from the process and intelligence professionals from one another. A gap exists in practice between dissemination and requirements. The traditional cycle is useful for describing structure and function and serves as a convenient framework for organizing and managing a large intelligence community. But it does not describe how the process works or should work.”

 

 

 

Intelligence is in practice a nonlinear and target-centric process, operated by a collaborative team of analysts, collectors, and customers collectively focused on the intelligence target. The rapid advances in information technology have enabled this transition.

All significant intelligence targets of this target-centric process are complex systems in that they are nonlinear, dynamic, and evolving. As such, they can almost always be represented structurally as dynamic networks—opposing networks that constantly change with time. In dealing with opposing networks, the intelligence network must be highly collaborative.

 

“Historically, however, large intelligence organizations, such as those in the United States, provide disincentives to collaboration. If those disincentives can be removed, U.S. intelligence will increasingly resemble the most advanced business intelligence organizations in being both target-centric and network-centric.”

 

 

“Having defined the target, the first question to address is, What do we need to learn about the target that our customers do not already know? This is the intelligence problem, and for complex targets, the associated intelligence issues are also complex. ”

 

Chapter 4

Defining the Intelligence Issue

A problem well stated is a problem half solved.

Inventor Charles Franklin Kettering

“all intelligence analysis efforts start with some form of problem definition.”

“The initial guidance that customers give analysts about an issue, however, almost always is incomplete, and it may even be unintentionally misleading.”

“Therefore, the first and most important step an analyst can take is to understand the issue in detail. He or she must determine why the intelligence analysis is being requested and what decisions the results will support. The success of analysis depends on an accurate issue definition. As one senior policy customer noted in commenting on intelligence failures, “Sometimes, what they [the intelligence officers] think is important is not, and what they think is not important, is.”

 

“The poorly defined issue is so common that it has a name: the framing effect. It has been described as “the tendency to accept problems as they are presented, even when a logically equivalent reformulation would lead to diverse lines of inquiry not prompted by the original formulation.”

 

 

“veteran analysts go about the analysis process quite differently than do novices. At the beginning of a task, novices tend to attempt to solve the perceived customer problem immediately. Veteran analysts spend more time thinking about it to avoid the framing effect. They use their knowledge of previous cases as context for creating mental models to give them a head start in addressing the problem. Veterans also are better able to recognize when they lack the necessary information to solve a problem, in part because they spend enough time at the beginning, in the problem definition phase. In the case of the complex problems discussed in this chapter, issue definition should be a large part of an analyst’s work.

Issue definition is the first step in a process known as structured argumentation.”

 

 

“structured argumentation always starts by breaking down a problem into parts so that each part can be examined systematically.”

 

Statement of the Issue

 

In the world of scientific research, the guidelines for problem definition are that the problem should have “a reasonable expectation of results, believing that someone will care about your results and that others will be able to build upon them, and ensuring that the problem is indeed open and underexplored.” Intelligence analysts should have similar goals in their profession. But this list represents just a starting point. Defining an intelligence analysis issue begins with answering five questions:

 

When is the result needed? Determine when the product must be delivered. (Usually, the customer wants the report yesterday.) In the traditional intelligence process, many reports are delivered too late—long after the decisions have been made that generated the need—in part because the customer is isolated from the intelligence process… The target-centric approach can dramatically cut the time required to get actionable intelligence to the customer because the customer is part of the process.”

 

Who is the customer? Identify the intelligence customers and try to understand their needs. The traditional process of communicating needs typically involves several intermediaries, and the needs inevitably become distorted as they move through the communications channels.

 

What is the purpose? Intelligence efforts usually have one main purpose. This purpose should be clear to all participants when the effort begins and also should be clear to the customer in the result…Customer involvement helps to make the purpose clear to the analyst.”

 

 

What form of output, or product, does the customer want? Written reports (now in electronic form) are standard in the intelligence business because they endure and can be distributed widely. When the result goes to a single customer or is extremely sensitive, a verbal briefing may be the form of output.”

 

“Studies have shown that customers never read most written intelligence. Subordinates may read and interpret the report, but the message tends to be distorted as a result. So briefings or (ideally) constant customer interaction with the intelligence team during the target-centric process helps to get the message through.”

 

What are the real questions? Obtain as much background knowledge as possible about the problem behind the questions the customer asks, and understand how the answers will affect organizational decisions. The purpose of this step is to narrow the problem definition. A vaguely worded request for information is usually misleading, and the result will almost never be what the requester wanted.”

 

Be particularly wary of a request that has come through several “nodes” in the organization. The layers of an organization, especially those of an intelligence bureaucracy, will sometimes “load” a request as it passes through with additional guidance that may have no relevance to the original customer’s interests. A question that travels through several such layers often becomes cumbersome by the time it reaches the analyst.

 

“The request should be specific and stripped of unwanted excess. ”

 

“The time spent focusing the request saves time later during collection and analysis. It also makes clear what questions the customer does not want answered—and that should set off alarm bells, as the next example illustrates.”

 

“After answering these five questions, the analyst will have some form of problem statement. On large (multiweek) intelligence projects, this statement will itself be a formal product. The issue definition product helps explain the real questions and related issues. Once it is done, the analyst will be able to focus more easily on answering the questions that the customer wants answered.”

 

The Issue Definition Product

 

“When the final intelligence product is to be a written report, the issue definition product is usually in précis (summary, abstract, or terms of reference) form. The précis should include the problem definition or question, notional results or conclusions, and assumptions. For large projects, many intelligence organizations require the creation of a concept paper or outline that provides the stakeholders with agreed terms of reference in précis form.”
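As a rough sketch only, the précis elements named above (the question, notional conclusions, assumptions) and the five earlier questions (customer, deadline, output form) can be captured in a simple record. The field names and the sample content below are invented for illustration and are not drawn from the book:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class IssueDefinition:
        """Précis-style issue definition product (terms of reference)."""
        question: str                      # the problem definition or question
        customer: str                      # who will act on the answer
        deadline: str                      # when the result is needed
        output_form: str                   # written report, briefing, etc.
        notional_conclusions: List[str] = field(default_factory=list)
        assumptions: List[str] = field(default_factory=list)

    # Hypothetical example, for illustration only:
    tor = IssueDefinition(
        question="How will Cartel X respond to increased port inspections?",
        customer="Regional policy staff",
        deadline="2 weeks",
        output_form="briefing",
        notional_conclusions=["Shift to overland routes is the most likely response"],
        assumptions=["Current leadership of Cartel X remains in place"],
    )
    print(tor.question)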

 

“Whether the précis approach or the notional briefing is used, the issue definition should conclude with an issue decomposition view.”

 

Issue Decomposition

 

“taking a seemingly intractable problem and breaking it into a series of manageable subproblems.”

 

 

“Glenn Kent of RAND Corporation uses the name strategies-to-task for a similar breakout of U.S. Defense Department problems. Within the U.S. intelligence community, it is sometimes referred to as problem decomposition or “decomposition and visualization.”

 

 

 

“Whatever the name, the process is simple: Deconstruct the highest level abstraction of the issue into its lower-level constituent functions until you arrive at the lowest level of tasks that are to be performed or subissues to be dealt with. In intelligence, the deconstruction typically details issues to be addressed or questions to be answered. Start from the problem definition statement and provide more specific details about the problem.”

 

“The process defines intelligence needs from the top level to the specific task level via taxonomy—a classification system in which objects are arranged into natural or related groups based on some factor common to each object in the group.”

 

“At the top level, the taxonomy reflects the policymaker’s or decision maker’s view and reflects the priorities of that customer. At the task level, the taxonomy reflects the view of the collection and analysis team. These subtasks are sometimes called key intelligence questions (KIQs) or essential elements of information (EEIs).”

 

“Issue decomposition follows the classic method for problem solving. It results in a requirements, or needs, hierarchy that is widely used in intelligence organizations. ”

 

it is difficult to evaluate how well an intelligence organization is answering the question, “What is the political situation in Region X?” It is much easier to evaluate the intelligence unit’s performance in researching the transparency, honesty, and legitimacy of elections, because these are very specific issues.
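To make the hierarchy concrete, here is a minimal sketch in Python built around the Region X election example above. Everything below the election subissues is hypothetical, added only to show what the task level (the KIQs/EEIs) might look like:

    # Minimal issue-decomposition (requirements hierarchy) sketch.
    # Each node is an issue; children are its subissues or task-level questions.
    decomposition = {
        "What is the political situation in Region X?": {
            "How transparent are the upcoming elections?": {},
            "How honest are the upcoming elections?": {},
            "How legitimate are the upcoming elections?": {
                # lower-level tasks (hypothetical, for illustration only)
                "Which observer groups will be allowed access?": {},
                "What share of the electorate views the process as fair?": {},
            },
        }
    }

    def walk(node: dict, depth: int = 0) -> None:
        """Print the hierarchy from the top-level issue down to specific tasks."""
        for issue, subissues in node.items():
            print("  " * depth + "- " + issue)
            walk(subissues, depth + 1)

    walk(decomposition)

Swapping the nested dictionary for a graph structure would permit the cross-links that the complex (network) decomposition discussed later in this chapter calls for.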

 

“Obviously there can be several different issues associated with a given intelligence target or several different targets associated with a given issue.”

 

Complex Issue Decomposition

 

We have learned that the most important step in the intelligence process is to understand the issue accurately and in detail. Equally true, however, is that intelligence problems today are increasingly complex—often described as nonlinear, or “wicked.” They are dynamic and evolving, and thus their solutions are, too. This makes them difficult to deal with—and almost impossible to address within the traditional intelligence cycle framework. A typical example of a wicked issue is that of a drug cartel—the cartel itself is dynamic and evolving and so are the questions being posed by intelligence consumers who have an interest in it.”

 

“A typical real-world customer’s issue today presents an intelligence officer with the following challenges:

 

It represents an evolving set of interlocking issues and constraints.

“There are many stakeholders—people who care about or have something at stake in how the issue is resolved. (Again, this makes the problem-solving process a fundamentally social one, in contrast to the antisocial traditional intelligence cycle.)”

 

The constraints on the solution, such as limited resources and political ramifications, change over time. The target is constantly changing, as the Escobar example illustrates, and the customers (stakeholders) change their minds, fail to communicate, or otherwise change the rules of the game.”

 

Because there is no final issue definition, there is no definitive solution. The intelligence process often ends when time runs out, and the customer must act on the most currently available information.”

 

“Harvard professor David S. Landes summarized these challenges nicely when he wrote, “The determinants of complex processes are invariably plural and interrelated.” Because of this—because complex or wicked problems are an evolving set of interlocking issues and constraints, and because the introduction of new constraints cannot be prevented—the decomposition of a complex problem must be dynamic; it will change with time and circumstances.”

 

 

“As intelligence customers learn more about the targets, their needs and interests will shift.

Ideally, a complex issue decomposition should be created as a network because of the interrelationship among the elements.

 

 

Although the hierarchical decomposition approach may be less than ideal for complex problems, it works well enough if it is constantly reviewed and revised during the analysis process. It allows analysts to define the issue in sufficient detail and with sufficient accuracy so that the rest of the process remains relevant. There may be redundancy in a linear hierarchy, but the human mind can usually recognize and deal with the redundancy. To keep the decomposition manageable, analysts should continue to use the hierarchy, recognizing the need for frequent revisions, until information technology comes up with a better way.

 

 

 

Structured Analytic Methodologies for Issue Definition

 

Throughout the book we discuss a class of analytic methodologies that are collectively referred to as structured analytic methodologies or SATs. ”

 

 

“a relevancy check needs to be done. To be “key,” an assumption must be essential to the analytic reasoning that follows it. That is, if the assumption turns out to be invalid, then the conclusions also probably are invalid. CIA’s Tradecraft Primer identifies several questions that need to be asked about key assumptions:

 

How much confidence exists that this assumption is correct?

What explains the degree of confidence in the assumption?

What circumstances or information might undermine this assumption?

Is a key assumption more likely a key uncertainty or key factor?

Could the assumption have been true in the past but less so now?

If the assumption proves to be wrong, would it significantly alter the analytic line? How?

Has this process identified new factors that need further analysis?”
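One way to keep such a check disciplined is to record each key assumption together with answers to these questions. The sketch below is a minimal illustration; the field names and the sample entry are invented and not drawn from the Tradecraft Primer:

    from dataclasses import dataclass

    @dataclass
    class KeyAssumption:
        """One entry in a key assumptions check (structure invented for illustration)."""
        statement: str
        confidence: str          # how much confidence exists that this is correct
        confidence_basis: str    # what explains that degree of confidence
        could_undermine: str     # circumstances or information that might undermine it
        impact_if_wrong: str     # how the analytic line would change if it proves wrong
        still_key: bool          # relevancy check: do the conclusions depend on it?

    # Hypothetical example entry:
    a = KeyAssumption(
        statement="The opposing service still relies mainly on HUMINT collection",
        confidence="moderate",
        confidence_basis="Consistent reporting over the past two years",
        could_undermine="Evidence of new investment in technical collection",
        impact_if_wrong="The collection-threat portion of the assessment would not hold",
        still_key=True,
    )
    print(f"Key assumption ({a.confidence} confidence): {a.statement}")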

 

Example: Defining the Counterintelligence Issue

 

Counterintelligence (CI) in government usually is thought of as having two subordinate problems: security (protecting sources and methods) and catching spies (counterespionage).

 

 

If the issue is defined this way—security and counterespionage—the response in both policy and operations is defensive. Personnel background security investigations are conducted. Annual financial statements are required of all employees. Profiling is used to detect unusual patterns of computer use that might indicate computer espionage. Cipher-protected doors, badges, personal identification numbers, and passwords are used to ensure that only authorized persons have access to sensitive intelligence. The focus of communications security is on denial, typically by encryption. Leaks of intelligence are investigated to identify their source.

 

But whereas the focus on security and counterespionage is basically defensive, the first rule of strategic conflict is that the offense always wins. So, for intelligence purposes, you’re starting out on the wrong path if the issue decomposition starts with managing security and catching spies.

 

A better issue definition approach starts by considering the real target of counterintelligence: the opponent’s intelligence organization. Good counterintelligence requires good analysis of the hostile intelligence services. As we will see in several examples later in this book, if you can model an opponent’s intelligence system, you can defeat it. So we start with the target as the core of the problem and begin an issue decomposition.

 

If the counterintelligence issue is defined in this fashion, then the counterintelligence response will be forward-leaning and will focus on managing foreign intelligence perceptions through a combination of covert action, denial, and deception. The best way to win the CI conflict is to go on the offensive (model the target, anticipate the opponent’s actions, and defeat him or her). Instead of denying information to the opposing side’s intelligence machine, for example, you feed it false information that eventually degrades the leadership’s confidence in its intelligence services.

 

To do this, one needs a model of the opponent’s intelligence system that can be subjected to target-centric analysis, including its communications channels and nodes, its requirements and targets, and its preferred sources of intelligence.

 

Summary

Before beginning intelligence analysis, the analyst must understand the customer’s issue. This usually involves close interaction with the customer until the important issues are identified. The problem then has to be deconstructed in an issue decomposition process so that collection, synthesis, and analysis can be effective.”

 

All significant intelligence issues, however, are complex and nonlinear. The complex problem is a dynamic set of interlocking issues and constraints with many stakeholders and no definitive solution. Although the linear issue decomposition process is not an optimal way to approach such problems, it can work if it is reviewed and updated frequently during the analysis process.

 

 

“Issue definition is the first step in a process known as structured argumentation. As an analyst works through this process, he or she collects and evaluates relevant information, fitting it into a target model (which may or may not look like the issue decomposition); this part of the process is discussed in chapters 5–7. The analyst identifies information gaps in the target model and plans strategies to fill them. The analysis of the target model then provides answers to the questions posed in the issue definition process. The next chapter discusses the concept of a model and how it is analyzed.”

 

Chapter 5

Conceptual Frameworks for Intelligence Analysis

 

“If we are to think seriously about the world, and act effectively in it, some sort of simplified map of reality . . . is necessary.”

Samuel P. Huntington, The Clash of Civilizations and the Remaking of World Order

 

 

“Balance of power,” for example, was an important conceptual framework used by policymakers during the Cold War. A different conceptual framework has been proposed for assessing the influence that one country can exercise over another.”

 

Analytic Perspectives—PMESII

 

In chapter 2, we discussed the instruments of national power—an actions view that defines the diplomatic, information, military, and economic (DIME) actions that executives, policymakers, and military or law enforcement officers can take to deal with a situation.

 

The customer of intelligence may have those four “levers” that can be pulled, but intelligence must be concerned with the effects of pulling those levers. Viewed from an effects perspective, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII.

 

Political. Describes the distribution of responsibility and power at all levels of governance—formally constituted authorities, as well as informal or covert political powers. (Who are the tribal leaders in the village? Which political leaders have popular support? Who exercises decision-making or veto power in a government, insurgent group, commercial entity, or criminal enterprise?)

 

Military. Explores the military and/or paramilitary capabilities or other ability to exercise force of all relevant actors (enemy, friendly, and neutral) in a given region or for a given issue. (What is the force structure of the opponent? What weaponry does the insurgent group possess? What is the accuracy of the rockets that Hamas intends to use against Israel? What enforcement mechanisms are drug cartels using to protect their territories?)

 

Economic. Encompasses individual and group behaviors related to producing, distributing, and consuming resources. (What is the unemployment rate? Which banks are supporting funds laundering? What are Egypt’s financial reserves? What are the profit margins in the heroin trade?)

 

Social. Describes the cultural, religious, and ethnic makeup within an area and the beliefs, values, customs, and behaviors of society members. (What is the ethnic composition of Nigeria? What religious factions exist there? What key issues unite or divide the population?)

Infrastructure. Details the composition of the basic facilities, services, and installations needed for the functioning of a community, business enterprise, or society in an area. (What are the key modes of transportation? Where are the electric power substations? Which roads are critical for food supplies?)

 

Information. Explains the nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information. (How much access does the local population have to news media or the Internet? What are the cyber attack and defense capabilities of the Saudi government? How effective would attack ads be in Japanese elections?)

 

The typical intelligence problem seldom must deal with only one of these factors or systems. Complex issues are likely to involve them all. The events of the Arab Spring in 2011, the Syrian uprising that began that year, and the Ukrainian crisis of 2014 involved all of the PMESII factors. But PMESII is also relevant in issues that are not necessarily international. Law enforcement must deal with them all (in this case, “military” refers to the use of violence or armed force by criminal elements).
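For a quick check that an assessment has considered all six factors, the PMESII view (and the DIME levers it responds to) can be written down as simple enumerations. This is only a sketch; the coverage set in the example is hypothetical:

    from enum import Enum

    class DIME(Enum):
        """Instruments of national power (the customer's levers)."""
        DIPLOMATIC = "diplomatic"
        INFORMATION = "information"
        MILITARY = "military"
        ECONOMIC = "economic"

    class PMESII(Enum):
        """Factors for assessing the effects of pulling those levers."""
        POLITICAL = "political"
        MILITARY = "military"
        ECONOMIC = "economic"
        SOCIAL = "social"
        INFRASTRUCTURE = "infrastructure"
        INFORMATION = "information"

    # A complex issue (an uprising, a cartel) usually touches every factor;
    # a simple checklist makes explicit which ones an assessment has covered so far.
    covered = {PMESII.POLITICAL, PMESII.ECONOMIC}          # hypothetical coverage
    gaps = [f.value for f in PMESII if f not in covered]
    print("PMESII factors not yet assessed:", gaps)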

 

Modeling the Intelligence Target

 

Models are used so extensively in intelligence that analysts seldom give them much thought, even as they use them.

 

The model paradigm is a powerful tool in many disciplines.

 

“Former national intelligence officer Paul Pillar described them as “guiding images” that policymakers rely on in making decisions. We’ve discussed one guiding image—that of the PMESII concept. The second guiding image—that of a map, theory, concept, or paradigm—in this book is merged into a single entity called a model. Or, as the CIA’s Tradecraft Primer puts it succinctly:

 

“all individuals assimilate and evaluate information through the medium of “mental models…”

 

Modeling is usually thought of as being quantitative and using computers. However, all models start in the human mind. Modeling does not always require a computer, and many useful models exist only on paper. Models are used widely in fields such as operations research and systems analysis. With modeling, one can analyze, design, and operate complex systems. One can use simulation models to evaluate real-world processes that are too complex to analyze with spreadsheets or flowcharts (which are themselves models, of course) to test hypotheses at a fraction of the cost of undertaking the actual activities. Models are an efficient communication tool for showing how the target functions and stimulating creative thinking about how to deal with an opponent.

 

Models are essential when dealing with complex targets (Analysis Principle 5-1). Without a device to capture the full range of thinking and creativity that occurs in the target-centric approach to intelligence, an analyst would have to keep in mind far too many details. Furthermore, in the target-centric approach, the customer of intelligence is part of the collaborative process. Presented with a model as an organizing construct for thinking about the target, customers can contribute pieces to the model from their own knowledge—pieces that the analyst might be unaware of. The primary suppliers of information (the collectors) can do likewise.

 

The Concept of a Model

 

A model, as used in intelligence, is an organizing construct. It is a combination of facts, hypotheses, and assumptions about a target, developed in a form that is useful for analyzing the target and for customer decision making (producing actionable intelligence). The type of model used in intelligence typically comprises facts, hypotheses, and assumptions, so it’s important to distinguish them here:

 

Fact. Something that is indisputably the case.

Hypothesis. A proposition that is set forth to explain developments or observed phenomena. It can be posed as conjecture to guide research (a working hypothesis) or accepted as a highly probable conclusion from established facts.

Assumption. A thing that is accepted as true or as certain to happen, without proof.

 

These are the things that go into a model. But it is important to distinguish them when you present the model. Customers should never wonder whether they are hearing facts, hypotheses, or assumptions.
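A small sketch of that discipline: tag every element of the target model as fact, hypothesis, or assumption, and group them when presenting. The structure and the sample statements are invented for illustration:

    from dataclasses import dataclass
    from enum import Enum
    from typing import List

    class ElementKind(Enum):
        FACT = "fact"              # indisputably the case
        HYPOTHESIS = "hypothesis"  # proposed explanation, possibly a working one
        ASSUMPTION = "assumption"  # accepted as true without proof

    @dataclass
    class ModelElement:
        kind: ElementKind
        text: str

    def present(model: List[ModelElement]) -> None:
        """Group the model so the customer always sees what is fact,
        what is hypothesis, and what is assumption."""
        for kind in ElementKind:
            print(kind.value.upper())
            for e in model:
                if e.kind is kind:
                    print("  -", e.text)

    # Hypothetical content, for illustration only:
    present([
        ModelElement(ElementKind.FACT, "The group operates a trading front company"),
        ModelElement(ElementKind.HYPOTHESIS, "Its funding runs through that company"),
        ModelElement(ElementKind.ASSUMPTION, "Its leadership structure is unchanged"),
    ])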

 

A model is a replica or representation of an idea, an object, or an actual system. It often describes how a system behaves. Instead of interacting with the real system, an analyst can create a model that corresponds to the actual one in certain ways.

 

 

Physical models are a tangible representation of something. A map, a globe, a calendar, and a clock are all physical models. The first two represent the Earth or parts of it, and the latter two represent time. Physical models are always descriptive.

 

Conceptual models—inventions of the mind—are essential to the analytic process. They allow the analyst to describe things or situations in abstract terms both for estimating current situations and for predicting future ones.”

 

 

A conceptual model may be either descriptive, describing what it represents, or normative. A normative model may contain some descriptive segments, but its purpose is to describe a best, or preferable, course of action. A decision-support model—that is, a model used to choose among competing alternatives—is normative.

In intelligence analysis, the models of most interest are conceptual and descriptive rather than normative. Some common traits of these conceptual models follow.

 

Descriptive models can be deterministic or stochastic.

In a deterministic model the relationships are known and specified explicitly. A model that has any uncertainty incorporated into it is a stochastic model (meaning that probabilities are involved), even though it may have deterministic properties.

 

Descriptive models can be linear or nonlinear.

Linear models use only linear equations (for example, x = Ay + B) to describe relationships.

 

Nonlinear models use any type of mathematical function. Because nonlinear models are more difficult to work with and are not always capable of being analyzed, the usual practice is to make some compromises so that a linear model can be used.

 

Descriptive models can be static or dynamic.

A static model assumes that a specific time period is being analyzed and the state of nature is fixed for that time period. Static models ignore time-based variances. For example, one cannot use them to determine the impact of an event’s timing in relation to other events. Returning to the example of a combat model, a snapshot of the combat that shows where opposing forces are located and their directions of movement at that instant is static. Static models do not take into account the synergy of the components of a system, where the actions of separate elements can have a different effect on the system than the sum of their individual effects would indicate. Spreadsheets and most relationship models are static.

 

Dynamic modeling (also known as simulation) is a software representation of the time-based behavior of a system. Where a static model involves a single computation of an equation, a dynamic model is iterative; it constantly recomputes its equations as time changes.

 

Descriptive models can be solvable or simulated.

A solvable model is one in which there is an analytic way of finding the answer. The performance model of a radar, a missile, or a warhead is a solvable problem. But other problems require such a complicated set of equations to describe them that there is no way to solve them. Worse still, complex problems typically cannot be described in a manageable set of equations. In complex cases—such as the performance of an economy or a person—one can turn to simulation.
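The contrast between the easy and hard ends of this taxonomy can be shown in a few lines. The first function below is a deterministic, static, linear, solvable model (a single computation of x = Ay + B, with made-up constants); the second is a stochastic, dynamic model that must be simulated, recomputing its state step by step with uncertainty injected at each step. All numbers are illustrative:

    import random

    # Deterministic, static, linear, solvable: one computation of a linear equation.
    def linear_model(y: float, A: float = 2.0, B: float = 5.0) -> float:
        # x = Ay + B, the form cited in the text; A and B are made-up constants.
        return A * y + B

    print("static/linear result:", linear_model(3.0))

    # Stochastic, dynamic, simulated: recompute iteratively as time advances,
    # with uncertainty injected at each step (probabilities are involved).
    def simulate(steps: int = 10, x: float = 0.0) -> float:
        for _ in range(steps):
            x = 0.9 * x + random.gauss(1.0, 0.3)   # noisy update each time step
        return x

    random.seed(1)
    print("dynamic/stochastic result after 10 steps:", round(simulate(), 2))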

 

Using Target Models for Analysis

 

Operations

Intelligence services prefer specific sources of intelligence, shaped in part by what has worked for them in the past; by their strategic targets; and by the size of their pocketbooks. The poorer intelligence services rely heavily on open source (including the web) and HUMINT, because both are relatively inexpensive. COMINT also can be cheap, unless it is collected by satellites. The wealthier services also make use of satellite-collected imagery intelligence (IMINT) and COMINT, and other types of technical collection.

 

“China relies heavily on HUMINT, working through commercial organizations, particularly trading firms, students, and university professors far more than most other major intelligence powers do.

 

In addition to being acquainted with opponents’ collection habits, CI also needs to understand a foreign intelligence service’s analytic capabilities. Many services have analytic biases, are ethnocentric, or handle anomalies poorly. It is important to understand their intelligence communications channels and how well they share intelligence within the government. In many countries, the senior policymaker or military commander is the analyst. That provides a prime opportunity for “perception management,” especially if a narcissistic leader like Hitler, Stalin, or Saddam Hussein is in charge and doing his own analysis. Leaders and policymakers find it difficult to be objective; they are people of action, and they always have an agenda. They have lots of biases and are prone to wishful thinking.

 

Linkages

Almost all intelligence services have liaison relationships with foreign intelligence or security services. It is important to model these relationships because they can dramatically extend the capabilities of an intelligence service.

 

Summary

Two conceptual frameworks are invaluable for doing intelligence analysis. One deals with the instruments of national or organizational power and the effects of their use. The second involves the use of target models to produce analysis.

 

The intelligence customer has four instruments of national or organizational power, as discussed in chapter 2. Intelligence is concerned with how opponents will use those instruments and the effects that result when customers use them. Viewed from both the opponent’s actions and the effects perspectives, there are usually six factors to consider: political, military, economic, social, infrastructure, and information, abbreviated PMESII:

 

 

Political. The distribution of power and control at all levels of governance.

 

Military. The ability of all relevant actors (enemy, friendly, and neutral) to exercise force.

 

Economic. Behavior relating to producing, distributing, and consuming resources.

 

Social. The cultural, religious, and ethnic composition of a region and the beliefs, values, customs, and behaviors of people.

 

Infrastructure. The basic facilities, services, and installations needed for the functioning of a community or society.

 

Information. The nature, scope, characteristics, and effects of individuals, organizations, and systems that collect, process, disseminate, or act on information.”

 

 

Models in intelligence are typically conceptual and descriptive. The easiest ones to work with are deterministic, linear, static, solvable, or some combination. Unfortunately, in the intelligence business the target models tend to be stochastic, nonlinear, dynamic, and simulated.

 

From an existing knowledge base, a model of the target is developed. Next, the model is analyzed to extract information for customers or for additional collection. The “model” of complex targets will typically be a collection of associated models that can serve the purposes of intelligence customers and collectors.

 

Chapter 6

Overview of Models in Intelligence

 

One picture is worth more than ten thousand words.

Chinese proverb

 

“The process of populating the appropriate model is known as synthesis, a term borrowed from the engineering disciplines. Synthesis is defined as putting together parts or elements to form a whole—in this case, a model of the target. It is what intelligence analysts do, and their skill at it is a primary measure of their professional competence.”

 

 

Creating a Conceptual Model

 

 

The first step in creating a model is to define the system that encompasses the intelligence issues of interest, so that the resulting model answers any problem that has been defined by using the issue definition process.

 

few questions in strategic intelligence or in-depth research can be answered by using a narrowly defined target.

 

For the complex targets that are typical of in-depth research, an analyst usually will deal with a complete system, such as an air defense system that will use a new fighter aircraft.

 

In law enforcement, analysis of an organized crime syndicate involves consideration of people, funds, communications, operational practices, movement of goods, political relationships, and victims. Many intelligence problems will require consideration of related systems as well. The energy production system, for example, will give rise to intelligence questions about related companies, governments, suppliers and customers, and nongovernmental organizations (such as environmental advocacy groups). The questions that customers pose should be answerable by reference only to the target model, without the need to reach beyond it.

 

A major challenge in defining the relevant system is to use restraint. The definition must include essential subsystems or collateral systems, but nothing more. Part of an analyst’s skill lies in being able to include in a definition the relevant components, and only the relevant components, that will address the issue.

 

The systems model can therefore be structural, functional, process oriented, or any combination thereof. Structural models include actors, objects, and the organization of their relationships to each other. Process models focus on interactions and their dynamics. Functional models concentrate on the results achieved, for example, a model that simulates the financial consequences of a proposed trade agreement.

 

After an analyst has defined the relevant system, the next step is to select the generic models, or model templates, to be used. These model templates then will be made specific, or “populated,” using evidence (discussed in chapter 7). Several types of generic models are used in intelligence. The three most basic types are textual, mathematical, and visual.

 

Textual Models

 

Almost any model can be described using written text. The CIA’s World Factbook is an example of a set of textual models—actually a series of models (political, military, economic, social, infrastructure, and information)—of a country. Some common examples of textual models that are used in intelligence analysis are lists, comparative models, profiles, and matrix models.

 

 

 

 

Lists

 

Lists and outlines are the simplest examples of a model.

 

The list continues to be used by analysts today for much the same purpose—to reach a yes-or-no decision.

 

Comparative Models

 

Comparative techniques, like lists, are a simple but useful form of modeling that typically does not require a computer simulation. Comparative techniques are used in government, mostly for weapons systems and technology analyses. Both governments and businesses use comparative models to evaluate a competitor’s operational practices, products, and technologies. This is called benchmarking.

 

A powerful tool for analyzing a competitor’s developments is to compare them with your own organization’s developments. Your own systems or technologies can provide a benchmark for comparison.

 

Comparative models have to be culture specific to help avoid mirror imaging.

 

A keiretsu is a network of businesses, usually in related industries, that own stakes in one another and have board members in common as a means of mutual security. A network of essentially captive (because they are dependent on the keiretsu) suppliers provides the raw material for the keiretsu manufacturers, and the keiretsu trading companies and banks provide marketing services. Keiretsu have their roots in prewar Japan.

 

Profiles

 

Profiles are models of individuals—in national intelligence, of leaders of foreign governments; in business intelligence, of top executives in a competing organization; in law enforcement, of mob leaders and serial criminals.

 

 

Profiles depend heavily on understanding the pattern of mental and behavioral traits that are shared by adult members of a society—referred to as the society’s modal personality. Several modal personality types may exist in a society, and their common elements are often referred to as national character.

 

Defining the modal personality type is beyond the capabilities of the journeyman intelligence analyst, and one must turn to experts.

 

 

The modal personality model usually includes at least the following elements:

 

Concept of self—the conscious ideas of what a person thinks he or she is, along with the frequently unconscious motives and defenses against ego-threatening experiences such as withdrawal of love, public shaming, guilt, or isolation.

 

Relation to authority—how an individual adapts to authority figures

Modes of impulse control and expressing emotion

Processes of forming and manipulating ideas”

 

 

Three model types are often used for studying modal personalities and creating behavioral profiles:

 

Cultural pattern models are relatively straightforward to analyze and are useful in assessing group behavior.

 

 

Child-rearing systems can be studied to allow the projection of adult personality patterns and behavior. They may allow more accurate assessments of an individual than a simple study of cultural patterns, but they cannot account for the wide range of possible pattern variations occurring after childhood.

 

Individual assessments are probably the most accurate starting points for creating a behavioral model, but they depend on detailed data about the specific individual. Such data are usually gathered from testing techniques; the Rorschach (or “Inkblot”) test—a projective personality assessment based on the subject’s reactions to a series of ten inkblot pictures—is an example.

 

Interaction Matrices

A textual variant of the spreadsheet (discussed later) is the interaction matrix, a valuable analytic tool for certain types of synthesis. It appears in various disciplines and under different names and is also called a parametric matrix or a traceability matrix.

 

Mathematical Models

The most common modeling problem involves solving an equation. Most problems in engineering or technical intelligence come down to a single equation, or a small system of equations, relating the quantities of interest.

 

Most analysis involves fixing all of the variables and constants in such an equation or system of equations, except for two variables. The equation is then solved repetitively to obtain a graphical picture of one variable as a function of another. A number of software packages perform this type of solution very efficiently. For example, as a part of radar performance analysis, the radar range equation is solved for signal-to-noise ratio as a function of range, and a two-dimensional curve is plotted. Then, perhaps, signal-to-noise ratio is fixed and a new curve plotted for radar cross-section as a function of range.
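As an illustration of that repetitive-solution pattern, the sketch below sweeps range through the standard monostatic radar range equation while every other parameter is held fixed, producing the signal-to-noise-versus-range curve described above. The parameter values are invented and are not meant to represent any real radar:

    import math

    # Standard monostatic radar range equation, solved repeatedly for SNR vs. range.
    # All parameter values below are invented for illustration.
    K_BOLTZMANN = 1.38e-23        # J/K

    def snr_db(range_m: float,
               pt: float = 1e6,           # peak transmit power, W
               gain: float = 1e3,         # antenna gain (dimensionless)
               wavelength: float = 0.03,  # m
               rcs: float = 1.0,          # target radar cross-section, m^2
               temp: float = 290.0,       # system noise temperature, K
               bandwidth: float = 1e6,    # receiver bandwidth, Hz
               losses: float = 10.0) -> float:
        num = pt * gain**2 * wavelength**2 * rcs
        den = (4 * math.pi)**3 * range_m**4 * K_BOLTZMANN * temp * bandwidth * losses
        return 10 * math.log10(num / den)

    # Fix every variable except range, then sweep range to get the curve
    # (the resulting ordered pairs could feed a plot or a follow-on equation).
    for r_km in (10, 50, 100, 200):
        print(f"range {r_km:>3} km -> SNR {snr_db(r_km * 1000):6.1f} dB")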

 

Often the requirement is to solve an equation, get a set of ordered pairs, and plug those into another equation to get a graphical picture rather than solving simultaneous equations.

 

Spreadsheets

 

The computer is a powerful tool for handling the equation-solution type of problem. Spreadsheet software has made it easy to create equation-based models. The rich set of mathematical functions that can be incorporated in it, and its flexibility, make the spreadsheet a widely used model in intelligence.

 

Simulation Models

 

A simulation model is a mathematical model of a real object, a system, or an actual situation. It is useful for estimating the performance of its real-world analogue under different conditions. We often wish to determine how something will behave without actually testing it in real life. So simulation models are useful for helping decision makers choose among alternative actions by determining the likely outcomes of those actions.

 

In intelligence, simulation models also are used to assess the performance of opposing weapons systems, the consequences of trade embargoes, and the success of insurgencies.

 

Simulation models can be challenging to build. The main challenge usually is validation: determining that the model accurately represents what it is supposed to represent, under different input conditions.

 

Visual Models

 

Models can be described in written text, as noted earlier. But the models that have the most impact for both analysts and customers in facilitating understanding take a visual form.

 

Visualization involves transforming raw intelligence into graphical, pictorial, or multimedia forms so that our brains can process and understand large amounts of data more readily than is possible from simply reading text. Visualization lets us deal with massive quantities of data and identify meaningful patterns and structures that otherwise would be incomprehensible.

 

 

Charts and Graphs

 

Graphical displays, often in the form of curves, are a simple type of model that can be synthesized both for analysis and for presenting the results of analysis.

 

 

Pattern Models

 

Many types of models fall under the broad category of pattern models. Pattern recognition is a critical element of all intelligence.

 

Most governmental and industrial organizations (and intelligence services) also prefer to stick with techniques that have been successful in the past. An important aspect of intelligence synthesis, therefore, is recognizing patterns of activities and then determining in the analysis phase whether (a) the patterns represent a departure from what is known or expected and (b) the changes in patterns are significant enough to merit attention. The computer is a valuable ally here; it can display trends and allow the analyst to identify them. This capability is particularly useful when trends would be difficult or impossible to find by sorting through and mentally processing a large volume of data. Pattern analysis is one way to effectively handle complex issues.

 

One type of pattern model used by intelligence analysts relies on statistics. In fact, a great deal of pattern modeling is statistical. Intelligence deals with a wide variety of statistical modeling techniques. Some of the most useful techniques are easy to learn and require no previous statistical training.

 

Histograms, which are bar charts that show a frequency distribution, are one example of a simple statistical pattern.
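A minimal sketch of that idea, using only the standard library and an invented set of events; the text-mode bars stand in for the histogram:

    from collections import Counter

    # Hypothetical event data: incidents observed per weekday.
    events = ["Mon", "Tue", "Tue", "Wed", "Fri", "Fri", "Fri", "Sat", "Sat"]

    counts = Counter(events)   # frequency distribution underlying the histogram

    # Text-mode bar chart: the pattern (a spike late in the week) is the point.
    for day in ("Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"):
        print(f"{day}: {'#' * counts.get(day, 0)}")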

 

Advanced Target Models

 

The example models introduced so far are frequently used in intelligence. They’re fairly straightforward and relatively easy to create. Intelligence also makes use of four model types that are more difficult to create and to analyze, but that give more in-depth analysis. We’ll briefly introduce them here.

 

Systems Models

 

Systems models are well known in intelligence for their use in assessing the performance of weapons systems.

 

 

Systems models have been created for all of the following examples:

 

A republic, a dictatorship, or an oligarchy can be modeled as a political system.

 

Air defense systems, carrier strike groups, special operations teams, and ballistic missile systems all are modeled as military systems.

 

Economic systems models describe the functioning of capitalist or socialist economies, international trade, and informal economies.

 

Social systems include welfare or antipoverty programs, health care systems, religious networks, urban gangs, and tribal groups.

 

Infrastructure systems could include electrical power, automobile manufacturing, railroads, and seaports.

 

A news gathering, production, and distribution system is an example of an information system.

Creating a systems model requires an understanding of the system, developed by examining the linkages and interactions between the elements that compose the system as a whole.

 

 

A system has structure. It is comprised of parts that are related (directly or indirectly). It has a defined boundary physically, temporally, and spatially, though it can overlap with or be a part of a larger system.

 

A system has a function. It receives inputs from, and sends outputs into, an outside environment. It is autonomous in fulfilling its function. A main battle tank standing alone is not a system. A tank with a crew, fuel, ammunition, and a communications subsystem is a system.

 

A system has a process that performs its function by transforming inputs into outputs.

 

 

Relationship Models

 

Relationships among entities—people, places, things, and events—are perhaps the most common subject of intelligence modeling. There are four levels of such relationship models, each using increasingly sophisticated analytic approaches: hierarchy, matrix, link, and network models. The four are closely related, representing the same fundamental idea at increasing levels of complexity.

 

Relationship models require a considerable amount of time to create, and maintaining the model (known to those who do it as “feeding the beast”) demands much effort. But such models are highly effective in analyzing complex problems, and the associated graphical displays are powerful in persuading customers to accept the results.

 

Hierarchy Models

 

The hierarchy model is a simple tree structure. Organizational modeling naturally lends itself to the creation of a hierarchy, as anyone who ever drew an organizational chart is aware. A natural extension of such a hierarchy is to use a weighting scheme to indicate the importance of individuals or suborganizations in it.

 

Matrix Models

 

The interaction matrix was introduced earlier. The relationship matrix model is different. It portrays the existence of an association, known or suspected, between individuals. It usually portrays direct connections such as face-to-face meetings and telephone conversations. Analysts can use association matrices to identify those personalities and associations needing a more in-depth analysis to determine the degree of relationships, contacts, or knowledge between individuals.
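A minimal sketch of such an association matrix, with invented names and associations; entries distinguish known from suspected direct contacts:

    # Association (relationship) matrix sketch: rows and columns are individuals,
    # entries mark a known ("K") or suspected ("S") direct association.
    # Names and associations are invented for illustration.
    people = ["Alvarez", "Boyd", "Chen", "Dietrich"]
    matrix = {p: {q: "" for q in people} for p in people}

    def associate(a: str, b: str, kind: str) -> None:
        matrix[a][b] = kind
        matrix[b][a] = kind          # direct associations are symmetric

    associate("Alvarez", "Boyd", "K")     # e.g., observed face-to-face meeting
    associate("Boyd", "Chen", "S")        # e.g., suspected phone contact

    # Print the matrix; dense rows flag personalities needing deeper analysis.
    print("         " + "  ".join(f"{q:>8}" for q in people))
    for p in people:
        print(f"{p:>8} " + "  ".join(f"{matrix[p][q]:>8}" for q in people))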

 

Link Models

 

A link model allows the view of relationships in more complex tree structures. Though it physically resembles a hierarchy model (both are trees), a link model differs in that it shows different kinds of relationships but does not indicate subordination.

 

Network Models

 

A network model can be thought of as a flexible interrelationship of multiple tree structures at multiple levels. The key limitation of the matrix model discussed earlier is that although it can deal with the interaction of two hierarchies at a given level, because it is a two-dimensional representation, it cannot deal with interactions at multiple levels or with more than two hierarchies. Network synthesis is an extension of the link or matrix synthesis concept that can handle such complex problems. There are several types of network models. Two are widely used in intelligence:

 

Social network models show patterns of human relationships. The nodes are people, and the links show that some type of relationship exists.

 

Target network models are most useful in intelligence. The nodes can be any type of entity—people, places, things, concepts—and the links show that some type of relationship exists between entities.
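A minimal sketch of a target network along these lines, with typed nodes and labeled links; all entities and relationships are invented for illustration:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    # Target network sketch: nodes can be any type of entity (person, place,
    # thing, concept); links record that some relationship exists between two.
    @dataclass
    class TargetNetwork:
        nodes: dict = field(default_factory=dict)          # name -> entity type
        links: List[Tuple[str, str, str]] = field(default_factory=list)

        def add_node(self, name: str, kind: str) -> None:
            self.nodes[name] = kind

        def add_link(self, a: str, b: str, relation: str) -> None:
            self.links.append((a, b, relation))

        def neighbors(self, name: str) -> List[str]:
            return [b if a == name else a for a, b, _ in self.links if name in (a, b)]

    net = TargetNetwork()
    net.add_node("Front Company", "organization")
    net.add_node("Warehouse 12", "place")
    net.add_node("Courier A", "person")
    net.add_link("Courier A", "Front Company", "employed by")
    net.add_link("Front Company", "Warehouse 12", "leases")

    print(net.neighbors("Front Company"))   # -> ['Courier A', 'Warehouse 12']

Representing links as labeled pairs rather than a fixed two-dimensional matrix is what lets the model mix entity types and operate across multiple levels.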

 

Spatial and Temporal Models

 

Another way to examine data and to search for patterns is to use spatial modeling—depicting locations of objects in space. Spatial modeling can be used effectively on a small scale. For example, within a building, computer-aided design/computer-aided modeling, known as CAD/CAM, can be a powerful tool for intelligence synthesis. Layouts of buildings and floor plans are valuable in physical security analysis and in assessing production capacity.


 

Spatial modeling on larger scales is usually called geospatial modeling.

 

Patterns of activity over time are important for showing trends. Pattern changes are often used to compare how things are going now with how they went last year (or last decade). Estimative analysis often relies on chronological models.

 

Scenarios

Arguably the most important model for estimative intelligence purposes is the scenario, a very sophisticated model.

 

Alternative scenarios are used to model future situations. These scenarios increasingly are produced as virtual reality models because they are powerful ways to convey intelligence and are very persuasive.

Target Model Combinations

Almost all target models are actually combinations of many models. In fact, most of the models described in the previous sections can be merged into combination models. One simple example is a relationship-time display.

This is a dynamic model where link or network nodes and links (relationships) change, appear, and disappear over time.

We also typically want to have several distinct but interrelated models of the target in order to be able to answer different customer questions.

Submodels

One type of component model is a submodel, a more detailed breakout of the top-level model. It is typical, for complex targets, to have many such submodels of a target that provide different levels of detail.

Participants in the target-centric process then can reach into the model set to pull out the information they need. The collectors of information can drill down into more detail to refine collection targeting and to fill specific gaps.

The intelligence customer can drill down to answer questions, gain confidence in the analyst’s picture of the target, and understand the limits of the analyst’s work. The target model is a powerful collaborative tool.

Collateral Models

In contrast to the submodel, a collateral model may show particular aspects of the overall target model, but it is not simply a detailed breakout of a top-level model. A collateral model typically presents a different way of thinking about the target for a specific intelligence purpose.

The collateral models in Figures 6-7 to 6-9 are examples of the three general types—structural, functional, and process—used in systems analysis. Figures 6-7 and 6-8 are structural models. Figure 6-9 is both a process model and a functional model. In analyzing complex intelligence targets, all three types are likely to be used.

These models, taken together, allow an analyst to answer a wide range of customer questions.

More complex intelligence targets can require a combination of several model types. They may have system characteristics, take a network form, and have spatial and temporal characteristics.

Alternative and Competitive Target Models

Alternative and competitive models are somewhat different things, though they are frequently confused with each other.

Alternative Models

Alternative models are an essential part of the synthesis process. It is important to keep more than one possible target model in mind, especially as conflicting or contradictory intelligence information is collected.

 

“The disciplined use of alternative hypotheses could have helped counter the natural cognitive tendency to force new information into existing paradigms.” As law professor David Schum has noted, “the generation of new ideas in fact investigation usually rests upon arranging or juxtaposing our thoughts and evidence in different ways.” To do that we need multiple alternative models.

And, the more inclusive you can be when defining alternative models, the better…

In studies listing the analytic pitfalls that hampered past assessments, one of the most prevalent is failure to consider alternative scenarios, hypotheses, or models.
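
A minimal sketch of keeping alternative models explicitly in view (the hypotheses, evidence items, and ratings are hypothetical): each new piece of evidence is scored against every candidate model rather than only the favored one, so the alternatives stay alive as collection continues:

    # Score each item of evidence against every alternative model, not just one.
    # Hypotheses, evidence items, and ratings are hypothetical.
    hypotheses = ["Plant produces pharmaceuticals", "Plant produces CW precursors"]

    # +1 = consistent with the hypothesis, -1 = inconsistent, 0 = neutral.
    evidence = {
        "Imagery: new ventilation stacks":       [0, +1],
        "HUMINT: civilian drug shipments seen":  [+1, -1],
        "OSINT: import of dual-use glassware":   [+1, +1],
    }

    totals = [0] * len(hypotheses)
    for ratings in evidence.values():
        for i, rating in enumerate(ratings):
            totals[i] += rating

    for hypothesis, total in zip(hypotheses, totals):
        print(f"{hypothesis}: net consistency {total:+d}")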

Analysts have to guard against allowing three things to interfere with their need to develop alternative models:

  • Ego. Former director of national intelligence Mike McConnell once observed that analysts inherently dislike alternative, dissenting, or competitive views. But, the opposite becomes true of analysts who operate within the target-centric approach—the focus is not on each other anymore, but instead on contributing to a shared target model.
  • Time. Analysts are usually facing tight deadlines. They must resist the temptation to go with the model that best fits the evidence without considering alternatives. Otherwise, the result is premature closure, which can prove costly in the end.
  • The customer. Customers can view a change in judgment as evidence that the original judgment was wrong, not that new evidence forced the change. Furthermore, when presented with two or more target models, customers will tend to pick the one that they like best, which may or may not be the most likely model. Analysts know this.

 

It is the analyst’s responsibility to establish a tone of setting egos aside and of conveying to all participants in the process, including the customer, that time spent up front developing alternative models is time saved at the end if it keeps them from committing to the wrong model in haste.

Competitive Models

It is well established in intelligence that, if you can afford the resources, you should have independent groups providing competing analyses. This is because we’re dealing with uncertainty. Different analysts, given the same set of facts, are likely to come to different conclusions.

It is important to be inclusive when defining alternative or competitive models.

Summary

Creating a target model starts with defining the relevant system. The system model can be a structural, functional, or process model, or any combination. The next step is to select the generic models or model templates.

Lists and curves are the simplest form of model. In intelligence, comparative models or benchmarks are often used; almost any type of model can be made comparative, typically by creating models of one’s own system side by side with the target system model.

Pattern models are widely used in the intelligence business. Chronological models allow intelligence customers to examine the timing of related events and plan a way to change the course of these events. Geospatial models are popular in military intelligence for weapons targeting and to assess the location and movement of opposing forces.

Relationship models are used to analyze the relationships among elements of the target—organizations, people, places, and physical objects—over time. Four general types of relationship models are commonly used: hierarchy, matrix, link, and network models. The most powerful of these, network models, are increasingly used to describe complex intelligence targets.

 

Competitive and alternative target models are an essential part of the process. Properly used, they help the analyst deal with denial and deception and avoid being trapped by analytic biases. But they take time to create, analysts find it difficult to change or challenge existing judgments, and alternative models give policymakers the option to select the conclusion they prefer—which may or may not be the best choice.

 

 

 

 

 

 

 

Chapter 7

 

Creating the Model

Believe nothing you hear, and only one half that you see.  – Edgar Allan Poe

This chapter describes the steps that analysts go through in populating the target model. Here, we focus on the synthesis part of the target-centric approach, often called collation in the intelligence business.

We discuss the importance of existing pieces of intelligence, both finished and raw, and how best to think about sources of new raw data.

We talk about how credentials of evidence must be established, introduce widely used informal methods of combining evidence, and touch on structured argumentation as a formal methodology for combining evidence.

Analysts generally go through the actions described here in service to collation. They may not think about them as separate steps and in any event aren’t likely to do them in the order presented. They nevertheless almost always do the following:

 

  • Review existing finished intelligence about the target and examine existing raw intelligence
  • Acquire new raw intelligence
  • Evaluate the new raw intelligence
  • Combine the intelligence from all sources into the target model

 

Existing Intelligence

Existing finished intelligence reports typically define the current target model. So information gathering to create or revise a model begins with the existing knowledge base. Before starting an intelligence collection effort, analysts should ensure that they are aware of what has already been found on a subject.

Finished studies or reports on file at an analyst’s organization are the best place to start any research effort. There are few truly new issues.

The databases of intelligence organizations include finished intelligence reports as well as many specialized data files on specific topics. Large commercial firms typically have comparable facilities in-house, or they depend on commercially available databases.

A literature search should be the first step an analyst takes on a new project. The purpose is to both define the current state of knowledge—that is, to understand the existing model(s) of the intelligence target—and to identify the major controversies and disagreements surrounding the target model.

The existing intelligence should not be accepted automatically as fact. Few experienced analysts would blithely accept the results of earlier studies on a topic, though they would know exactly what the studies found. The danger is that, in conducting the search, an analyst naturally tends to adopt a preexisting target model.

In this case, premature closure, or a bias toward the status quo, leads the analyst to keep the existing model even when evidence indicates that a different model is more appropriate.

To counter this tendency, it’s important to do a key assumptions check on the existing model(s).

  • Do the existing analytic conclusions appear to be valid?
  • What are the premises on which these conclusions rest, and do they appear to be valid as well?
  • Has the underlying situation changed so that the premises may no longer apply?

Once the finished reports are in hand, the analyst should review all of the relevant raw intelligence data that already exist. Few things can ruin an analyst’s career faster than sending collectors after information that is already in the organization’s files.

Sources of New Raw Intelligence

Raw intelligence comes from a number of sources, but they typically are categorized as part of the five major “INTs” shown in this section.

 

 

 

The definitions of each INT follow:

  • Open source (OSINT). Information of potential intelligence value that is available to the general public
  • Human intelligence (HUMINT). Intelligence derived from information collected and provided by human sources
  • Measurements and signatures intelligence (MASINT). Scientific and technical intelligence obtained by quantitative and qualitative analysis of data (metric, angle, spatial, wavelength, time dependence, modulation, plasma, and hydromagnetic) derived from specific technical sensors
  • Signals intelligence (SIGINT). Intelligence comprising either individually or in combination all communications intelligence, electronics intelligence, and foreign instrumentation signals intelligence
  • Imagery intelligence (IMINT). Intelligence derived from the exploitation of collection by visual photography, infrared sensors, lasers, electro-optics, and radar sensors such as synthetic aperture radar wherein images of objects are reproduced optically or electronically on film, electronic display devices, or other media

 

The taxonomy approach in this book is quite different. It strives for a breakout that focuses on the nature of the material collected and processed, rather than on the collection means.

Traditional COMINT, HUMINT, and open-source collection are concerned mainly with literal information, that is, information in a form that humans use for communication. The basic product and the general methods for collecting and analyzing literal information are usually well understood by intelligence analysts and the customers of intelligence. It requires no special exploitation after the processing step (which includes translation) to be understood. It literally speaks for itself.

Nonliteral information, in contrast, usually requires special processing and exploitation in order for analysts to make use of it.

 

The logic of this division has been noted by other writers in the intelligence business. British author Michael Herman observed that there are two basic types of collection: One produces evidence in the form of observations and measurements of things (nonliteral), and one produces access to human thought processes.

 

The automation of data handling has been a major boon to intelligence analysts. Information collected from around the globe arrives at the analyst’s desk through the Internet or in electronic message form, ready for review and often presorted on the basis of keyword searches. A downside of this automation, however, is the tendency to treat all information in the same way. In some cases the analyst does not even know what collection source provided the information; after all, everything looks alike on the display screen. However, information must be treated differently depending on its source. And, no matter the source, all information must be evaluated before it is synthesized into the model—the subject to which we now turn.

Evaluating Evidence

The fundamental problem in weighing evidence is determining its credibility—its completeness and soundness.

Checking the quality of information used in intelligence analysis is an ongoing, continuous process. Having multiple sources on an issue is not a substitute for having good information that has been thoroughly examined. Analysts should perform periodic checks of the information base for their analytic judgments.

Evaluating the Source

  • Is the source competent (knowledgeable about the information being given)?
  • Did the source have the access needed to get the information?
  • Does the source have a vested interest or bias?

Competence

The Anglo-American judicial system deals effectively with competence: It allows people to describe what they observed with their senses because, absent disability, we are presumed competent to sense things. The judicial system does not allow the average person to interpret what he or she sensed unless the person is qualified as an expert in such interpretation.

Access

The issue of source access typically does not arise because it is assumed that the source had access. When there is reason to be suspicious about the source, however, check whether the source might not have had the claimed access.

In the legal world, checks on source access come up regularly in witness cross-examinations. One of the most famous examples was the “Almanac Trial” of 1858, where Abraham Lincoln conducted the cross-examination. It was the dying wish of an old friend that Lincoln represent his friend’s son, Duff Armstrong, who was on trial for murder. Lincoln gave his client a tough, artful, and ultimately successful defense; in the trial’s highlight, Lincoln consulted an almanac to discredit a prosecution witness who claimed that he saw the murder clearly because the moon was high in the sky. The almanac showed that the moon was lower on the horizon, and the witness’s access—that is, his ability to see the murder—was called into question.

Vested Interest or Bias

In HUMINT, analysts occasionally encounter the “professional source” who sells information to as many bidders as possible and has an incentive to make the information as interesting as possible. Even the densest sources quickly realize that more interesting information gets them more money.

An intelligence organization faces a problem in using its own parent organization’s (or country’s) test and evaluation results: Many have been contaminated. Some of the test results are fabricated; some contain distortions or omit key points. An honestly conducted, objective test may be a rarity. Several reasons for this problem exist. Tests are sometimes conducted to prove or disprove a preconceived notion and thus unconsciously are slanted. Some results are fabricated because they would show the vulnerability or the ineffectiveness of a system and because procurement decisions often depend on the test outcomes.

Although the majority of contaminated cases probably are never discovered, history provides many examples of this issue.

In examining any test or evaluation results, begin by asking two questions:

  • Did the testing organization have a major stake in the outcome (such as the threat that a program would be canceled due to negative test results or the possibility that it would profit from positive results)?
  • Did the reported outcome support the organization’s position or interests?

If the answer to both questions is yes, be wary of accepting the validity of the test. In the pharmaceutical testing industry, for example, tests have been fraudulently conducted or the results skewed to support the regulatory approval of the pharmaceutical.

A very different type of bias can occur when collection is focused on a particular issue. This bias comes from the fact that, when you look for something in the intelligence business, you may find what you are looking for, whether or not it’s there. In looking at suspected Iraqi chemical facilities prior to 2003, analysts concluded from imagery reporting that the level of activity had increased at the facilities. But the appearance of an increase in activity may simply have been a result of an increase in imagery collection.

David Schum and Jon Morris have published a detailed treatise on human sources of intelligence analysis. They pose a set of twenty-five questions divided into four categories: source competence, veracity, objectivity, and observational sensitivity. Their questions cover in more explicit detail the three questions posed in this section about competence, access, and vested interest.

Evaluating the Communications Channel

A second basic rule of weighing evidence is to look at the communications channel through which the evidence arrives.

The accuracy of a message through any communications system decreases with the length of the link or the number of intermediate nodes.

Large and complex systems tend to have more entropy; the result often shows up as the “poor communication” problems familiar in large organizations.
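
A back-of-the-envelope sketch of that relationship, using an assumed per-node accuracy of 0.9 purely for illustration:

    # Illustrative only: message fidelity versus number of intermediate nodes,
    # assuming each node independently preserves the content with probability 0.9.
    per_node_accuracy = 0.9
    for nodes in range(0, 6):
        fidelity = per_node_accuracy ** nodes
        print(f"{nodes} intermediate nodes: ~{fidelity:.0%} of content intact")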

In the business intelligence world, analysts recognize the importance of the communications channel by using the differentiating terms primary sources for firsthand information, acquired through discussions or other interaction directly with a human source, and secondary sources for information learned through an intermediary, a publication, or online. This division does not consider the many gradations of reliability, and national intelligence organizations commonly do not use the primary/secondary source division. Some national intelligence collection organizations use the term collateral to refer to intelligence gained from other collectors, but it does not have the same meaning as the terms primary and secondary as used in business intelligence.

It’s not unheard of (though fortunately not common) for the raw intelligence to be misinterpreted or misanalyzed as it passes through the chain. Organizational or personal biases can shape the interpretation and analysis, especially of literal intelligence. It’s also possible for such biases to shape the analysis of nonliteral intelligence, but that is a more difficult product for all-source analysts to challenge, as noted earlier.

Entropy has another effect in intelligence. An intelligence assertion that “X is a possibility” very often, over time and through diverse communications channels, can become “X may be true,” then “X probably is the case,” and eventually “X is a fact,” without a shred of new evidence to support the assertion. In intelligence, we refer to this as the “creeping validity” problem.

 

 

Evaluating the Credentials of Evidence

The major credentials of evidence, as noted earlier, are credibility, reliability, and inferential force. Credibility refers to the extent to which we can believe something. Reliability means consistency or replicability. Inferential force means that the evidence carries weight, or has value, in supporting a conclusion.
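
A minimal sketch of how these credentials might be recorded alongside each item of evidence (the field names, numeric scales, and threshold are illustrative assumptions, not a standard schema):

    # Record the credentials of each item of evidence; values are illustrative.
    from dataclasses import dataclass

    @dataclass
    class EvidenceItem:
        summary: str
        credibility: float        # 0-1: extent to which we can believe it
        reliability: float        # 0-1: consistency / replicability
        inferential_force: float  # 0-1: weight it carries toward a conclusion

    items = [
        EvidenceItem("Intercept reports shipment delayed", 0.8, 0.7, 0.6),
        EvidenceItem("Walk-in source claims new facility", 0.3, 0.2, 0.9),
    ]

    # Flag items whose credentials are too weak to rest a judgment on alone
    # (the 0.5 threshold is arbitrary, for illustration).
    for item in items:
        weak = min(item.credibility, item.reliability) < 0.5
        print(f"{item.summary}: {'needs corroboration' if weak else 'usable'}")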

 

 

U.S. government intelligence organizations have established a set of definitions to distinguish levels of credibility of intelligence:

  • Fact. Verified information, something known to exist or to have happened.
  • Direct information. The content of reports, research, and reflection on an intelligence issue that helps to evaluate the likelihood that something is factual and thereby reduces uncertainty. This is information that can be considered factual because of the nature of the source (imagery, signal intercepts, and similar observations).
  • Indirect information. Information that may or may not be factual because of some doubt about the source’s reliability, the source’s lack of direct access, or the complex (non-concrete) character of the contents (hearsay from clandestine sources, foreign government reports, or local media accounts).

In weighing evidence, the usual approach is to ask three questions that are embedded in the oath that witnesses take before giving testimony in U.S. courts:

  • Is it true?
  • Is it the whole truth?
  • Is it nothing but the truth? (Is it relevant or significant?)

 

Is It True?

Is the evidence factual or opinion (someone else’s analysis)? If it is opinion, question its validity unless the source quotes evidence to support it.

How does it fit with other evidence? The relating of evidence—how it fits in—is best done in the synthesis phase. The data from different collection sources are most valuable when used together.

The synergistic effect of combining data from many sources both strengthens the conclusions and increases the analyst’s confidence in them.

 

 

 

  • HUMINT and OSINT are often melded together to give a more comprehensive picture of people, programs, products, facilities, and research specialties. This is excellent background information to interpret data derived from COMINT and IMINT.
  • Data on environmental conditions during weapons tests, acquired through specialized technical collection, can be used with ELINT and COMINT data obtained during the same test event to evaluate the capabilities of the opponent’s sensor systems.
  • Identification of research institutes and their key scientists and researchers can be initially made through HUMINT, COMINT, or OSINT. Once the organization or individual has been identified by one intelligence collector, the other ones can often provide extensive additional information.
  • Successful analysis of COMINT data may require correlating raw COMINT data with external information such as ELINT and IMINT, or with knowledge of operational or technical practices.

Is It the Whole Truth?

When asking this question, it is time to do source analysis.

An incomplete picture can mislead as much as an outright lie.

 

 

 

Is It Nothing but the Truth?

It is worthwhile at this point to distinguish between data and evidence. Data become evidence only when the data are relevant to the problem or issue at hand. The simple test of relevance is whether it affects the likelihood of a hypothesis about the target.

Does it help answer a question that has been asked?

Or does it help answer a question that should be asked?

The preliminary or initial guidance from customers seldom tells what they really need to know—an important reason to keep them in the loop through the target-centric process.

Doctors encounter difficulties when they must deal with a patient who has two pathologies simultaneously. Some of the symptoms are relevant to one pathology, some to the other. If the doctor tries to fit all of the symptoms into one diagnosis, he or she is apt to make the wrong call. This is a severe enough problem for doctors, who must deal with relatively few symptoms. It is a much worse problem for intelligence analysts, who typically deal with a large volume of data, most of which is irrelevant.

Pitfalls in Evaluating Evidence

Vividness Weighting

In general, the channel for communication of intelligence should be as short as possible; but when could a short channel become a problem? If the channel is too short, the result is vividness weighting—the phenomenon that evidence that is experienced directly is strongest (“seeing is believing”). Customers place the most weight on evidence that they collect themselves—a dangerous pitfall that senior executives fall into repeatedly and that makes them vulnerable to deception.

Michael Herman tells how Churchill, reading Field Marshal Erwin Rommel’s decrypted cables during World War II, concluded that the Germans were desperately short of supplies in North Africa. Basing his interpretation on this raw COMINT traffic, Churchill pressed his generals to take the offensive against Rommel. Churchill did not realize what his own intelligence analysts could have readily told him: Rommel consistently exaggerated his shortages in order to bolster his demands for supplies and reinforcements.

Statistics are the least persuasive form of evidence; abstract (general) text is next; concrete (specific, focused, exemplary) text is a more persuasive form still; and visual evidence, such as imagery or video, is the most persuasive.

Weighing Based on the Source

One of the most difficult traps for an analyst to avoid is that of weighing evidence based on its source.

Favoring the Most Recent Evidence

Analysts often give the most recently acquired evidence the most weight.

The freshest intelligence—crisp, clear, and the focus of the analyst’s attention—often gets more weight than the fuzzy and half-remembered (but possibly more important) information that has had to travel down the long lines of time. The analyst has to remember this tendency and compensate for it. It sometimes helps to go back to the original (older) intelligence and reread it to bring it more freshly to mind.

Favoring or Disfavoring the Unknown

It is hard to decide how much weight to give to answers when little or no information is available for or against each one.

Trusting Hearsay

The chief problem with much of HUMINT (not including documents) is that it is hearsay evidence; and as noted earlier, the judiciary long ago learned to distrust hearsay for good reasons, including the biases of the source and the collector. Sources may deliberately distort or misinform because they want to influence policy or increase their value to the collector.

Finally, and most important, people can be misinformed or lie. COMINT can only report what people say, not the truth about what they say. So intelligence analysts have to use hearsay, but they must also weigh it accordingly.

Unquestioning Reliance on Expert Opinions

Expert opinion is often used as a tool for analyzing data and making estimates. Any intelligence community must often rely on its nation’s leading scientists, economists, and political and social scientists for insights into foreign developments.

Outside experts often have issues with objectivity. With experts, an analyst gets not only their expertise, but also their biases; there are those experts who have axes to grind or egos that convince them there is only one right way to do things (their way).

British counterintelligence officer Peter Wright once noted that “on the big issues, the experts are very rarely right.”

Analysts should treat expert opinion as HUMINT and be wary when the expert makes extremely positive comments (“that foreign development is a stroke of genius!”) or extremely negative ones (“it can’t be done”).

Analysis Principle 7-3

Many experts, particularly scientists, are not mentally prepared to look for deception, as intelligence officers should be. It is simply not part of the expert’s training. A second problem, as noted earlier, is that experts often are quite able to deceive themselves without any help from opponents.

Varying the way expert opinion is used is one way to attempt to head off the problems cited here. Using a panel of experts to make analytic judgments is a common method of trying to reach conclusions or to sort through a complex array of interdisciplinary data.

Such panels have had mixed results. One former CIA office director observed that “advisory panels of eminent scientists are usually useless. The members are seldom willing to commit the time to studying the data to be of much help.”

The quality of the conclusions reached by such panels depends on several variables, including the panel’s

  • Expertise
  • Motivation to produce a quality product
  • Understanding of the problem area to be addressed
  • Effectiveness in working as a group

A major advantage of the target-centric approach is that it formalizes the process of obtaining independent opinions.

Both single-source and all-source analysts have to guard against falling into the trap of reaching conclusions too early.

Premature closure also has been described as “affirming conclusions,” based on the observation that people are inclined to verify or affirm their existing beliefs rather than modify or discredit those beliefs.

The primary danger of premature closure is not that one might make a bad assessment because the evidence is incomplete. Rather, the danger is that when a situation is changing quickly or when a major, unprecedented event occurs, the analyst will become trapped by the judgments already made. Chances increase that he or she will miss indications of change, and it becomes harder to revise an initial estimate.

The counterintelligence technique of deception thrives on this tendency to ignore evidence that would disprove an existing assumption.

Denial and deception succeed if one opponent can get the other to make a wrong initial estimate.

Combining Evidence

In almost all cases, intelligence analysis involves combining disparate types of evidence.

Analysts have to have methods for weighing the combined data to help them make qualitative judgments as to which conclusions the various data best support.

Convergent and Divergent Evidence

Two items of evidence are said to be conflicting or divergent if one item favors one conclusion and the other item favors a different conclusion.

Two items of evidence are contradictory if they say logically opposing things.

Redundant Evidence

Convergent evidence can also be redundant. To understand the concept of redundancy, it helps to understand its importance in communications theory.

Redundant, or duplicative, evidence can have corroborative redundancy or cumulative redundancy. In both types, the weight of the evidence piles up to reinforce a given conclusion. A simple example illustrates the difference.
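
On one common reading of the terms, corroborative redundancy means independent sources reporting the same event, while cumulative redundancy means different events that each, separately, support the same conclusion. A small sketch with hypothetical reports illustrates the distinction:

    # Hypothetical reports illustrating the two kinds of redundancy.
    reports = [
        {"source": "HUMINT-1", "event": "convoy left depot", "supports": "attack imminent"},
        {"source": "IMINT-3",  "event": "convoy left depot", "supports": "attack imminent"},
        {"source": "COMINT-7", "event": "radio silence ordered", "supports": "attack imminent"},
    ]

    # Corroborative redundancy: independent sources reporting the same event.
    events = {}
    for r in reports:
        events.setdefault(r["event"], set()).add(r["source"])
    for event, sources in events.items():
        if len(sources) > 1:
            print(f"Corroborated: '{event}' reported by {sorted(sources)}")

    # Cumulative redundancy: different events all piling up behind one conclusion.
    conclusions = {}
    for r in reports:
        conclusions.setdefault(r["supports"], set()).add(r["event"])
    for conclusion, evts in conclusions.items():
        print(f"Cumulative support for '{conclusion}': {len(evts)} distinct events")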

Formal Methods for Combining Evidence

The preceding sections describe some informal methods for evidence combination. It often is important to combine evidence and demonstrate the logical process of reaching a conclusion based on that evidence by careful argument. The formal process of making that argument is called structured argumentation. Such formal structured argumentation approaches have been around at least since the seventeenth century.

Structured Argumentation

Structured argumentation is an analytic process that relies on a framework to make assumptions, reasoning, rationales, and evidence explicit and transparent. The process begins with breaking down and organizing a problem into parts so that each one can be examined systematically, as discussed in earlier chapters.

As analysts work through each part, they identify the data requirements, state their assumptions, define any terms or concepts, and collect and evaluate relevant information. Potential explanations or hypotheses are formulated and evaluated with empirical evidence, and information gaps are identified.

Formal graphical or numerical processes for combining evidence are time consuming to apply and are not widely used in intelligence analysis. They are usually reserved for cases in which the customer requires them because the issue is critically important, because the customer wants to examine the reasoning process, or because the exact probabilities associated with each alternative are important to the customer.

Wigmore’s Charting Method

John Henry Wigmore was the dean of the Northwestern University School of Law in the early 1900s and author of a ten-volume treatise commonly known as Wigmore on Evidence. In this treatise he defined some principles for rational inquiry into disputed facts and methods for rigorously analyzing and ordering possible inferences from those facts.

Wigmore argued that structured argumentation brings into the open and makes explicit the important steps in an argument, and thereby makes it easier to judge both their soundness and their probative value. One of the best ways to recognize any inherent tendencies one may have in making biased or illogical arguments is to go through the body of evidence using Wigmore’s method.

  • Different symbols are used to show varying kinds of evidence: explanatory, testimonial, circumstantial, corroborative, undisputed fact, and combinations.
  • Relationships between symbols (that is, between individual pieces of evidence) are indicated by their relative positions (for example, evidence tending to prove a fact is placed below the fact symbol).
  • The connections between symbols indicate the probative effect of their relationship and the degree of uncertainty about the relationship.

Even proponents admit that it is too time-consuming for most practical uses, especially in intelligence analysis, where the analyst typically has limited time.

Nevertheless, making Wigmore’s approach, or something like it, widely usable in intelligence analysis would be a major contribution.
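
As a rough sketch of what such an approach might look like in machine-readable form (assuming the networkx library; the evidence items are hypothetical and the encoding is a simplification rather than Wigmore’s actual notation), each node is a proposition or item of evidence and each edge states the direction of its probative effect:

    # Simplified, Wigmore-inspired evidence chart (not his actual notation).
    import networkx as nx

    chart = nx.DiGraph()
    chart.add_node("F", kind="fact to be proved",
                   text="Suspect was at the warehouse on 3 May")
    chart.add_node("E1", kind="testimonial", text="Guard says he saw the suspect")
    chart.add_node("E2", kind="circumstantial", text="Suspect's car logged at gate")
    chart.add_node("E3", kind="explanatory", text="Guard's eyesight is poor")

    # Edges carry the probative effect and a rough degree of uncertainty.
    chart.add_edge("E1", "F", effect="tends to prove", strength="moderate")
    chart.add_edge("E2", "F", effect="tends to prove", strength="strong")
    chart.add_edge("E3", "E1", effect="weakens", strength="moderate")

    # Walk the chart: what bears, directly or indirectly, on the fact in issue?
    for node in nx.ancestors(chart, "F"):
        print(node, chart.nodes[node]["text"])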

Bayesian Techniques for Combining Evidence

By the early part of the eighteenth century, mathematicians had solved what is called the “forward probability” problem: When all of the facts about a situation are known, what is the probability of a given event happening?

Intelligence analysts find the inverse problem—inferring the underlying situation from the events it produces—of far more interest than the forward probability problem, because they often must make judgments about an underlying situation from observing the events that the situation causes. Thomas Bayes developed a formula for the answer that bears his name: Bayes’ rule.
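
In its simplest form, for a hypothesis H and evidence E, Bayes’ rule gives P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|not H)P(not H)]. A small sketch with illustrative numbers (the probabilities are assumptions chosen only to show the mechanics):

    # Bayes' rule for a single hypothesis H and one item of evidence E.
    # All probabilities below are illustrative assumptions.
    def posterior(prior_h, p_e_given_h, p_e_given_not_h):
        """Return P(H | E) from the prior and the two likelihoods."""
        p_e = p_e_given_h * prior_h + p_e_given_not_h * (1.0 - prior_h)
        return p_e_given_h * prior_h / p_e

    # Prior belief that a test site is being prepared: 20 percent.
    # New imagery of increased vehicle activity is judged three times more
    # likely if preparations are under way than if they are not.
    updated = posterior(prior_h=0.20, p_e_given_h=0.6, p_e_given_not_h=0.2)
    print(f"Posterior P(H | E) = {updated:.2f}")  # 0.43: belief rises, not to certainty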

One advantage claimed for Bayesian analysis is its ability to blend the subjective probability judgments of experts with historical frequencies and the latest sample evidence.

Bayes seems difficult to teach. It is generally considered to be “advanced” statistics and, given the trouble that many people (including intelligence analysts) have with even elementary probabilistic and statistical techniques, applying it seems to require expertise that is not widely resident in the intelligence community and is otherwise available only through expensive software.

A Note about the Role of Information Technology

It may be impossible for new analysts today to appreciate the markedly different work environment that their counterparts faced 40 years ago. Incoming intelligence arrived at the analyst’s desk in hard copy, to be scanned, marked up, and placed in file drawers. Details about intelligence targets—installations, persons, and organizations—were often kept on 5” × 7” cards in card catalog boxes. Less tidy analysts “filed” their most interesting raw intelligence on their desktops and cabinet tops, sometimes in stacks over 2 feet high.

IT systems allow analysts to acquire raw intelligence material of interest (incoming classified cable traffic and open source) and to search, organize, and store it electronically. Such IT capabilities have been eagerly accepted and used by analysts because of their advantages in dealing with the information explosion.

A major consequence of this information explosion is that we must deal with what is called “big data” in collating and analyzing intelligence. Big data has been defined as “datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze.”

Analysts, inundated by the flood, have turned to IT tools for extracting meaning from the data. A wide range of such tools exists, including ones for visualizing the data and identifying patterns of intelligence interest, ones for conducting statistical analysis, and ones for running simulation models. Analysts with responsibility for counterterrorism, organized crime, counternarcotics, counterproliferation, or financial fraud can choose from commercially available tools such as Palantir, CrimeLink, Analyst’s Notebook, NetMap, Orion, or VisuaLinks to produce matrix and link diagrams, timeline charts, telephone toll charts, and similar pattern displays.

Tactical intelligence units, in both the military and law enforcement, find geospatial analysis tools to be essential.

Some intelligence agencies also have in-house tools that replicate these capabilities. Depending on the analyst’s specialty, some tools may be more relevant than others. All, though, have definite learning curves and their database structures are generally not compatible with each other. The result is that these tools are used less effectively than they might be, and the absence of a single standard tool hinders collaborative work across intelligence organizations.

Summary

In gathering information for synthesizing the target model, analysts should start by reviewing existing finished and raw intelligence. This provides a picture of the current target model. It is important to do a key assumptions check at this point: Do the premises that underlie existing conclusions about the target seem to be valid?

Next, the analyst must acquire and evaluate raw intelligence about the target, and fit it into the target model—a step often called collation. Raw intelligence is viewed and evaluated differently depending on whether it is literal or nonliteral. Literal sources include open source, COMINT, HUMINT, and cyber collection. Nonliteral sources involve several types of newer and highly focused collection techniques that depend heavily on processing, exploitation, and interpretation to turn the material into usable intelligence.

Once a model template has been selected for the target, it becomes necessary to fit the relevant information into the template. Fitting the information into the model template requires a three-step process:

  • Evaluating the source, by determining whether the source (a) is competent, that is, knowledgeable about the information being given; (b) had the access needed to get the information; and (c) had a vested interest or bias regarding the information provided.
  • Evaluating the communications channel through which the information arrived. Information that passes through many intermediate points becomes distorted. Processors and exploiters of collected information can also have a vested interest or bias.
  • Evaluating the credentials of the evidence itself. This involves evaluating (a) the credibility of evidence, based in part on the previously completed source and communications channel evaluations; (b) the reliability; and (c) the relevance of the evidence. Relevance is a particularly important evaluation step; it is too easy to fit evidence into the wrong target model.

As evidence is evaluated, it must be combined and incorporated into the target model. Multiple pieces of evidence can be convergent (favoring the same conclusion) or divergent (favoring different conclusions and leading to alternative target models). Convergent evidence can also be redundant, reinforcing a conclusion.

Tools to extract meaning from data, for example, by relationship, pattern, and geospatial analysis, are used by analysts where they add value that offsets the cost of “care and feeding” of the tool. Tools to support structured argumentation are available and can significantly improve the quality of the analytic product, but whether they will find serious use in intelligence analysis is still an open question.

Denial, Deception, and Signaling

There is nothing more deceptive than an obvious fact.

Sherlock Holmes, in “The Boscombe Valley Mystery”

In evaluating evidence and developing a target model, an analyst must constantly take into account the fact that evidence may have been deliberately shaped by an opponent.

Denial and deception are major weapons in the counterintelligence arsenal of a country or organization.

 

They may be the only weapons available for many countries to use against highly sophisticated technical intelligence.

At the opposite extreme, the opponent may intentionally shape what the analyst sees, not to mislead but rather to send a message or signal. It is important to be able to recognize signals and to understand their meaning.

Denial

Denial and deception come in many forms. Denial is somewhat more straightforward.

Deception

Deception techniques are limited only by our imagination. Passive deception might include using decoys or having the intelligence target emulate an activity that is not of intelligence interest—making a chemical or biological warfare plant look like a medical drug production facility, for example. Decoys that have been widely used in warfare include dummy ships, missiles, and tanks.

Active deception includes misinformation (false communications traffic, signals, stories, and documents), misleading activities, and double agents (agents who have been discovered and “turned” to work against their former employers), among others.

Illicit groups (for example, terrorists) conduct most of the deception that intelligence must deal with. Illicit arms traffickers (known as gray arms traffickers) and narcotics traffickers have developed an extensive set of deceptive techniques to evade international restrictions. They use intermediaries to hide financial transactions. They change ship names or aircraft call signs en route to mislead law enforcement officials. One airline changed its corporate structure and name overnight when its name became linked to illicit activities. Gray arms traffickers use front companies and false end-user certificates.

Defense against Denial and Deception: Protecting Intelligence Sources and Methods

In the intelligence business, it is axiomatic that if you need information, someone will try to keep it from you. And we have noted repeatedly that if an opponent can model a system, he can defeat it. So your best defense is to deny your opponent an understanding of your intelligence capabilities. Without such understanding, the opponent cannot effectively conduct D&D.

For small governments, and in the business intelligence world, protection of sources and methods is relatively straightforward. Selective dissemination of and tight controls on intelligence information are possible. But a major government has too many intelligence customers to justify such tight restrictions. Thus these bureaucracies have established an elaborate system to simultaneously protect and disseminate intelligence information. This protection system is loosely called compartmentation, because it puts information in “compartments” and restricts access to the compartments.

In the U.S. intelligence community, the intelligence product, sources, and methods are protected by the sensitive compartmented information (SCI) system. The SCI system uses an extensive set of compartments to protect sources and methods. Only the collectors and processors have access to many of the compartmented materials. Much of the product, however, is protected only by standard markings such as “Secret,” and access is granted to a wide range of people.

Open-source intelligence has little or no protection because the source material is unclassified. However, the techniques for exploiting open-source material, and the specific material of interest for exploitation, can tell an opponent much about an intelligence service’s targets. For this reason, intelligence agencies that translate open source often restrict its dissemination, using markings such as “Official Use Only.”

Higher Level Denial and Deception

A few straightforward examples of denial and deception were cited earlier. But sophisticated deception must follow a careful path; it has to be very subtle (too-obvious clues are likely to tip off the deception) yet not so subtle that your opponent misses it. It is commonly used in HUMINT, but today it frequently requires multi-INT participation or a “swarm” attack to be effective. Increasingly, carefully planned and elaborate multi-INT D&D is being used by various countries. Such efforts even have been given a different name—perception management—that focuses on the end result that the effort is intended to achieve.

Perception management can be effective against an intelligence organization that, through hubris or bureaucratic politics, is reluctant to change its initial conclusions about a topic. If the opposing intelligence organization makes a wrong initial estimate, then long-term deception is much easier to pull off. If D&D are successful, the opposing organization faces an unlearning process: its predispositions and settled conclusions have to be discarded and replaced.

The best perception management results from highly selective targeting, intended to get a specific message to a specific person or organization. This requires knowledge of that person’s or organization’s preferences in intelligence—a difficult feat to accomplish, but the payoff of a successful perception management effort is very high. It can result in an opposing intelligence service making a miscall or causing it to develop a false sense of security. If you are armed with a well-developed model of the three elements of a foreign intelligence strategy —targets, operations, and linkages—an effective counterintelligence counterattack in the form of perception management or covert action is possible, as the following examples show.

The Farewell Dossier

Detailed knowledge of an opponent is the key to successful counterintelligence, as the “Farewell” operation shows. In 1980 the French internal security service Direction de la Surveillance du Territoire (DST) recruited a KGB lieutenant colonel, Vladimir I. Vetrov, codenamed “Farewell.” Vetrov gave the French some four thousand documents, detailing an extensive KGB effort to clandestinely acquire technical know-how from the West, primarily from the United States.

In 1981 French president François Mitterrand shared the source and the documents (which DST named “the Farewell Dossier”) with U.S. president Ronald Reagan.

In early 1982 the U.S. Department of Defense, the Federal Bureau of Investigation, and the CIA began developing a counterattack. Instead of simply improving U.S. defenses against the KGB efforts, the U.S. team used the KGB shopping list to feed back, through CIA-controlled channels, the items on the list—augmented with “improvements” that were designed to pass acceptance testing but would fail randomly in service. Flawed computer chips, turbines, and factory plans found their way into Soviet military and civilian factories and equipment. Misleading information on U.S. stealth technology and space defense flowed into the Soviet intelligence reporting. The resulting failures were a severe setback for major segments of Soviet industry. The most dramatic single event resulted when the United States provided gas pipeline management software that was installed in the trans-Siberian gas pipeline. The software had a feature that would, at some time, cause the pipeline pressure to build up to a level far above its fracture pressure. The result was the Soviet gas pipeline explosion of 1982, described as the “most monumental non-nuclear explosion and fire ever seen from space.”

Countering Denial and Deception

In recognizing possible deception, an analyst must first understand how deception works. Four fundamental factors have been identified as essential to deception: truth, denial, deceit, and misdirection.

Truth—All deception works within the context of what is true. Truth establishes a foundation of perceptions and beliefs that are accepted by an opponent and can then be exploited in deception. Supplying the opponent with real data establishes the credibility of future communications that the opponent then relies on.

Denial—It’s essential to deny the opponent access to some parts of the truth. Denial conceals aspects of what is true, such as your real intentions and capabilities. Denial often is used when no deception is intended; that is, the end objective is simply to deny knowledge. One can deny without intent to deceive, but not the converse.

Deceit—Successful deception requires the practice of deceit.

Misdirection—Deception depends on manipulating the opponent’s perceptions. You want to redirect the opponent away from the truth and toward a false perception. In operations, a feint is used to redirect the adversary’s attention away from where the real operation will occur.

 

The first three factors allow the deceiver to present the target with desirable, genuine data while reducing or eliminating signals that the target needs to form accurate perceptions. The fourth provides an attractive alternative that commands the target’s attention.

The effectiveness of hostile D&D is a direct reflection of the predictability of collection.

Collection Rules

The best way to defeat D&D is for all of the stakeholders in the target-centric approach to work closely together. The two basic rules for collection, described here, form a complementary set. One rule is intended to provide incentive for collectors to defeat D&D. The other rule suggests ways to defeat it.

The first rule is to establish an effective feedback mechanism.

Relevance of the product to intelligence questions is the correct measure of collection effectiveness, and analysts and customers—not collectors—determine relevance. The system must enforce a content-oriented evaluation of the product, because content is used to determine relevance.

The second rule is to make collection smarter and less predictable. There exist several tried-and-true tactics for doing so:

  • Don’t optimize systems for quality and quantity; optimize for content.
  • Apply sensors in new ways. Analysis groups often can help with new sensor approaches in their areas of responsibility.
  • Consider provocative techniques against D&D targets. Probing an opponent’s system and watching the response is a useful tactic for learning more about the system. Even so, probing may have its own set of undesirable consequences: The Soviets would occasionally chase and shoot down the reconnaissance aircraft to discourage the probing practice.

  • Hit the collateral or inferential targets. If an opponent engages in D&D about a specific facility, then supporting facilities may allow inferences to be made or to expose the deception. Security measures around a facility and the nature and status of nearby communications, power, or transportation facilities may provide a more complete picture.
  • Finally, use deception to protect a collection capability.

The best weapon against D&D is to mislead or confuse opponents about intelligence capabilities, disrupt their warning programs, and discredit their intelligence services.

 

An analyst can often beat D&D simply by using several types of intelligence—HUMINT, COMINT, and so on—in combination, simultaneously, or successively. It is relatively easy to defeat one sensor or collection channel. It is more difficult to defeat all types of intelligence at the same time.

Increasingly, opponents can be expected to use “swarm” D&D, targeting several INTs in a coordinated effort like that used by the Soviets in the Cuban missile crisis and the Indian government in the Pokhran deception.

The Information Instrument

Analysts, whether single- or all-source, are the focal points for identifying D&D. In the types of conflicts that analysts now deal with, opponents have made effective use of a weapon that relies on deception: using both traditional media and social media to paint a misleading picture of their adversaries. Nongovernmental opponents (insurgents and terrorists) have made effective use of this information instrument.

 

The prevalence of media reporters in all conflicts, and the easy access to social media, have given the information instrument more utility. Media deception has been used repeatedly by opponents to portray U.S. and allied “atrocities” during military campaigns in Kosovo, Iraq, Afghanistan, and Syria.

Signaling

Signaling is the opposite of denial and deception. It is the process of deliberately sending a message, usually to an opposing intelligence service.

Its use depends on a good knowledge of how the opposing intelligence service obtains and analyzes knowledge. Recognizing and interpreting an opponent’s signals is one of the more difficult challenges an analyst must face. Depending on the situation, signals can be made verbally, by actions, by displays, or by very subtle nuances that depend on the context of the signal.

In negotiations, signals can be both verbal and nonverbal.

True signals often are used in place of open declarations, to provide information while preserving the right of deniability.

Analyzing signals requires examining the content of the signal and its context, timing, and source. Statements made to the press are quite different from statements made through diplomatic channels—the latter usually carry more weight.

Signaling between members of the same culture can be subtle, with high success rates of the signal being understood. Two U.S. corporate executives can signal to each other with confidence; they both understand the rules. A U.S. executive and an Indonesian executive would face far greater risks of misunderstanding each other’s signals. The cultural differences in signaling can be substantial. Cultures differ in their reliance on verbal and nonverbal signals to communicate their messages. The more people rely on nonverbal or indirect verbal signals and on context, the higher the complexity.

  • In July 1990 the U.S. State Department unintentionally sent several signals that Saddam Hussein apparently interpreted as a green light to attack Kuwait. State Department spokesperson Margaret Tutwiler said, “[W]e do not have any defense treaties with Kuwait. . . .” The next day, Ambassador April Glaspie told Saddam Hussein, “[W]e have no opinion on Arab-Arab conflicts like your border disagreement with Kuwait.” And two days before the invasion, Assistant Secretary of State John Kelly testified before the House Foreign Affairs Committee that there was no obligation on our part to come to the defense of Kuwait if it were attacked.

 

Analytic Tradecraft in a World of Denial and Deception

Writers often use the analogy that intelligence analysis is like the medical profession.

Analysts and doctors weigh evidence and reach conclusions in much the same fashion. In fact, intelligence analysis, like medicine, is a combination of art, tradecraft, and science. Different doctors can draw different conclusions from the same evidence, just as different analysts do.

But intelligence analysts have a different type of problem than doctors do. Scientific researchers and medical professionals do not routinely have to deal with denial and deception. Though patients may forget to tell them about certain symptoms, physicians typically don’t have an opponent who is trying to deny them knowledge. In medicine, once doctors have a process for treating a pathology, it will in most cases work as expected. The human body won’t develop countermeasures to the treatment. But in intelligence, your opponent may be able to identify the analysis process and counter it. If analysis becomes standardized, an opponent can predict how you will analyze the available intelligence, and then D&D become much easier to pull off.

One cannot establish a process and retain it indefinitely.

Intelligence analysis within the context of D&D is in fact analogous to being a professional poker player, especially in the games of Seven Card Stud or Texas Hold ’em. You have an opponent. Some of the opponent’s resources are in plain sight, some are hidden. You have to observe the opponent’s actions (bets, timing, facial expressions, all of which incorporate art and tradecraft) and do pattern analysis (using statistics and other tools of science).

Summary

In evaluating raw intelligence, analysts must constantly be aware of the possibility that they may be seeing material that was deliberately provided by the opposing side. Most targets of intelligence efforts practice some form of denial. Deception—providing false information—is less common than denial because it takes more effort to execute, and it can backfire.

Defense against D&D starts with your own denial of your intelligence capabilities to opposing intelligence services.

Where one intelligence service has extensive knowledge of another service’s sources and methods, more ambitious and elaborate D&D efforts are possible. Often called perception management, these involve developing a coordinated multi-INT campaign to get the opposing service to make a wrong initial estimate. Once this happens, the opposing service faces an unlearning process, which is difficult. A high level of detailed knowledge also allows for covert actions to disrupt and discredit the opposing service.

A collaborative target-centric process helps to stymie D&D by bringing together different perspectives from the customer, the collector, and the analyst. Collectors can be more effective in a D&D environment with the help of analysts. Working as a team, they can make more use of deceptive, unpredictable, and provocative collection methods that have proven effective in defeating D&D.

Intelligence analysis is a combination of art, tradecraft, and science. In large part, this is because analysts must constantly deal with denial and deception, and dealing with D&D is primarily a matter of artfully applying tradecraft.

9 Systems Modeling and Analysis

Believe what you yourself have tested and found to be reasonable.

Buddha

In chapter 3, we described the target as three things: as a complex system, as a network, and as having temporal and spatial attributes.

Any entity having the attributes of structure, function, and process can be described and analyzed as a system, as noted in previous chapters.

The basic principles apply in modeling political and economic systems as well. Systems analysis can be applied to analyze both existing systems and those under development.

A government can be considered a system and analyzed in much the same way—by creating structural, functional, and process models.

Analyzing an Existing System: The Mujahedeen Insurgency

A single weapon can be defeated, as in this case, by tactics. But the proper mix of antiair weaponry could not. The mix here included surface-to-air missiles (SA-7s, British Blowpipes, and Stinger missiles) and machine guns (Oerlikons and captured Soviet Dashika machine guns). The Soviet helicopter operators could defend against some of these, but not all simultaneously. SA-7s were vulnerable to flares; Blowpipes were not. The HINDs could stay out of range of the Dashikas, but then they would be at an effective range for the Oerlikons.3 Unable to know what they might be hit with, Soviet pilots were likely to avoid attacking or rely on defensive maneuvers that would make them almost ineffective—which is exactly what happened.

Analyzing a Developmental System: Methodology

In intelligence, we also are concerned about modeling a system that is under development. The first step in modeling a developmental system, and particularly a future weapons system, is to identify the system(s) under development. Two approaches traditionally have been applied in weapons systems analysis, both based on reasoning paradigms drawn from the writings of philosophers: deductive and inductive.

  • The deductive approach to prediction is to postulate desirable objectives, in the eyes of the opponent; identify the system requirements; and then search the incoming intelligence for evidence of work on the weapons systems, subsystems, components, devices, and basic research and development (R&D) required to reach those objectives.
  • The opposite, an inductive or synthesis approach, is to begin by looking at the evidence of development work and then synthesize the advances in systems, subsystems, and devices that are likely to follow.

A number of writers in the intelligence field have argued that intelligence uses a different method of reasoning—abduction, which seeks to develop the best hypothesis or inference from a given body of evidence. Abduction is much like induction, but its stress is on integrating the analyst’s own thoughts and intuitions into the reasoning process.

The deductive approach can be described as starting from a hypothesis and using evidence to test the hypothesis. The inductive approach is described as evidence-based reasoning to develop a conclusion.7 Evidence-based reasoning is applied in a number of professions. In medicine, it is known as evidence-based practice—applying a combination of theory and empirical evidence to make medical decisions.

Both (or all three) approaches have advantages and drawbacks. In practice, though, deduction has some advantages over induction or abduction in identifying future systems development.

The problem arises when two or more systems are under development at the same time. Each system will have its R&D process, and it can be very difficult to separate the processes out of the mass of incoming raw intelligence. This is the “multiple pathologies” problem: When two or more pathologies are present in a patient, the symptoms are mixed together, and diagnosing the separate illnesses becomes very difficult. Generally, the deductive technique works better for dealing with the multiple pathologies issue in future systems assessments.

Once a system has been identified as being in development, analysis proceeds to the second step: answering customers’ questions about it. These questions usually are about the system’s functional, process, and structural characteristics—that is, about performance, schedule, risk, and cost.

As the system comes closer to completion, a wider group of customers will want to know what specific targets the system has been designed against, in what circumstances it will be used, and what its effectiveness will be. These matters typically require analysis of the system’s performance, including its suitability for operating in its environment or in accomplishing the mission for which it has been designed. The schedule for completing development and fielding the system, as well as associated risks, also become important. In some cases, the cost of development and deployment will be of interest.

Performance

Performance analyses are done on a wide range of systems, varying from simple to highly complex multidisciplinary systems. Determining the performance of a narrowly defined system is straightforward. More challenging is assessing the performance of a complex system such as an air defense network or a narcotics distribution network. Most complex system performance analysis is now done by using simulation, a topic to which we will return.

Comparative Modeling

Comparative modeling is similar to benchmarking, but the focus is on analyzing one group’s system or product performance against an opponent’s.

Comparing your country’s or organization’s developments with those of an opponent can involve four distinct fact patterns. Each pattern poses challenges that the analyst must deal with.

In short, the possibilities can be described as follows:

  • We did it—they did it.
  • We did it—they didn’t do it.
  • We didn’t do it—they did it.
  • We didn’t do it—they didn’t do it.

There are many examples of the “we did it—they did it” sort of intelligence problem, especially in industries in which competitors typically develop similar products.

In the second case, “we did it—they didn’t do it,” the intelligence officer runs into a real problem: It is almost impossible to prove a negative in intelligence. The fact that no intelligence information exists about an opponent’s development cannot be used to show that no such development exists.

The third pattern, “we didn’t do it—they did it,” is the most dangerous type that we encounter. Here the intelligence officer has to overcome opposition from skeptics in his country, because he has no model to use for comparison.

The fourth pattern, “we didn’t do it—they didn’t do it,” presents analysts with an opportunity to go off in the wrong direction analytically.

This sort of transposition of cause and effect is not uncommon in human source reporting. Part of the skill required of an intelligence analyst is to avoid the trap of taking sources too literally. Occasionally, intelligence analysts must spend more time than they should on problems that are even more fantastic or improbable than that of the German engine killer.

 

 

Simulation

Performance simulation typically is a parametric, sensitivity, or “what if” type of analysis; that is, the analyst needs to try a relationship between two variables (parameters), run a computer analysis and examine the results, change the input constants, and run the simulation again.
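
To make the parametric idea concrete, the following minimal Python sketch sweeps a single input (an assumed surface-to-air missile engagement range) and recomputes a simple, assumed measure of effectiveness at each step. The model, constants, and function names are illustrative assumptions, not an actual performance simulation.

```python
# Illustrative parametric ("what if") sweep. All values are assumptions.
AIRCRAFT_SPEED_KM_PER_MIN = 12.0   # ~720 km/h transit speed (assumed)
SHOTS_PER_MIN = 1.5                # assumed engagement rate
SINGLE_SHOT_KILL_PROB = 0.25       # assumed per-shot effectiveness

def prob_kill(engagement_range_km):
    """Probability of at least one kill during the aircraft's exposure window."""
    exposure_min = engagement_range_km / AIRCRAFT_SPEED_KM_PER_MIN
    shots = SHOTS_PER_MIN * exposure_min
    return 1 - (1 - SINGLE_SHOT_KILL_PROB) ** shots

# Vary the parameter, run the model, examine the results, and repeat.
for rng_km in (5, 10, 20, 30, 40):
    print(f"engagement range {rng_km:2d} km -> P(kill) = {prob_kill(rng_km):.2f}")
```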

Simulation cases also illustrate the common systems analysis problem of presenting the worst-case estimate: National security plans often are made on the basis of a systems estimate; out of fear that policymakers may become complacent, an analyst will tend to make the worst case that is reasonably possible.

The Mirror-Imaging Challenge

Both comparative modeling and simulation have to deal with the risks of mirror imaging. The opponent’s system or product (such as an airplane, a missile, a tank, or a supercomputer) may be designed to do different things or to serve a different market than expected.

The risk in all systems analysis is one of mirror imaging, which is much the same as the mirror-imaging problem in decision-making.

Unexpected Simplicity

In effect, the Soviets applied a version of Occam’s razor (choose the simplest explanation that fits the facts at hand) in their industrial practice. Because they were cautious in adopting new technology, they tended to keep everything as simple as possible. They liked straightforward, proven designs. When they copied a design, they simplified it in obvious ways and got rid of the extra features that the United States tends to put on its weapons systems. The Soviets made maintenance as simple as possible, because the hardware was going to be maintained by people who did not have extensive training.

In a comparison of Soviet and U.S. small jet engine technology, the U.S. model engine was found to have 2.5 times as much materials cost per pound of weight. It was smaller and lighter than the Soviet engine, of course, but it had 12 times as many maintenance hours per flight-hour as the Soviet model, and overall the Soviet engine had a life cycle cost half that of the U.S. engine.10 The ability to keep things simple was the Soviets’ primary advantage over the United States in technology, especially military technology.

Quantity May Replace Quality

U.S. analysts often underestimated the number of units that the Soviets would produce. The United States needed fewer units of a given system to perform a mission, since each unit had more flexibility, quality, and performance ability than its Soviet counterpart. The United States forgot a lesson that it had learned in World War II—U.S. Sherman tanks were inferior to the German Tiger tanks in combat, but the United States deployed a lot of Shermans and overwhelmed the Tigers with numbers.

Schedule

The intelligence customer’s primary concern about systems under development usually centers on performance, as discussed previously.

Estimating the schedule, though, requires understanding the systems development process, which is one of the many types of processes we deal with in intelligence.

Process Models

The functions of any system are carried out by processes. The processes will be different for different systems. That’s true whether you are describing an organization, a weapons system, or an industrial system. Different types of organizations, for example—civil government, law enforcement, military, and commercial organizations—will have markedly different processes. Even similar types of organizations will have different processes, especially in different cultures.

Political, military, economic, and weapons systems analysts all use specialized process-analysis techniques.

Most processes and most process models have feedback loops. Feedback allows the system to be adaptive, that is, to adjust its inputs based on the output. Even simple systems such as a home heating/air conditioning system provide feedback via a thermostat. For complex systems, feedback is essential to prevent the process from producing undesirable output. Feedback is an important part of both synthesis and analysis.
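
The thermostat example can be written out as a short feedback loop. This is a minimal sketch with assumed constants; the point is only that the measured output (temperature) drives the next input (heater on or off).

```python
# Minimal feedback loop: the output (temperature) adjusts the input (heater).
SETPOINT = 20.0    # desired temperature, deg C (assumed)
temp = 15.0        # starting room temperature (assumed)

for minute in range(10):
    heater_on = temp < SETPOINT           # feedback decision based on output
    temp += 0.8 if heater_on else -0.3    # assumed heating gain vs. heat loss
    print(f"minute {minute:2d}: heater {'on ' if heater_on else 'off'}, temp {temp:.1f}")
```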

Development Process Models

In determining the schedule for a systems development, we concentrate on examining the development process and identifying the critical points in that process.

An example development process model is shown in Figure 9-2. In this display, the process nodes are separated by function into “swim lanes” to facilitate analysis.

 

The Program Cycle Model

Beginning with the system requirement and progressing to production, deployment, and operations, each phase bears unique indicators and opportunities for collection and synthesis/analysis. Customers of intelligence often want to know where a major system is in this life cycle.

Different types of systems may evolve through different versions of the cycle, and product development differs somewhat from systems development. It is therefore important for the analyst to first determine the specific names and functions of the cycle phases for the target country, industry, or company and then determine exactly where the target program is in that cycle. With that information, analytic techniques can be used to predict when the program might become operational or begin producing output.

It is important to know where a program is in the cycle in order to make accurate predictions.

A general rule of thumb is that the more phases in the program cycle, the longer the process will take, all other things being equal. Countries and organizations with large, stable bureaucracies typically have many phases, and the process, whatever it may be, takes that much longer.

Program Staffing

The duration of any stage of the cycle shown in the Generic Program Cycle is determined by the type of work involved and the number and expertise of workers assigned.

 

Fred Brooks, one of the premier figures in computer systems development, defined four types of projects in his book The Mythical Man-Month. Each type of project has a unique relationship between the number of workers needed (the project loading) and the time it takes to complete the effort.

The first type of project is a perfectly partitionable task—that is, one that can be completed in half the time by doubling the number of workers.

A second type of project involves the unpartitionable task…The profile is referred to here as the “baby production curve,” because no matter how many women are assigned to the task, it takes nine months to produce a baby.

Most small projects fit the curve shown in the lower left of the figure, which is a combination of the first two curves. In this case a project can be partitioned into subtasks, but the time it takes for people working on different subtasks to communicate with one another will eventually balance out the time saved by adding workers, and the curve levels off.

Large projects tend to be dominated by communication. At some point, shown as the bottom point of the lower right curve, adding additional workers begins to slow the project because all workers have to spend more time in communication.
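
A toy calculation illustrates why the curve levels off and then turns upward. The work content and per-pair communication overhead below are assumed numbers, not figures from Brooks; the shape of the result is what matters.

```python
# Toy model of Brooks's observation: for a partitionable task with
# communication overhead, adding workers helps only up to a point.
WORK = 120.0          # person-months of partitionable work (assumed)
COMM_PER_PAIR = 0.15  # months of overhead per pair of communicating workers (assumed)

def completion_time(n_workers):
    pairs = n_workers * (n_workers - 1) / 2
    return WORK / n_workers + COMM_PER_PAIR * pairs

for n in (2, 5, 10, 20, 40):
    print(f"{n:3d} workers -> {completion_time(n):6.1f} months")

best = min(range(1, 200), key=completion_time)
print("Completion time is minimized near", best,
      "workers; beyond that, adding staff slows the project.")
```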

The Technology Factor

Technology is another important factor in any development schedule; and technology is neither available nor applied in the same way everywhere. An analyst in a technologically advanced country, such as the United States, tends to take for granted that certain equipment—test equipment, for example—will be readily available and will be of a certain quality.

There is also a definite schedule advantage to not being the first to develop a system. A country or organization that is not a leader in technology development has the advantage of learning from the leader’s mistakes, an advantage that entails being able to keep research and development costs low and avoid wrong paths.

A basic rule of engineering is that you are halfway to a solution when you know that there is a solution, and you are three-quarters there when you know how a competitor solved the problem. It took much less time for the Soviets to develop atomic and hydrogen bombs than U.S. intelligence had predicted. The Soviets had no principles of impotence or doubts to slow them down.

Risk

Analysts often assume that the programs and projects they are evaluating will be completed on time and that the target system will work perfectly. They would seldom be so foolish in evaluating their own projects or the performance of their own organizations. Risk analysis needs to be done in any assessment of a target program.

It is typically difficult to do and, once done, difficult to get the customer to accept. But it is important to do because intelligence customers, like many analysts, also tend to assume that an opponent’s program will be executed perfectly.

One fairly simple but often overlooked approach to evaluating the probability of success is to examine the success rate of similar ventures.

Known risk areas can be readily identified from past experience and from discussions with technical experts who have been through similar projects. The risks fall into four major categories—programmatic, technical, production, and engineering. Analyzing potential problems requires identifying specific potential risks from each category. Some of these risks include the following:

 

  • Programmatic: funding, schedule, contract relationships, political issues
  • Technical: feasibility, survivability, system performance
  • Production: manufacturability, lead times, packaging, equipment
  • Engineering: reliability, maintainability, training, operations

Risk assessment quantifies risks and ranks them to establish those of most concern. A typical ranking is based on the risk factor, which is a mathematical combination of the probability of failure and the consequence of failure. This assessment requires a combination of expertise and software tools in a structured and consistent approach to ensure that all risk categories are considered and ranked.
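
As a simple illustration of the risk-factor calculation, the sketch below multiplies an assumed probability of failure by an assumed consequence score for a few hypothetical risks and ranks the results. The risk entries and numbers are invented for illustration.

```python
# Quantitative risk ranking: risk factor = probability of failure x consequence.
# The specific risks, probabilities, and consequence scores are assumptions.
risks = [
    # (category, risk, probability of failure, consequence on a 1-10 scale)
    ("Programmatic", "funding shortfall",        0.30, 8),
    ("Technical",    "guidance system accuracy", 0.50, 9),
    ("Production",   "long-lead-time parts",     0.40, 5),
    ("Engineering",  "maintainability",          0.20, 4),
]

ranked = sorted(risks, key=lambda r: r[2] * r[3], reverse=True)
for category, name, p_fail, consequence in ranked:
    print(f"{category:12s} {name:28s} risk factor = {p_fail * consequence:.1f}")
```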

Risk management is the definition of alternative paths to minimize risk and set criteria on which to initiate or terminate these activities. It includes identifying alternatives, options, and approaches to mitigation.

Examples are initiation of parallel developments (for example, funding two manufacturers to build a satellite, where only one satellite is needed), extensive development testing, addition of simulations to check performance predictions, design reviews by consultants, or focused management attention on specific elements of the program. A number of decision analysis tools are useful for risk management. The most widely used tool is the Program Evaluation and Review Technique (PERT) chart, which shows the interrelationships and dependencies among tasks in a program on a timeline.
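
A PERT-style schedule calculation reduces, at its simplest, to finding the longest path of dependent tasks. The sketch below does that forward pass for a hypothetical development program; the task names, durations, and dependencies are assumptions, not data from any real program.

```python
# Minimal critical-path (PERT-style) calculation. Durations are in months.
tasks = {
    # task: (duration, prerequisites)
    "design":       (6,  []),
    "prototype":    (8,  ["design"]),
    "test_range":   (4,  ["design"]),
    "flight_tests": (5,  ["prototype", "test_range"]),
    "production":   (10, ["flight_tests"]),
}

finish = {}  # earliest finish time for each task

def earliest_finish(task):
    if task not in finish:
        duration, prereqs = tasks[task]
        start = max((earliest_finish(p) for p in prereqs), default=0)
        finish[task] = start + duration
    return finish[task]

program_length = max(earliest_finish(t) for t in tasks)
print("Estimated minimum program duration:", program_length, "months")
```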

Cost

Systems analysis usually doesn’t focus heavily on cost estimates. The usual assumption is that costs will not keep the system from being completed. Sometimes, though, the costs are important because of their effect on the overall economy of a country.

Estimating the cost of a system usually starts with comparative modeling. That is, you begin with an estimate of what it would cost your organization or an industry in your country to build something. You multiply that number by a factor that accounts for the difference in costs of the target organization (and they will always be different).

When several system models are being considered, cost-utility analysis may be necessary. Cost-utility analysis is an important part of decision prediction. Many decision-making processes, especially those that require resource allocation, make use of cost-utility analysis. For an analyst assessing a foreign military’s decision whether to produce a new weapons system, it is a useful place to start. But the analyst must be sure to take “rationality” into account. As noted earlier, what is “rational” is different across cultures and from one individual to the next. It is important for the analyst to understand the logic of the decision maker—that is, how the opposing decision maker thinks about topics such as cost and utility.

In performing cost-utility analysis, the analyst must match cost figures to the same time horizon over which utility is being assessed. This becomes difficult when the horizon extends more than a few years out. Life-cycle costs should be considered for new systems, and many new systems have life cycles in the tens of years.
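
A minimal sketch of the comparative-cost and life-cycle idea: start from our own estimated cost, scale it by an assumed factor for the target organization, and discount assumed annual operating costs over the same horizon used for the utility assessment. Every number here is an assumption for illustration.

```python
# Hypothetical life-cycle cost estimate via comparative modeling.
DISCOUNT_RATE = 0.05        # assumed discount rate
HORIZON_YEARS = 15          # match the horizon used for the utility estimate
OUR_UNIT_COST = 40.0        # $M, our estimated production cost per unit (assumed)
TARGET_COST_FACTOR = 0.7    # assumed: target's production costs ~70% of ours
ANNUAL_OPS_COST = 3.5       # $M per unit per year (assumed)

acquisition = OUR_UNIT_COST * TARGET_COST_FACTOR
operations = sum(ANNUAL_OPS_COST / (1 + DISCOUNT_RATE) ** year
                 for year in range(1, HORIZON_YEARS + 1))
print(f"Estimated life-cycle cost per unit: ${acquisition + operations:.1f}M "
      f"over {HORIZON_YEARS} years")
```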

Operations Research

A number of specialized methodologies are used to do systems analysis. Operations research is one of the more widely used ones.

Operations research has a rigorous process for defining problems that can be usefully applied in intelligence. As one specialist in the discipline has noted, “It often occurs that the major contribution of the operations research worker is to decide what is the real problem.” Understanding the problem often requires understanding the environment and/or system in which an issue is embedded, and operations researchers do that well.

After defining the problem, the operations research process requires representing the system in mathematical form. That is, the operations researcher builds a computational model of the system and then manipulates or solves the model, using computers, to come up with an answer that approximates how the real-world system should function. Systems of interest in intelligence are characterized by uncertainty, so probability analysis is a commonly used approach.

Two widely used operations research techniques are linear programming and network analysis. They are used in many fields, such as network planning, reliability analysis, capacity planning, expansion capability determination, and quality control.

Linear Programming

Linear programming involves planning the efficient allocation of scarce resources, such as material, skilled workers, machines, money, and time.

Linear programs are simply systems of linear equations or inequalities that are solved in a manner that yields as its solution an optimum value—the best way to allocate limited resources, for example. The optimum value is based on some single-goal statement (provided to the program in the form of what is called a linear objective function). Linear programming is often used in intelligence for estimating production rates, though it has applicability in a wide range of disciplines.
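
A hedged sketch of how linear programming might be used to estimate production rates, using SciPy's linprog solver (an assumption about available tooling). The two product variants, resource limits, and coefficients are invented; the solver finds the allocation of machine-hours and labor that maximizes total output.

```python
# Hypothetical production-rate estimate via linear programming.
from scipy.optimize import linprog

# Decision variables: x0 = units of variant A, x1 = units of variant B.
# Objective: maximize total units produced (linprog minimizes, so negate).
c = [-1.0, -1.0]

# Resource constraints (A_ub @ x <= b_ub), all values assumed:
#   machine-hours: 40*x0 + 60*x1 <= 2400
#   labor-hours:   30*x0 + 20*x1 <= 1200
A_ub = [[40.0, 60.0],
        [30.0, 20.0]]
b_ub = [2400.0, 1200.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None), (0, None)], method="highs")
print("Estimated best-case monthly output:", res.x, "total units:", -res.fun)
```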

Network Analysis

In chapter 10 we’ll investigate the concept of network analysis as applied to relationships among entities. Network analysis in an operations research sense is not the same. Here, networks are interconnected paths over which things move. The things can be automobiles (in which case we are dealing with a network of roads), oil (with a pipeline system), electricity (with wiring diagrams or circuits), information signals (with communication systems), or people (with elevators or hallways).

In intelligence against networks, we frequently are concerned with things like maximum throughput of the system, the shortest (or cheapest) route between two or more locations, or bottlenecks in the system.
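
The throughput and cheapest-route questions map directly onto standard maximum-flow and shortest-path computations. The sketch below uses the networkx library (an assumption about tooling) on an invented pipeline network; capacities and costs are illustrative only.

```python
import networkx as nx

# Toy pipeline network: capacities in thousands of barrels per day (assumed).
G = nx.DiGraph()
G.add_edge("field", "pump_A", capacity=300)
G.add_edge("field", "pump_B", capacity=200)
G.add_edge("pump_A", "refinery", capacity=250)
G.add_edge("pump_B", "refinery", capacity=150)
G.add_edge("pump_A", "pump_B", capacity=100)

flow_value, _ = nx.maximum_flow(G, "field", "refinery")
print("Maximum throughput:", flow_value)

# Shortest (cheapest) route, using an assumed per-segment cost as the weight.
H = nx.DiGraph()
H.add_weighted_edges_from([
    ("field", "pump_A", 5), ("field", "pump_B", 3),
    ("pump_A", "refinery", 4), ("pump_B", "refinery", 9),
])
print("Cheapest route:", nx.shortest_path(H, "field", "refinery", weight="weight"))
```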

Summary

Any entity having the attributes of structure, function, and process can be described and analyzed as a system. Systems analysis is used in intelligence extensively for assessing foreign weapons systems performance. But it also is used to model political, economic, infrastructure, and social systems.

Modeling the structure of a system can rely on an inductive, a deductive, or an abductive approach.

Functional assessments typically require analysis of a system’s performance. Comparative performance analysis is widely used in such assessments. Simulations are used to prepare more sophisticated predictions of a system’s performance.

Process analysis is important for assessing organizations and systems in general. Organizational processes vary by organization type and across cultures. Process analysis also is used to determine systems development schedules and in looking at the life cycle of a program. Program staffing and the technologies involved are other factors that shape development schedules.

10 Network Modeling and Analysis

Future conflicts will be fought more by networks than by hierarchies, and whoever masters the network form will gain major advantages.

John Arquilla and David Ronfeldt, RAND Corporation

In intelligence, we’re concerned with many types of networks: communications, social, organizational, and financial networks, to name just a few. The basic principles of modeling and analysis apply across most types of networks.

Intelligence has the job of providing an advantage in conflicts by reducing uncertainty.

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It has been used for years in the U.S. intelligence community against targets such as terrorist groups and narcotics traffickers. The netwar model of multidimensional conflict between opposing networks is more and more applicable to all intelligence, and network analysis is our tool for examining the opposing network.

A few definitions:

 

  • Network—that group of elements forming a unified whole, also known as a system
  • Node—an element of a system that represents a person, place, or physical thing
  • Cell—a subordinate organization formed around a specific process, capability, or activity within a designated larger organization
  • Link—a behavioral, physical, or functional relationship between nodes

 

Link Models

Link modeling has a long history; the Los Angeles police department reportedly used it first in the 1940s as a tool for assessing organized crime networks. Its primary purpose was to display relationships among people or between people and events. Link models demonstrated their value in discerning the complex and typically circuitous ties between entities.

Some types of link diagrams are referred to as horizontal relevance trees. Their essence is the graphical representation of (a) nodes and their connection patterns or (b) entities and relationships.

Most humans simply cannot assimilate all the information collected on a topic over the course of several years. Yet a typical goal of intelligence synthesis and analysis is to develop precise, reliable, and valid inferences (hypotheses, estimations, and conclusions) from the available data for use in strategic decision-making or operational planning. Link models directly support such inferences.

The primary purpose of link modeling is to facilitate the organization and presentation of data to assist the analytic process. A major part of many assessments is the analysis of relationships among people, organizations, locations, and things. Once the relationships have been created in a database system, they can be displayed and analyzed quickly in a link analysis program.

To be useful in intelligence analysis, the links should not only identify relationships among data items but also show the nature of their ties. A subject-verb-object display has been used in the intelligence community for several decades to show the nature of such ties, and it is sometimes used in link displays.

Quantitative and temporal (date stamping) relationships have also been used when the display software has a filtering capability. Filters allow the user to focus on connections of interest and can simplify by several orders of magnitude the data shown in a link display.
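
A subject-verb-object link store with date stamping can be as simple as a list of tuples plus a filter. The records below are invented; the point is that relationship type and time can be used to narrow what a link display shows.

```python
from datetime import date

# Links stored as subject-verb-object records with a date stamp (all hypothetical).
links = [
    ("Person_A",  "met with",       "Person_B",  date(2020, 3, 14)),
    ("Person_A",  "wired funds to", "Company_X", date(2020, 5, 2)),
    ("Company_X", "ships to",       "Port_Y",    date(2021, 1, 20)),
]

def filter_links(verb=None, since=None):
    """Return only the links matching the requested relationship type and time window."""
    return [l for l in links
            if (verb is None or l[1] == verb)
            and (since is None or l[3] >= since)]

for subj, verb, obj, when in filter_links(since=date(2020, 4, 1)):
    print(f"{when}: {subj} --{verb}--> {obj}")
```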

Link modeling has been replaced almost completely by network modeling, discussed next, because network modeling offers a number of advantages in dealing with complex networks.

Network Models

Most modeling and analysis in intelligence today focuses on networks.

Some Network Types

A target network can include friendly or allied entities. It can include neutrals that your customer wishes to influence—either to become an ally or to remain neutral.

Social Networks

When intelligence analysts talk about network analysis, they often mean social network analysis (SNA). SNA involves identifying and assessing the relationships among people and groups—the nodes of the network. The links show relationships or transactions between nodes. So a social network model provides a visual display of relationships among people, and SNA provides a visual or mathematical analysis of the relationships. SNA is used to identify key people in an organization or social network and to model the flow of information within the network.

Organizational Networks

Management consultants often use SNA methodology with their business clients, referring to it as organizational network analysis. It is a method for looking at communication and social networks within a formal organization. Organizational network modeling is used to create statistical and graphical models of the people, tasks, groups, knowledge, and resources of organizations.

Commercial Networks

In competitive intelligence, network analysis tends to focus on networks where the nodes are organizations.

As Babson College professor and business analyst Liam Fahey noted, competition in many industries is now as much competition between networked enterprises as between individual firms.

Fahey has described several such networks and defined five principal types:

  • Vertical networks. Networks organized across the value chain; for example, 3M Corporation goes from mining raw materials to delivering finished products.
  • Technology networks. Alliances with technology sources that allow a firm to maintain technological superiority, such as the CISCO Systems network.
  • Development networks. Alliances focused on developing new products or processes, such as the multimedia entertainment venture DreamWorks SKG.
  • Ownership networks. Networks in which a dominant firm owns part or all of its suppliers, as do the Japanese keiretsu.
  • Political networks. Those focused on political or regulatory gains for their members, for example, the National Association of Manufacturers.

Hybrids of the five are possible, and in some cultures such as in the Middle East and Far East, families can be the basis for a type of hybrid business network.

 

Financial Networks

Financial networks tend to feature links among organizations, though individuals can be important nodes, as in the Abacha family funds-laundering case. These networks focus on topics such as credit relationships, financial exposures between banks, liquidity flows in the interbank payment system, and funds-laundering transactions. The relationships among financial institutions, and the relationships of financial institutions with other organizations and individuals, are best captured and analyzed with network modeling.

Global financial markets are interconnected and therefore amenable to large-scale modeling. Analysis of financial system networks helps economists to understand systemic risk and is key to preventing future financial crises.

Threat Networks

Military and law enforcement organizations define a specific type of network, called a threat network. These are networks that are opposed to friendly networks.

Such networks have been defined as being “comprised of people, processes, places, and material—components that are identifiable, targetable, and exploitable.”

A premise of threat network modeling is that all such networks have vulnerabilities that can be exploited. Intelligence must provide an understanding of how the network operates so that customers can identify actions to exploit the vulnerabilities.

Threat networks, no matter their type, can access political, military, economic, social, infrastructure, and information resources. They may connect to social structures in multiple ways (kinship, religion, former association, and history)—providing them with resources and support. They may make use of the global information networks, especially social media, to obtain recruits and funding and to conduct information operations to gain recognition and international support.

Other Network Views

Target networks can be a composite of the types described so far. That is, they can have social, organizational, commercial, and financial elements, and they can be threat networks. But target networks can be labeled another way. They generally take one of the following relationship forms:

  • Functional networks. These are formed for a specific purpose. Individuals and organizations in this network come together to undertake activities based primarily on the skills, expertise, or particular capabilities they offer. Commercial networks, crime syndicates, and insurgent groups all fall under this label.
  • Family and cultural networks. Some members or associates have familial bonds that may span generations. Or the network shares bonds due to a shared culture, language, religion, ideology, country of origin, and/or sense of identity. Friendship networks fall into this category as do proximity networks—where the network has bonds due to geographic or proximity ties (such as time spent together in correctional institutions).
  • Virtual networks. This is a relatively new phenomenon. In these networks, participants seldom (possibly never) physically meet, but work together through the Internet or some other means of communication. Networks involved in online fraud, theft, or funds laundering are usually virtual networks. Social media often are used to operate virtual networks.

Modeling the Network

Target networks can be modeled manually, or by using computer algorithms to automate the process. Using open-source and classified HUMINT or COMINT, an analyst typically goes through the following steps in manually creating a network model:

  • Understand the environment.

You should start by understanding the setting in which the network operates. That may require looking at all six of the PMESII factors that constitute the environment, and almost certainly at more than one of these factors. This approach applies to most networks of intelligence interest, again recognizing that “military” refers to that part of the network that applies force (usually physical force) to serve network interests. Street gangs and narcotics traffickers, for example, typically have enforcement arms.

  • Select or create a network template.

Pattern analysis, link analysis, and social network analysis are the foundational analytic methods that enable intelligence analysts to begin templating the target network. To begin with, are the networks centralized or decentralized? Are they regional or transnational? Are they virtual, familial, or functional? Are they a combination? This information provides a rough idea of their structure, their adaptability, and their resistance to disruption.

  • Populate the network.

If you don’t have a good idea what the network template looks like, you can apply a technique that is sometimes called “snowballing.” You begin with a few key members of the target network. Then add nodes and linkages based on the information these key members provide about others. Over time, COMINT and other collection sources (open source, HUMINT) allow the network to be fleshed out. You identify the nodes, name them, and determine the linkages among them. You also typically need to determine the nature of the link. For example, is it a familial link, a transactional link, or a hostile link?
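
The snowballing step can be sketched as a breadth-first expansion from a few seed members, adding nodes and links as reporting names new contacts. The reporting dictionary, names, and hop limit below are hypothetical.

```python
from collections import deque

# Hypothetical reporting: for each known member, the contacts named in
# collected reporting (HUMINT, COMINT, open source).
REPORTED_CONTACTS = {
    "seed_1": ["facilitator_A", "courier_B"],
    "facilitator_A": ["financier_C", "courier_B"],
    "courier_B": ["cell_leader_D"],
    "cell_leader_D": [],
    "financier_C": [],
}

def snowball(seeds, max_hops=2):
    """Grow a node/link list outward from a few seed members."""
    nodes, edges = set(seeds), set()
    frontier = deque((s, 0) for s in seeds)
    while frontier:
        person, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for contact in REPORTED_CONTACTS.get(person, []):
            edges.add((person, contact))
            if contact not in nodes:
                nodes.add(contact)
                frontier.append((contact, hops + 1))
    return nodes, edges

nodes, edges = snowball(["seed_1"])
print("Nodes:", sorted(nodes))
print("Links:", sorted(edges))
```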

Computer-Assisted and Automated Modeling

Although manual modeling is still used, commercial network tools such as Analyst’s Notebook and Palantir can now help. One option for using these tools is to enter the data manually but to rely on the tool to create and manipulate the network model electronically.

Analyzing the Network

Analyzing a network involves answering the classic questions—who-what-where-when-how-why—and placing the answers in a format that the customer can understand and act upon, what is known as “actionable intelligence.” Analysis of the network pattern can help identify the what, when, and where. Social network analysis typically identifies who. And nodal analysis can tell how and why.

Nodal Analysis

As noted throughout this book, nodes in a target network can include persons, places, objects, and organizations (which also could be treated as separate networks). Where the node is an organization, it may be appropriate to assess the role of the organization in the larger network—that is, to simply treat it as a node.

The usual purpose of nodal analysis is to identify the most critical nodes in a target network. This requires analyzing the properties of individual nodes, and how they affect or are affected by other nodes in the network. So the analyst must understand the behavior of many nodes and, where the nodes are organizations, the activities taking place within the nodes.

Social Network Analysis

Social network analysis, in which all of the network nodes are persons or groups, is widely used in the social sciences, especially in studies of organizational behavior. In intelligence, as noted earlier, we more frequently use target network analysis, in which almost anything can be a node.

 

To understand a social network, we need a full description of the social relationships in the network. Ideally, we would know about every relationship between each pair of actors in the network.

In summary, SNA is a tool for understanding the internal dynamics of a target network and how best to attack, exploit, or influence it. Instead of assuming that taking out the leader will disrupt the network, SNA helps to identify the distribution of power in the network and the influential nodes—those that can be removed or influenced to achieve a desired result. SNA also is used to describe how a network behaves and how its connectivity shapes its behavior.

Several analytic concepts that come along with SNA also apply to target network analysis. The most useful concepts are centrality and equivalence. These are used today in the analysis of intelligence problems related to terrorism, arms networks, and illegal narcotics organizations.

The extent to which an actor can reach others in the network is a major factor in determining the power that the actor wields. Three basic sources of this advantage are high degree, high closeness, and high betweenness.

Actors who have many network ties have greater opportunities because they have choices. Their rich set of choices makes them less dependent than those with fewer ties and hence more powerful.

The network centrality of the individuals removed will determine the extent to which the removal impedes continued operation of the activity. Thus centrality is an important ingredient (but by no means the only one) in considering the identification of network vulnerabilities.
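
Degree, closeness, and betweenness centrality are all standard computations in network toolkits. The sketch below uses networkx (an assumption about tooling) on an invented network of names to surface the most central actors.

```python
import networkx as nx

# Illustrative network of couriers and facilitators (all names are hypothetical).
G = nx.Graph()
G.add_edges_from([
    ("Asif", "Bashir"), ("Asif", "Carlos"), ("Asif", "Dmitri"),
    ("Bashir", "Carlos"), ("Dmitri", "Elena"), ("Elena", "Farid"),
])

# Identify the actor with the highest score under each centrality measure.
for name, scores in [("degree", nx.degree_centrality(G)),
                     ("closeness", nx.closeness_centrality(G)),
                     ("betweenness", nx.betweenness_centrality(G))]:
    top = max(scores, key=scores.get)
    print(f"Highest {name} centrality: {top} ({scores[top]:.2f})")
```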

A second analytic concept that accompanies SNA is equivalence. The disruptive effectiveness of removing one individual or a set of individuals from a network (such as by making an arrest or hiring a key executive away from a business competitor) depends not only on the individual’s centrality but also on some notion of his uniqueness, that is, on whether or not he has equivalents.

The notion of equivalence is useful for strategic targeting and is tied closely to the concept of centrality. If nodes in the social network have a unique role (no equivalents), they will be harder to replace.

Network analysis literature offers a variety of concepts of equivalence. Three in particular are quite distinct and, among them, seem to capture most of the important ideas on the subject. The three concepts are substitutability, stochastic equivalence, and role equivalence. Each can be important in specific analysis and targeting applications.

Substitutability is easiest to understand; it can best be described as interchangeability. Two objects or persons in a category are substitutable if they have identical relationships with every other object in the category.

Individuals who have no network substitutes usually make the most worthwhile targets for removal.

Substitutability also has relevance to detecting the use of aliases. The use of an alias by a criminal will often show up in a network analysis as the presence of two or more substitutable individuals (who are in reality the same person with an alias). The interchangeability of the nodes actually indicates the interchangeability of the names.

Stochastic equivalence is a slightly more sophisticated idea. Two network nodes are stochastically equivalent if the probabilities of their being linked to any other particular node are the same. Narcotics dealers working for one distribution organization could be seen as stochastically equivalent if they, as a group, all knew roughly 70 percent of the group, did not mix with dealers from any other organizations, and all received their narcotics from one source.

Role equivalence means that two individuals play the same role in different organizations, even if they have no common acquaintances at all. Substitutability implies role equivalence, but not the converse.

Stochastic equivalence and role equivalence are useful in creating generic models of target organizations and in targeting by analogy—for example, the explosives expert is analogous to the biological expert in planning collection, analyzing terrorist groups, or attacking them.
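
Substitutability can be checked directly from the network structure: two nodes are substitutable if they have identical relationships with every other node. The sketch below does that check on an invented network using networkx (assumed tooling), where two node labels turn out to be substitutable, the alias pattern described above.

```python
import networkx as nx
from itertools import combinations

def substitutable(G, a, b):
    """Two nodes are substitutable if their ties to every other node are identical."""
    return set(G.neighbors(a)) - {a, b} == set(G.neighbors(b)) - {a, b}

# Hypothetical example: "K. Omar" and "Khalid O." share identical ties,
# which (per the text) may indicate an alias rather than two people.
G = nx.Graph()
G.add_edges_from([
    ("K. Omar", "courier_1"), ("K. Omar", "financier"),
    ("Khalid O.", "courier_1"), ("Khalid O.", "financier"),
    ("courier_1", "financier"), ("financier", "banker"),
])

for a, b in combinations(G.nodes, 2):
    if substitutable(G, a, b):
        print(f"{a!r} and {b!r} are substitutable -- possible alias or redundant role")
```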

Organizational Network Analysis

Organizational network analysis is a well-developed discipline for analyzing organizational structure. The traditional hierarchical description of an organizational structure does not sufficiently portray entities and their relationships.

The typical organization also is a system that can be viewed (and analyzed) from the same three perspectives previously discussed: structure, function, and process.

Structure here refers to the components of the organization, especially people and their relationships; this chapter deals with that.

Function refers to the outcome or results produced and tends to focus on decision making.

Process describes the sequences of activities and the expertise needed to produce the results or outcome. Fahey, in his assessment of organizational infrastructure, described four perspectives: structure, systems, people, and decision-making processes. Whatever their names, all three (or four, following Fahey’s example) perspectives must be considered.

Depending on the goal, an analyst may need to assess the network’s mission, its power distribution, its human resources, and its decision-making processes. The analyst might ask questions such as, Where is control exercised? Which elements provide support services? Are their roles changing? Network analysis tools are valuable for this sort of analysis.

Threat Network Analysis

We want to develop a detailed understanding of how a threat network functions by identifying its constituent elements, learning how its internal processes work to carry out operations, and seeing how all of the network components interact.

Assessing threat networks requires, among other things, looking at the following:

  • Command-and-control structure. Threat networks can be decentralized, or flat. They can be centralized, or hierarchical. The structures will vary, but they are all designed to facilitate the attainment of the network’s goals and continued survival.
  • Closeness. This is a measure of the members’ shared objectives, kinship, ideology, religion, and personal relations that bond the network and facilitate recruiting new members.
  • Expertise. This includes the knowledge, skills, and abilities of group leaders and members.
  • Resources. These include weapons, money, social connections, and public support.
  • Adaptability. This is a measure of the network’s ability to learn and adjust behaviors and modify operations in response to opposing actions.
  • Sanctuary. These are locations where the network can safely conduct planning, training, and resupply.

Primary among a threat network’s strengths is the ability to adapt over time, specifically to blend into the local population and to quickly replace losses of key personnel and recruit new members. The networks also tend to be difficult to penetrate because of their insular nature and the bonds that hold them together. They typically are organized into cells in a loose network where the loss of one cell does not seriously degrade the entire network.

To carry out the network’s functions, its members must engage in activities that expose parts of the network to countermeasures.

They must communicate between cells and with their leadership, exposing the network to discovery and mapping of links.

Target Network Analysis

As we have said, in intelligence work we usually apply an extension of social network analysis that retains its basic concepts. So the techniques described earlier for SNA work for almost all target networks. But whereas all of the entities in SNA are people, again, in target network analysis they can be anything.

Automating the Analysis

Target network analysis has become one of the principal tools for dealing with complex systems, thanks to new, computer-based analytic methods. One tool that has been useful in assessing threat networks is the Organization Risk Analyzer (called *ORA) developed by the Computational Analysis of Social and Organizational Systems (CASOS) at Carnegie Mellon University. *ORA is able to group nodes and identify patterns of analytic significance. It has been used to identify key players, groups, and vulnerabilities, and to model network changes over space and time.

Intelligence analysis relies heavily on graphical techniques to represent the descriptions of target networks compactly. The underlying mathematical techniques allow us to use computers to store and manipulate the information quickly and more accurately than we could by hand.

Summary

One of the most powerful tools in the analyst’s toolkit is network modeling and analysis. It is widely used in analysis disciplines. It is derived from link modeling, which organizes and presents raw intelligence in a visual form such that relationships among nodes (which can be people, places, things, organizations, or events) can be analyzed to extract finished intelligence.

We prefer to have network models created and updated automatically from raw intelligence data by software algorithms. Although some software tools exist for doing that, the analyst still must evaluate the sources and validate the results.

 

 

11 Geospatial and Temporal Modeling and Analysis

GEOINT is the professional practice of integrating and interpreting all forms of geospatial data to create historical and anticipatory intelligence products used for planning or that answer questions posed by decision-makers.

This definition incorporates the key ideas of an intelligence mission: all-source analysis and modeling in both space and time (from “historical and anticipatory”). These models are frequently used in analysis; insights about networks are often obtained by examining them in spatial and temporal ways.

  • During World War II, although the Germans maintained censorship as effectively as anyone else, they did publish their freight tariffs on all goods, including petroleum products. Working from those tariffs, a young U.S. Office of Strategic Services analyst, Walter Levy, conducted geospatial modeling based on the German railroad network to pinpoint the exact location of the refineries, which were subsequently targeted by allied bombers.

Static Geospatial Models

In the most general case, geospatial modeling is done in both space and time. But sometimes only a snapshot in time is needed.

Human Terrain Modeling

U.S. ground forces in Iraq and Afghanistan in the past few years have rediscovered and refined a type of static geospatial model that was used in the Vietnam War, though its use dates far back in history. Military forces now generally consider what they call “human terrain mapping” as an essential part of planning and conducting operations in populated areas.

In combating an insurgency, military forces have to develop a detailed model of the local situation that includes political, economic, and sociological information as well as military force information.

It involves acquiring the following details about each village and town:

  • The boundaries of each tribal area (with specific attention to where they adjoin or overlap)
  • Location and contact information for each sheik or village mukhtar and for government officials
  • Locations of mosques, schools, and markets
  • Patterns of activity such as movement into and out of the area; waking, sleeping, and shopping habits
  • Nearest locations and checkpoints of security forces
  • Economic driving forces including occupation and livelihood of inhabitants; employment and unemployment levels
  • Anti-coalition presence and activities
  • Access to essential services such as fuel, water, emergency care, and fire response
  • Particular local population concerns and issues

Human terrain mapping, or more correctly human terrain modeling, is an old intelligence technique.

Though Moses’s HUMINT mission failed because of poor analysis by the spies, it remains an excellent example of specific collection tasking as well as of the history of human terrain mapping.

1919 Paris Peace Conference

In 1917 President Woodrow Wilson established a study group to prepare materials for peace negotiations that would conclude World War I. He eventually tapped geographer Isaiah Bowman to head a group of 150 academics to prepare the study. It covered the languages, ethnicities, resources, and historical boundaries of Europe. With support from the American Geographical Society, Bowman directed the production of over three hundred maps per week during January 1919.

The Tools of Human Terrain Modeling

Today, human terrain modeling is used extensively to support military operations in Syria, Iraq, and Afghanistan. Many tools have been developed to create and analyze such models. The ability to do human terrain mapping and other types of geospatial modeling has been greatly expanded and popularized by Google Earth and by Microsoft’s Virtual Earth. These geospatial modeling tools provide multiple layers of information.

This unclassified online material has a number of intelligence applications. For intelligence analysts, it permits planning HUMINT and COMINT operations. For military forces, it supports precise targeting. For terrorists, it facilitates planning of attacks.

Temporal Models

Pure temporal models are used less frequently than the dynamic geospatial models discussed next, because we typically want to observe activity in both space and time—sometimes over very short times. Timing shapes the consequences of planned events.

There are a number of different temporal model types; this chapter touches on two of them—timelines and pattern-of-life modeling and analysis.

Timelines

An opponent’s strategy often becomes apparent only when seemingly disparate events are placed on a timeline.

Event-time patterns tell analysts a great deal; they allow analysts to infer relationships among events and to examine trends. Activity patterns of a target network, for example, are useful in determining the best time to collect intelligence. An example is a plot of total telephone use over twenty-four hours—the plot peaks about 11 a.m., which is the most likely time for a person to be on the telephone.
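
The telephone-use example amounts to binning time-stamped events by hour and looking for the peak. The sketch below does that with a handful of invented timestamps; only the pattern, not the values, is the point.

```python
from collections import Counter
from datetime import datetime

# Hypothetical intercept timestamps (invented for illustration).
events = [
    "2023-04-01 09:12", "2023-04-01 10:45", "2023-04-01 11:02",
    "2023-04-02 11:20", "2023-04-02 11:47", "2023-04-02 14:30",
    "2023-04-03 10:58", "2023-04-03 11:05", "2023-04-03 23:40",
]

by_hour = Counter(datetime.strptime(e, "%Y-%m-%d %H:%M").hour for e in events)
for hour in range(24):
    if by_hour[hour]:
        print(f"{hour:02d}:00  {'#' * by_hour[hour]}")

peak = max(by_hour, key=by_hour.get)
print(f"Peak activity around {peak:02d}:00 -- a candidate window for collection.")
```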

Pattern-of-Life Modeling and Analysis

Pattern-of-life (POL) analysis is a method of modeling and understanding the behavior of a single person or group by establishing a recurrent pattern of actions over time in a given situation. It has similarities to the concept of activity-based intelligence.

 

Dynamic Geospatial Models

A dynamic variant of the geospatial model is the space-time model. Many activities, such as the movement of a satellite, a vehicle, a ship, or an aircraft, can best be shown spatially—as can population movements. A combination of geographic and time synthesis and analysis can show movement patterns, such as those of people or of ships at sea.

Dynamic geospatial modeling and analysis has been described using a number of terms. Three that are commonly used in intelligence are described in this section: movement intelligence, activity-based intelligence, and geographic profiling. Though they are similar, each has a somewhat different meaning. Dynamic modeling is also applied in understanding intelligence enigmas.

Movement Intelligence

Intelligence practitioners sometimes describe space-time models as movement intelligence, or “MOVINT,” as if it were a collection “INT” instead of a target model. The name “movement intelligence” for a specialized intelligence product dates roughly to the wide use of two sensors for area surveillance.

One was the moving target indicator (MTI) capability for synthetic aperture radars. The other was the deployment of video cameras on intelligence collection platforms. MOVINT has been defined as “an intelligence gathering method by which images (IMINT), non-imaging products (MASINT), and signals (SIGINT) produce a movement history of objects of interest.”

Activity-Based Intelligence

Activity-based intelligence, or ABI, has been defined as “a discipline of intelligence where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, population, or area of interest.”

ABI is a form of situational awareness that focuses on interactions over time. It has three characteristics:

  • Raw intelligence information is constantly collected on activities in a given region and stored in a database for later metadata searches.
  • It employs the concept of “sequence neutrality,” meaning that material is collected without advance knowledge of whether it will be useful for any intelligence purpose.
  • It also relies on “data neutrality,” meaning that any source of intelligence may contribute; in fact, open source may be the most valuable.

ABI therefore is a variant of the target-centric approach, focused on the activity of a target (person, object, or group) within a specified target area. So it includes both spatial and temporal dimensions. At a higher level of complexity, it can include network relationships as well.

Though the term ABI is of recent origin and is tied to the development of surveillance methods for collecting intelligence, the concept of solving intelligence problems by monitoring activity over time has been applied for decades. It has been the primary tool for dealing with geographic profiling and intelligence enigmas.

Geographic Profiling

Geographic profiling is a term used in law enforcement for geospatial modeling, specifically a space-time model, that supports serial violent crime or sexual crime investigations. Such crimes, when committed by strangers, are difficult to solve. Their investigation can produce hundreds of tips and suspects, resulting in the problem of information overload.

Intelligence Enigmas

Geospatial modeling and analysis frequently must deal with unidentified facilities, objects, and activities. These are often referred to by the term intelligence enigmas. For such targets, a single image—a snapshot in time—is insufficient.

Summary

One of the most powerful combination models is the geospatial model, which combines all sources of intelligence into a visual picture (often on a map) of a situation. One of the oldest of analytic products, geospatial modeling today is the product of all-source analysis that can incorporate OSINT, IMINT, HUMINT, COMINT, and advanced technical collection methods.

Many GEOINT models are dynamic; they show temporal changes. This combination of geospatial and temporal models is perhaps the single most important trend in GEOINT. Dynamic GEOINT models are used to observe how a situation develops over time and to extrapolate future developments.

 

Part II

The Estimative Process

12 Predictive Analysis

“Your problem is that you are not able to see things before they happen.”

Wotan to Fricka, in Wagner’s opera Die Walküre

Describing a past event is not intelligence analysis; it is reciting history. The highest form of intelligence analysis requires structured thinking that results in an estimate of what is likely to happen.

True intelligence analysis is always predictive.

 

The value of a model of possible futures is in the insights that it produces. Those insights prepare customers to deal with the future as it unfolds. The analyst’s contribution lies in the assessment of the forces that will shape future events and the state of the target model. If an analyst accurately assesses the forces, she has served the intelligence customer well, even if the prediction derived from that assessment turns out to be wrong.

Policymaking customers tend to be skeptical of predictive analysis unless they do it themselves. They believe that their own opinions about the future are at least as good as those of intelligence analysts. So when an analyst offers an estimate without a compelling supporting argument, he or she should not be surprised if the policymaker ignores it.

By contrast, policymakers and executives will accept and make use of predictive analysis if it is well reasoned, and if they can follow the analyst’s logical development. This implies that we apply a formal methodology, one that the customer can understand, so that he or she can see the basis for the conclusions drawn.

Former national security adviser Brent Scowcroft observed, “What intelligence estimates do for the policymaker is to remind him what forces are at work, what the trends are, and what are some of the possibilities that he has to consider.” Any intelligence assessment that does these things will be readily accepted.

Introduction to Predictive Analysis

Intelligence can usually deal with near-term developments. Extrapolation—the act of making predictions based solely on past observations—serves us reasonably well in the short term for situations that involve established trends and normal individual or organizational behaviors.

Adding to the difficulty, intelligence estimates can also affect the future that they predict. Often, the estimates are acted on by policymakers—sometimes on both sides.

The first step in making any estimate is to consider the phenomena that are involved, in order to determine whether prediction is even possible.

Convergent and Divergent Phenomena

In examining trends and possible future events, we distinguish two classes of phenomena: convergent phenomena make prediction possible; divergent phenomena frustrate it.

A basic question to ask at the outset of any predictive attempt is, Does the principle of causation apply? That is, are the phenomena we are to examine and prepare estimates about governed by the laws of cause and effect?

A good example of a divergent phenomenon in intelligence is the coup d’état. Policymakers often complain that their intelligence organizations have failed to warn of coups. But a coup event is conspiratorial in nature, limited to a handful of people, and dependent on the preservation of secrecy for its success.

If a foreign intelligence service knows of the event, then secrecy has been compromised and the coup is almost certain to fail—the country’s internal security services will probably forestall it. The conditions that encourage a coup attempt can be assessed and the coup likelihood estimated by using probability theory, but the timing and likelihood of success are not “predictable.”

The Estimative Approach

The target-centric approach to prediction follows an analytic pattern long established in the sciences, in organizational planning, and in systems synthesis and analysis.

 

The synthesis and analysis process discussed in this chapter and the next is derived from an estimative approach that has been formalized in several professional disciplines. In management theory, the approach has several names, one of which is the Kepner-Tregoe Rational Management Process. In engineering, the formalization is called the Kalman Filter. In the social sciences, it is called the Box-Jenkins method. Although there are differences among them, all are techniques for combining complex data to create estimates. They all require combining data to estimate an entity’s present state and evaluating the forces acting on the entity to predict its future state.

This concept—to identify the forces acting on an entity, to identify likely future forces, and to predict the likely changes in old and new forces over time, along with some indicator of confidence in these judgments—is the key to successful estimation. It takes into account redundant and conflicting data as well as the analyst’s confidence in these data.

The key is to start from the present target model (and preferably, also with a past target model) and move to one of the future models, using an analysis of the forces involved as a basis. Other texts on estimative analysis describe these forces as issues, trends, factors, or drivers. All those terms have the same meaning: They are the entities that shape the future.

The methodology relies on three predictive mechanisms: extrapolation, projection, and forecasting. Those components and the general approach are defined here; later in the chapter, we delve deeper into “how-to” details of each mechanism.

An extrapolation assumes that these forces do not change between the present and future states, a projection assumes they do change, and a forecast assumes they change and that new forces are added.

The analysis follows these steps:

  1. Determine at least one past state and the present state of the entity. In intelligence, this entity is the target model, and it can be a model of almost anything—a terrorist organization, a government, a clandestine trade network, an industry, a technology, or a ballistic missile.
  2. Determine the forces that acted on the entity to bring it to its present state.

These same forces, acting unchanged, would result in the future state shown as an extrapolation (Scenario 1).

  3. To make a projection, estimate the changes in existing forces that are likely to occur. In the figure, a decrease in one of the existing forces (Force 1) is shown as causing a projected future state that is different from the extrapolation (Scenario 2).
  4. To make a forecast, start from either the extrapolation or the projection and then identify the new forces that may act on the entity, and incorporate their effect. In the figure, one new force is shown as coming to bear, resulting in a forecast future state that differs from both the extrapolated and the projected future states (Scenario 3).
  5. Determine the likely future state of the entity based on an assessment of the forces. Strong and certain forces are weighed most heavily in this prediction. Weak forces, and those in which the analyst lacks confidence (high uncertainty about the nature or effect of the force), are weighed least.

The process is iterative.

In this figure, we are concerned with a target (technology, system, person, organization, country, situation, industry, or some combination) that changes over time. We want to describe or characterize the entity at some future point.

The basic analytic paradigm is to create a model of the past and present state of the target, followed by alternative models of its possible future states, usually created in scenario form.
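As a deliberately simplified illustration of the paradigm, the sketch below applies hypothetical force values to a single numeric state variable and produces the three scenario types. The additive model and all numbers are invented; real force synthesis is usually qualitative.

```python
# Toy illustration of extrapolation vs. projection vs. forecast.
# The numbers and the additive "state + net force" model are hypothetical.

present_state = 100.0                                   # an index describing the target today
existing_forces = {"force_1": +4.0, "force_2": -1.5}    # per-period net effects

def future_state(state, forces, periods=3):
    """Apply the net effect of the listed forces over the given number of periods."""
    return state + sum(forces.values()) * periods

# Scenario 1: extrapolation -- existing forces continue unchanged.
extrapolation = future_state(present_state, existing_forces)

# Scenario 2: projection -- an existing force weakens (force_1 drops).
projected_forces = dict(existing_forces, force_1=+1.0)
projection = future_state(present_state, projected_forces)

# Scenario 3: forecast -- a new force comes to bear on top of the projection.
forecast_forces = dict(projected_forces, new_force=-2.0)
forecast = future_state(present_state, forecast_forces)

print(f"Extrapolation: {extrapolation:.1f}  Projection: {projection:.1f}  Forecast: {forecast:.1f}")
```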

A CIA assessment of Mikhail Gorbachev’s economic reforms in 1985–1987 correctly estimated that his proposed reforms risked “confusion, economic disruption, and worker discontent” that could embolden potential rivals to his power.17 This projection was based on assessing the changing forces in Soviet society along with the inertial forces that would resist change.

The process we’ve illustrated in these examples has many names—force field analysis and system dynamics are two.

For forecasting, the analyst must identify new forces that are likely to come into play. Most of the chapters that follow focus on identifying and measuring these forces.

An analyst can (wrongly) shape the outcome by concentrating on some forces and ignoring or downplaying the significance of others.

Force Analysis According to Sun Tzu

Factor or force analysis is an ancient predictive technique. Successful generals have practiced it in warfare for thousands of years, and one of its earliest known proponents was Sun Tzu. He described the art of war as being controlled by five factors, or forces, all of which must be taken into account in predicting the outcome of an engagement. He called the five factors Moral Law, Heaven, Earth, the Commander, and Method and Discipline. In modern terms, the five would be called social, environmental, geospatial, leadership, and organizational factors.

The simplest approach to both projection and forecasting is to do it qualitatively. That is, an analyst who is an expert in the subject area begins the process by answering the following questions:

  1. What forces have affected this entity (organization, situation, industry, technical area) over the past several years?19
  2. Which five or six forces had more impact than others?
  3. What forces are expected to affect this entity over the next several years?
  4. Which five or six forces are likely to have more impact than others?
  5. What are the fundamental differences between the answers to questions two and four?
  6. What are the implications of these differences for the entity being analyzed?

The answers to those questions shape the changes in direction of the extrapolation… At more sophisticated levels of qualitative synthesis and analysis, the analyst might examine adaptive forces (feedback forces) and their changes over time.

High-Impact/Low-Probability Analysis

Projections and forecasts focus on the most likely outcomes. But customers also need to be aware of the unlikely outcomes that could have severe adverse effects on their interests.

 

The CIA’s tradecraft manual describes the analytic process as follows:

  • Define the high-impact outcome clearly. This definition will justify examining what most analysts believe to be a very unlikely development.
  • Devise one or more plausible explanations for or “pathways” to the low-probability outcome. This should be as precise as possible, as it can help identify possible indicators for later monitoring.
  • Insert possible triggers or changes in momentum if appropriate. These can be natural disasters, sudden health problems of key leaders, or new economic or political shocks that might have occurred historically or in other parts of the world.
  • Brainstorm with analysts having a broad set of experiences to aid the development of plausible but unpredictable triggers of sudden change.
  • Identify for each pathway a set of indicators or “observables” that would help you anticipate that events were beginning to play out this way.
  • Identify factors that would deflect a bad outcome or encourage a positive outcome.

The product of high-impact/low-probability analysis is a type of scenario called a demonstration scenario…
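One way to keep a demonstration scenario organized for later monitoring is to record its pathway, triggers, indicators, and deflecting factors in a simple structure. The sketch below is a minimal illustration; the scenario content is entirely hypothetical.

```python
# A simple record structure for a high-impact/low-probability "demonstration scenario".
# The outcome, pathway, triggers, indicators, and deflectors are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class DemonstrationScenario:
    outcome: str                                      # the high-impact, low-probability outcome
    pathway: str                                      # a plausible route to that outcome
    triggers: list = field(default_factory=list)      # events that could change momentum
    indicators: list = field(default_factory=list)    # observables to monitor
    deflectors: list = field(default_factory=list)    # factors that could avert the outcome

scenario = DemonstrationScenario(
    outcome="Sudden collapse of the governing coalition",
    pathway="Economic shock -> mass protests -> defection of security services",
    triggers=["sharp currency devaluation", "death of a key leader"],
    indicators=["emergency cabinet meetings", "troop redeployments to the capital"],
    deflectors=["external financial assistance", "negotiated power-sharing deal"],
)
print(scenario.outcome, "| indicators to monitor:", ", ".join(scenario.indicators))
```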

Two important types of bias can exist in predictive analysis: pattern, or confirmation, bias—looking for evidence that confirms rather than rejects a hypothesis; and heuristic bias—using inappropriate guidelines or rules to make predictions.

Two points are worth noting at the beginning of the discussion:

  • One must make careful use of the tools in synthesizing the model, as some will fail when applied to prediction. Expert opinion, for example, is often used in creating a target model; but experts’ biases, egos, and narrow focuses can interfere with their predictions. (A useful exercise for the skeptic is to look at trade press or technical journal predictions that were made more than ten years ago that turned out to be way off base. Stock market predictions and popular science magazine predictions of automobile designs are particularly entertaining.)
  • Time constraints work against the analyst’s ability to consistently employ the most elaborate predictive techniques. Veteran analysts tend to use analytic techniques that are relatively fast and intuitive. They can view scenario development, red teams (teams formed to take the opponent’s perspective in planning or assessments), competing hypotheses, and alternative analysis as being too time-consuming to use in ordinary circumstances. An analyst has to guard against using just extrapolation because it is the fastest and easiest to do. But it is possible to use shortcut versions of many predictive techniques and sometimes the situation calls for that. This chapter and the following one contain some examples of shortcuts.

Extrapolation

An extrapolation is a statement, based only on past observations, of what is expected to happen. Extrapolation is the most conservative method of prediction. In its simplest form, an extrapolation, using historical performance as the basis, extends a linear curve on a graph to show future direction.

Extrapolation also makes use of correlation and regression techniques. Correlation is a measure of the degree of association between two or more sets of data, or a measure of the degree to which two variables are related. Regression is a technique for predicting the value of some unknown variable based only on information about the current values of other variables. Regression makes use of both the degree of association among variables and the mathematical function that is determined to best describe the relationships among variables.

A correlation example: the more bureaucracy and red tape involved in doing business, the more corruption is likely in the country.
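A minimal sketch of linear trend extrapolation by least squares, assuming a short series of invented historical values:

```python
# Minimal least-squares trend extrapolation (no external libraries).
# The historical values are hypothetical.

years = [2019, 2020, 2021, 2022, 2023]
values = [42.0, 45.5, 47.9, 51.2, 53.8]   # e.g., observed annual production of some item

n = len(years)
mean_x, mean_y = sum(years) / n, sum(values) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values))
         / sum((x - mean_x) ** 2 for x in years))
intercept = mean_y - slope * mean_x

forecast_year = 2026
print(f"Trend: {slope:.2f} per year; extrapolated {forecast_year} value: "
      f"{slope * forecast_year + intercept:.1f}")
```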

Projection

Before moving on to projection and forecasting, let’s reinforce the differentiation from extrapolation. An extrapolation is a simple assertion about what a future scenario will look like. In contrast, a projection or a forecast is a probabilistic statement about some future scenario.

Projection is more reliable than extrapolation. It predicts a range of likely futures based on the assumption that forces that have operated in the past will change, whereas extrapolation assumes the forces do not change.

Projection makes use of two major analytic techniques. One technique, force analysis, was discussed earlier in this chapter. After a qualitative force analysis has been completed, the next technique is to apply probabilistic reasoning to it. Probabilistic reasoning is a systematic attempt to make subjective estimates of probabilities more explicit and consistent. It can be used at any of several levels of complexity (each successive level of sophistication adds new capability and completeness). But even the simplest level of generating alternatives, discussed next, helps to prevent premature closure and adds structure to complicated problems.

Generating Alternatives

The first step to probabilistic reasoning is no more complicated than stating formally that more than one outcome is possible. One can generate alternatives simply by listing all possible outcomes to the issue under consideration. Remember that the possible outcomes can be defined as alternative scenarios.

The mere act of generating a complete, detailed list often provides a useful perspective on a problem.

Influence Trees or Diagrams

A list of alternative outcomes is the first step. A simple projection might not go beyond this level. But for more rigorous analysis, the next step typically is to identify the things that influence the possible outcomes and indicate the interrelationship of these influences. This process is frequently done by using an influence tree.

Let’s assume that an analyst wants to assess the outcome of an ongoing African insurgency movement. There are three obvious possible outcomes: The insurgency will be crushed, the insurgency will succeed, or there will be a continuing stalemate. Other outcomes may be possible, but we can assume that they are so unlikely as not to be worth including. The three outcomes for the influence diagram are as follows:

  • Regime wins
  • Insurgency wins
  • Stalemate

The analyst now describes those forces that will influence the assessment of the relative likelihoods of each outcome. For instance, the insurgency’s success may depend on whether economic conditions improve, remain the same, or become worse during the next year. It also may depend on the success of a new government poverty relief program. The assumptions about these “driver” events are often described as linchpin premises in U.S. intelligence practice, and these assumptions need to be made explicit.

Having established the uncertain events that influence the outcome, the analyst proceeds to the first stage of an influence tree.

The thought process that is invoked when generating the list of influencing events and their outcomes can be useful in several ways. It helps identify and document factors that are relevant to judging whether an alternative outcome is likely to occur.

The audit trail is particularly useful in showing colleagues what the analyst’s thinking has been, especially if he desires help in upgrading the diagram with things that may have been overlooked. Software packages for creating influence trees allow the inclusion of notes that create an audit trail.

In the process of generating the alternative lists, the analyst must address the issue of whether the event (or outcome) being listed actually will make a difference in his assessment of the relative likelihood of the outcomes of any of the events being listed.

For instance, in the economics example, if the analyst knew that it would make no difference to the success of the insurgency whether economic conditions improved or remained the same, then there would be no need to differentiate these as two separate outcomes. The analyst should instead simplify the diagram.

The second question, having to do with additional influences not yet shown on the diagram, allows the analyst to extend this pictorial representation of influences to whatever level of detail is considered necessary. Note, however, that the analyst should avoid adding unneeded layers of detail.

Probabilistic reasoning is used to evaluate outcome scenarios.

This influence tree approach to evaluating possible outcomes is more convincing to customers than would be an unsupported analytic judgment about the prospects for the insurgency. Human beings tend to do poorly at such complex assessments when they are approached in a totally unaided, subjective manner; that is, by the analyst mentally combining the force assessments in an unstructured way.
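The insurgency example can be made numeric with a small calculation over the influence tree. The driver probabilities and conditional judgments below are invented for illustration; in practice they would come from the analyst’s force assessments.

```python
# Toy influence-tree calculation for the insurgency example.
# All probabilities below are hypothetical, for illustration only.
from itertools import product

p_econ = {"improves": 0.3, "same": 0.5, "worsens": 0.2}    # driver 1: economic conditions
p_program = {"succeeds": 0.4, "fails": 0.6}                # driver 2: poverty relief program

def p_outcome(outcome, econ, program):
    """P(outcome | economy, program): rough, invented conditional judgments."""
    favorable_to_regime = (econ == "improves") + (program == "succeeds")
    table = {2: {"regime wins": 0.60, "stalemate": 0.30, "insurgency wins": 0.10},
             1: {"regime wins": 0.35, "stalemate": 0.40, "insurgency wins": 0.25},
             0: {"regime wins": 0.15, "stalemate": 0.35, "insurgency wins": 0.50}}
    return table[favorable_to_regime][outcome]

for outcome in ("regime wins", "stalemate", "insurgency wins"):
    total = sum(p_econ[e] * p_program[g] * p_outcome(outcome, e, g)
                for e, g in product(p_econ, p_program))
    print(f"P({outcome}) = {total:.2f}")
```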

Influence Nets

Influence net modeling is an alternative to the influence tree.

To create an influence net, the analyst defines influence nodes, which depict events that are part of cause-effect relationships within the target model. The analyst also creates “influence links” between cause and effect that graphically illustrate the causal relation between the connected pair of events.

The influence can be either positive (supporting a given decision) or negative (decreasing the likelihood of the decision), as identified by the link “terminator.” The terminator is either an arrowhead (positive influence) or a filled circle (negative influence). The resulting graphical illustration is called the “influence net topology.”
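As a highly simplified sketch of an influence-net topology, the snippet below records positive and negative influence links feeding a single decision node and combines them with a plain weighted sum. Real influence-net tools use probabilistic propagation rather than this shortcut, and the node names, signs, and weights are invented.

```python
# Very simplified influence-net sketch: positive and negative influence links
# feeding one decision node, combined with a weighted sum for illustration only.
# Node names, signs, and weights are hypothetical.

links = [
    # (cause node, influence sign, weight)
    ("sanctions relief offered", +1, 0.4),
    ("hardliners gain cabinet seats", -1, 0.6),
    ("economy deteriorates", +1, 0.3),
]

net_influence = sum(sign * weight for _, sign, weight in links)
leaning = "toward" if net_influence > 0 else "against"
print(f"Net influence {net_influence:+.2f}: evidence leans {leaning} the decision "
      f"'negotiate a settlement'")
```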

 

Making Probability Estimates

Probabilistic projection is used to predict the probability of future events for some time-dependent random process… A number of these probabilistic techniques are used in industry for projection.

Two techniques that we use in intelligence analysis are as follows:

  • Point and interval estimation. This method attempts to describe the probability of outcomes for a single event. An example would be a country’s economic growth rate, and the event of concern might be an economic depression (the point where the growth rate drops below a certain level).
  • Monte Carlo simulation. This method simulates all or part of a process by running a sequence of events repeatedly, with random combinations of values, until sufficient statistical material is accumulated to determine the probability distribution of the outcome.
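A minimal sketch that combines the two ideas: simulate many random growth-rate paths and estimate the probability of the point event (growth dropping below a depression threshold). The growth model and every parameter are invented.

```python
# Monte Carlo sketch: probability that economic growth drops below a threshold.
# The random-walk growth model and its parameters are hypothetical.
import random

random.seed(1)

def simulate_growth_path(years=5, start=2.5, drift=-0.1, volatility=1.2):
    """Generate a random walk of annual growth rates (percent per year)."""
    rate, path = start, []
    for _ in range(years):
        rate += drift + random.gauss(0.0, volatility)
        path.append(rate)
    return path

TRIALS, THRESHOLD = 10_000, -1.0      # call it a "depression" if growth falls below -1%
hits = sum(any(r < THRESHOLD for r in simulate_growth_path()) for _ in range(TRIALS))
print(f"Estimated probability of a depression within 5 years: {hits / TRIALS:.2%}")
```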

Most of the predictive problems we deal with in intelligence use subjective probability estimates. We routinely use subjective estimates of probabilities in dealing with broad issues for which no objective estimate is feasible.

Sensitivity Analysis

When a probability estimate is made, it is usually worthwhile to conduct a sensitivity analysis on the result. For example, the occurrence of false alarms in a security system can be evaluated as a probabilistic process.
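A minimal sensitivity-analysis sketch for the false-alarm example: vary the assumed nuisance-alarm rate and observe how the probability estimate moves. The Poisson model and the baseline rate are assumptions for illustration.

```python
# Sensitivity analysis sketch: how sensitive is the false-alarm estimate to the
# assumed nuisance-alarm rate? The baseline rate and variations are hypothetical.
import math

def p_at_least_one_false_alarm(rate_per_day, days=30):
    """Poisson model: probability of one or more false alarms in the period."""
    return 1.0 - math.exp(-rate_per_day * days)

baseline_rate = 0.05   # assumed false alarms per day
for factor in (0.5, 0.75, 1.0, 1.25, 1.5):        # vary the assumption by +/- 50%
    rate = baseline_rate * factor
    print(f"rate = {rate:.3f}/day -> P(>=1 false alarm in 30 days) = "
          f"{p_at_least_one_false_alarm(rate):.2f}")
```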

Forecasting

Projections often work out better than extrapolations over the medium term. But even the best-prepared projections often seem very conservative when compared to reality years later. New political, economic, social, technological, or military developments will create results that were not foreseen even by experts in a field.

Forecasting uses many of the same tools that projection relies on—force analysis and probabilistic reasoning, for example. But it presents a stressing intellectual challenge, because of the difficulty in identifying and assessing the effect of new forces.

The development of alternative futures is essential for effective strategic decision-making. Since there is no single predictable future, customers need to formulate strategy within the context of alternative future states of the target. To this end, it is necessary to develop a model that will make it possible to show systematically the interrelationships of the individually forecast trends and events.

A forecast is not a blueprint of the future, and it typically starts from extrapolations or projections. Forecasters then must expand their scope to admit and juggle many additional forces or factors. They must examine key technologies and developments that are far afield but that nevertheless affect the subject of the forecast.

The Nonlinear Approach to Forecasting

Obviously, a forecasting methodology requires analytic tools or principles. But for any forecasting methodology to be successful, analysts who have significant understanding of many PMESII (political, military, economic, social, infrastructure, and information) factors and the ability to think about issues in a nonlinear fashion are also required.

Futuristic thinking examines deeper forces and flows across many disciplines that have their own order and pattern. In predictive analysis, we may seem to wander about, making only halting progress toward the solution. This nonlinear process is not a flaw; rather it is the mark of a natural learning process when dealing with complex and nonlinear matters.

The sort of person who can do such multidisciplinary analysis of what is likely to happen in the future has a broad understanding of the principles that cause a physical phenomenon, a chemical reaction, or a social reaction to occur. People who are multidisciplinary in their knowledge and thinking can pull together concepts from several fields and assess political, economic, and social, as well as technical, factors. Such breadth of understanding recognizes the similarity of principles and the underlying forces that make them work. It might also be called “applied common sense,” but unfortunately it is not very common. Analysts instead tend to specialize, because in-depth expertise is highly valued by both intelligence management and the intelligence customer.

The failure to do multidisciplinary analysis is often tied closely to mindset.

Techniques and Analytic Tools of Forecasting

Forecasting is based on a number of assumptions, among them the following:

  • The future cannot be predicted, but by taking explicit account of uncertainty, one can make probabilistic forecasts.
  • Forecasts must take into account possible future developments in such areas as organizational changes, demography, lifestyles, technology, economics, and regulation.

For policymakers and executives, the aim of defining alternative futures is to try to determine how to create a better future than the one that would materialize if we merely keep doing what we’re currently doing. Intelligence analysis contributes to this definition of alternative futures, with emphasis on the likely actions of others—allies, neutrals, and opponents.

Forecasting starts through examination of the changing political, military, economic, and social environments.

We first select issues or concerns that require attention. These issues and concerns have component forces that can be identified using a variant of the strategies-to-task methodology.

If the forecast is done well, these scenarios stimulate the customer of intelligence—the executive—to make decisions that are appropriate for each scenario. The purpose is to help the customer make a set of decisions that will work in as many scenarios as possible.

Evaluating Forecasts

Forecasts are judged on the following criteria:

  • Clarity. Can the customer understand the forecast and the forces involved? Is it clear enough to be useful?
  • Credibility. Do the results make sense to the customer? Do they appear valid on the basis of common sense?
  • Plausibility. Are the results consistent with what the customer knows about the world outside the scenario and how this world really works or is likely to work in the future?
  • Relevance. To what extent will the forecasts affect the successful achievement of the customer’s mission?
  • Urgency. To what extent do the forecasts indicate that, if action is required, time is of the essence in developing and implementing the necessary changes?
  • Comparative advantage. To what extent do the results provide a basis for customer decision-making, compared with other sources available to the customer?
  • Technical quality. Was the process that produced the forecasts technically sound? Are the alternative forecasts internally consistent?

 

A “good” forecast is one that meets all or most of these criteria. A “bad” forecast is one that does not. The analyst has to make clear to customers that forecasts are transitory and need constant adjustment to be helpful in guiding thought and action.

Customers typically have a number of complaints about forecasts. Common complaints are that the forecast is obvious; it states nothing new; it is too optimistic, pessimistic, or naïve; or it is not credible because it overlooks obvious trends, events, causes, or consequences. Such objections are actually desirable; they help to improve the product. There are a number of appropriate responses to these objections: If something important is missing, add it. If something unimportant is included, get rid of it. If the forecast seems either obvious or counterintuitive, probe the underlying logic and revise the forecast as necessary.

Summary

Intelligence analysis, to be useful, must be predictive. Some events or future states of a target are predictable because they are driven by convergent phenomena. Some are not predictable because they are driven by divergent phenomena.

For high-impact/low-probability developments, the analysis product—a demonstration scenario—describes how such a development might plausibly start and identifies its consequences. This provides indicators that can be monitored to warn that the improbable event is actually happening.

For analysts predicting systems developments as many as five years into the future, extrapolations work reasonably well; for those looking five to fifteen years into the future, projections usually fare better.

13 Estimative Forces

Estimating is what you do when you don’t know.

The factors or forces that have to be considered in estimation—primarily PMESII factors—vary from one intelligence problem to another. I do not attempt to catalog them in this book; there are too many. But an important aspect of critical thinking, discussed earlier, is thinking about the underlying forces that shape the future. This chapter deals with some of those forces.

The CIA’s tradecraft manual describes an analytic methodology that is appropriate for identifying and assessing forces. Called “outside in” thinking, it has the objective of identifying the critical external factors that could influence how a given situation will develop. According to the tradecraft manual, analysts should develop a generic description of the problem or the phenomenon under study. Then, analysts should:

  • List all the key forces (social, technological, economic, environmental, and political) that could have an impact on the topic, but over which one can exert little influence (e.g., globalization, social stress, the Internet, or the global economy).
  • Focus next on key factors over which an actor or policymaker can exert some influence. In the business world this might be the market size, customers, the competition, suppliers or partners; in the government domain it might include the policy actions or the behavior of allies or adversaries.
  • Assess how each of these forces could affect the analytic problem.
  • Determine whether these forces actually do have an impact on the particular issue based on the available evidence.

 

Political and military factors are often the focus of attention in assessing the likely outcome of conflicts. But the other factors can turn out to be dominant. In the developing conflict between the United States and Japan in 1941, Japan had a military edge in the Pacific. But the United States had a substantial edge in these factors:

  • Political. The United States could call on a substantial set of allies. Japan had Germany and Italy.
  • Economy. Japan lacked the natural resources that the United States and its allies controlled.
  • Social. The United States had almost twice the population of Japan. Japan initially had an edge in the solidarity of its population in support of the government, but that edge was matched within the United States after Pearl Harbor.
  • Infrastructure. The U.S. manufacturing capability far exceeded that of Japan and would be decisive in a prolonged conflict (as many Japanese military leaders foresaw).
  • Information. The prewar information edge favored Japan, which had more control of its news media, while a segment of the U.S. media strongly opposed involvement in war. That edge also evaporated after December 7, 1941.

Inertia

One force that has broad implications is inertia, the tendency to stay on course and resist change.

It has been observed that: “Historical inertia is easily underrated . . . the historical forces molding the outlook of Americans, Russians, and Chinese for centuries before the words capitalism and communism were invented are easy still to overlook.”

Opposition to change is a common reason for organizations’ coming to rest. Opposition to technology in general, for example, is an inertial matter; it results from a desire of both workers and managers to preserve society as it is, including its institutions and traditions.

A common manifestation of the law of inertia is the “not-invented-here,” or NIH, factor, in which the organization opposes pressures for change from the outside.

But all societies resist change to a certain extent. The societies that succeed seem able to adapt while preserving that part of their heritage that is useful or relevant.

From an analyst’s point of view, inertia is an important force in prediction. Established factories will continue to produce what they know how to produce. In the automobile industry, it is no great challenge to predict that next year’s autos will look much like this year’s. A naval power will continue to build ships for some time even if a large navy ceases to be useful.

Countervailing Forces

All forces are likely to have countervailing or resistive forces that must be considered.

The principle is summarized well by Newton’s third law of motion: for every action there is an equal and opposite reaction.

Applications of this principle are found in all organizations and groups, commercial, national, and civilizational. As Samuel P. Huntington noted, “[W]e know who we are . . . often only when we know who we are against.”

A predictive analysis will always be incomplete unless it identifies and assesses opposing forces. All forces eventually meet counterforces. An effort to expand free trade inevitably arouses protectionist reactions. One country’s expansion of its military strength always causes its neighbors to react in some fashion.

 

Counterforces need not be of the same nature as the force they are countering. A prudent organization is not likely to play to its opponent’s strengths. Today’s threats to U.S. national security are asymmetric; that is, there is little threat of a conventional force-on-force engagement by an opposing military, but there is a threat of an unconventional yet lethal attack by a loosely organized terrorist group, as the events of September 11, 2001, and more recently the Boston Marathon bombing, demonstrated. Asymmetric counterforces are common in industry as well. Industrial organizations try to achieve cost asymmetry by using defensive tactics that have a large favorable cost differential between their organization and that of an opponent.

Contamination

Contamination is the degradation of any of the six factors—political, military, economic, social, infrastructure, or information (PMESII factors)—through an infection-like process. Corruption is a form of political and social contamination. Funds laundering and counterfeiting are forms of economic contamination. The result of propaganda is information contamination.

Contamination phenomena can be found throughout organizations as well as in the scientific and technical disciplines. Once such an infection starts, it is almost impossible to eradicate.

Contamination phenomena have analogies in the social sciences, organization theory, and folklore.

At some point in organizations, contamination can become so thorough that only drastic measures will help—such as shutting down or completely rebuilding a facility. Predictive intelligence has to consider the extent of such social contamination in organizations, because contamination is a strong restraining force on an organization’s ability to deal with change.

The effects of social contamination are hard to measure, but they are often highly visible.

The contamination phenomenon has an interesting analogy in the use of euphemism in language. It is well known that if a word has or develops negative associations, it will be replaced by a succession of euphemisms. Such words have a half-life, or decay rate, that is shorter as the word association becomes more negative. In older English, the word stink meant “to smell.” The problem is that most of the strong impressions we get from scents are unpleasant ones; so each word for olfactory senses becomes contaminated over time and must be replaced. Smell has a generally unpleasant connotation now.

The renaming of a program or project is a good signal that the program or project is in trouble—especially in Washington, D.C., but the same rule holds in any culture.

Synergy

Predictive intelligence analysis almost always requires multidisciplinary understanding. Therefore, it is essential that the analysis organization’s professional development program cultivate a professional staff that can understand a broad range of concepts and function in a multidisciplinary environment. One of the most basic concepts is that of synergy: The whole can be more than the sum of its parts due to interactions among the parts. Synergy is therefore, in some respects, the opposite of the countervailing forces discussed earlier.

Synergy is not really a force or factor as much as a way of thinking about how forces or factors interact. Synergy can result from cooperative efforts and alliances among organizations (synergy on a large scale).

Netwar is an application of synergy.

In electronic warfare, it is now well known that a weapons system may be unaffected by a single countermeasure; however, it may be degraded by a combination of countermeasures, each of which fails individually to defeat it. The same principle applies in a wide range of systems and technology developments: The combination may be much greater than the sum of the components taken individually.

Synergy is the foundation of the “swarm” approach that military forces have applied for centuries—the coordinated application of overwhelming force.

In planning a business strategy against a competitive threat, a company will often put in place several actions that, each taken alone, would not succeed. But the combination can be very effective. As a simple example, a company might use several tactics to cut sales of a competitor’s new product: start rumors of its own improved product release, circulate reports on the defects or expected obsolescence of the competitor’s product, raise buyers’ costs of switching from its own to the competitor’s product, and tie up suppliers by using exclusive contracts. Each action, taken separately, might have little impact, but the synergy—the “swarm” effect of the actions taken in combination—might shatter the competitor’s market.

Feedback

In examining any complex system, it is important for the analyst to evaluate the system’s feedback mechanism. Feedback is the mechanism whereby the system adapts—that is, learns and changes itself. The following discussion provides more detail about how feedback works to change a system.

Many of the techniques for prediction depend on the assumption that the process being analyzed can be described, using systems theory, as a closed-loop system. Under the mathematical theory of such systems, feedback is a controlling force in which the output is compared with the objective or standard, and the input process is corrected as necessary to bring the output toward a desired state.

The feedback function therefore determines the behavior of the total system over time. Only one feedback loop is shown in the figure, but many feedback loops can exist, and usually do in a complex system.
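A minimal closed-loop sketch: each cycle the output is compared with the objective and the input is corrected by a fraction (the gain) of the error. The objective, gain, and starting value are invented.

```python
# Minimal closed-loop feedback sketch: each cycle, compare output with the
# objective and correct by a fraction (gain) of the error.
# Objective, gain, and starting output are hypothetical.

objective, output, gain = 100.0, 60.0, 0.4

for cycle in range(1, 9):
    error = objective - output          # compare output with the desired state
    output += gain * error              # corrective action moves output toward it
    print(f"cycle {cycle}: output = {output:.1f} (error was {error:+.1f})")

# With 0 < gain < 1 the output converges on the objective; a gain that is too
# large, or feedback that arrives late, can instead produce oscillation or instability.
```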

Notes on Countering Threat Networks

Accession Number: AD1025082

Title: Countering Threat Networks

Descriptive Note: Technical Report

Corporate Author: JOINT STAFF WASHINGTON DC

Abstract: 

This publication has been prepared under the direction of the Chairman of the Joint Chiefs of Staff (CJCS). It sets forth joint doctrine to govern the activities and performance of the Armed Forces of the United States in joint operations, and it provides considerations for military interaction with governmental and nongovernmental agencies, multinational forces, and other interorganizational partners. It provides military guidance for the exercise of authority by combatant commanders and other joint force commanders (JFCs), and prescribes joint doctrine for operations and training. It provides military guidance for use by the Armed Forces in preparing and executing their plans and orders. It is not the intent of this publication to restrict the authority of the JFC from organizing the force and executing the mission in a manner the JFC deems most appropriate to ensure unity of effort in the accomplishment of objectives. The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts.

 

Descriptors: Threats, military organizations, intelligence collection

 

Distribution Statement: APPROVED FOR PUBLIC RELEASE

 

Link to Article: https://apps.dtic.mil/sti/citations/AD1025082

 

Notes

Scope

This publication provides joint doctrine for joint force commanders and their staffs to plan, execute, and assess operations to identify, neutralize, disrupt, or destroy threat networks.

Introduction

The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts.

Policy and Strategy

CTN planning and operations require extensive coordination as well as innovative, cross-cutting approaches that utilize all instruments of national power. The national military strategy describes the need of the joint force to operate in this complex environment.

Challenges of the Strategic Security Environment

CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply.

The Strategic Approach

The groundwork for successful countering threat networks activities starts with information and intelligence to develop an understanding of the operational environment and the threat network.

Military engagement, security cooperation, and deterrence are just some of the activities that may be necessary to successfully counter threat networks without deployment of a joint task force.

Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants.

Threat Network Fundamentals

Threat Network Construct

A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or operational environment (OE) that can be targeted for action. A link is a behavioral, physical, or functional relationship between nodes. Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying centers of gravity (COGs), networks, and cells the joint force commander (JFC) may wish to influence or change during an operation.

Network Analysis

Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes. The political, military, economic, social, information, and infrastructure systems perspective is a useful starting point for analysis of threat networks. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

Determining and Analyzing Node-Link Relationships

Social network analysis provides a method that helps the JFC and staff understand the relevance of nodes and links. The strength or intensity of a single link can be relevant to determining the importance of the functional relationship between nodes and the overall significance to the larger system. The number and strength of nodal links within a set of nodes can be indicators of key nodes and a potential COG.
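A minimal social-network-analysis sketch, assuming the open-source networkx package and an invented set of nodes and links, that flags candidate key nodes by degree and betweenness centrality:

```python
# Minimal social-network-analysis sketch: flag candidate key nodes by centrality.
# Node names and links are hypothetical; assumes the networkx package is installed.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("financier", "facilitator"), ("facilitator", "cell_leader_A"),
    ("facilitator", "cell_leader_B"), ("cell_leader_A", "bomb_maker"),
    ("cell_leader_A", "recruiter"), ("cell_leader_B", "smuggler"),
    ("smuggler", "supplier"), ("recruiter", "foot_soldier"),
])

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)

# Rank nodes by betweenness: high values suggest brokers whose removal
# would fragment the network (candidate key nodes or potential COGs).
for node in sorted(G.nodes, key=betweenness.get, reverse=True)[:3]:
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
```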

Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals. Examples of cells include: operational, logistical, training, communications, financial, and WMD proliferation cells.

Networked Threats and Their Impact on the Operational Environment

Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

Threat Network Characteristics

Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks take many forms and serve different purposes, but all are composed of people, processes, places, and material, or combinations of these.

Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Networks possess many characteristics important to their success and survival, such as flexible command and control structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt.

Network Engagement

Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks by building mutual trust and cooperation. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.

Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups. These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. To successfully accomplish mission goals, the JFC should give equal consideration to the impact of actions on multinational and friendly forces, the local population, and criminal enterprises, as well as on the adversary.

Identify a Threat Network

Threat networks often attempt to remain hidden. By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify individual networks within. A thorough joint intelligence preparation of the operational environment (JIPOE) product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

Planning to Counter Threat Networks

Joint Intelligence Preparation of the Operational Environment and Threat Networks

JIPOE is the first step in identifying the essential elements that constitute the OE and is used to plan and conduct operations against threat networks. The focus of the JIPOE analysis for threat networks is to help characterize aspects of the networks.

Understanding the Threat’s Network

To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organizational goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. Building a network function template is a method of organizing known information about the network’s structure and functions. By developing a network function template, the information can be initially understood and then used to facilitate critical factors analysis (CFA). CFA is an analytical framework to assist planners in analyzing and identifying a COG and to aid operational planning.

Targeting Evaluation Criteria

A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis. The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors: network affiliations, criticality, accessibility, recuperability, vulnerability, effect, and recognizability.
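A minimal CARVER-style ranking sketch: rate each candidate target on each factor and rank by total score. The targets, the 1-to-5 scale, and every score below are invented, and only the six classic CARVER factors are used; the network-affiliation factor described above could be added as a seventh, and actual criteria definitions and weights are set by the command doing the analysis.

```python
# Minimal CARVER-style target ranking sketch (scores 1-5, higher = more attractive).
# Targets and scores are hypothetical; real criteria and weights vary by command.

FACTORS = ["criticality", "accessibility", "recuperability",
           "vulnerability", "effect", "recognizability"]

targets = {
    "finance courier":        {"criticality": 4, "accessibility": 3, "recuperability": 4,
                               "vulnerability": 4, "effect": 3, "recognizability": 2},
    "weapons cache":          {"criticality": 3, "accessibility": 4, "recuperability": 2,
                               "vulnerability": 5, "effect": 3, "recognizability": 4},
    "propaganda media cell":  {"criticality": 3, "accessibility": 2, "recuperability": 3,
                               "vulnerability": 3, "effect": 4, "recognizability": 3},
}

ranked = sorted(targets.items(),
                key=lambda kv: sum(kv[1][f] for f in FACTORS), reverse=True)
for name, scores in ranked:
    print(f"{name}: total CARVER score = {sum(scores[f] for f in FACTORS)}")
```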

Countering Threat Networks Through the Planning of Phases

JFCs may plan and conduct CTN activities throughout all phases of a given operation. Upon gaining an understanding of the various threat networks in the OE through the joint planning process (JPP), JFCs and their staffs develop a series of prudent (feasible, suitable, and acceptable) CTN actions to be executed in conjunction with other phased activities.

Activities to Counter Threat Networks

Targeting Threat Networks

JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, critical capabilities (network’s functions), and critical requirements for the network. Joint force targeting efforts should employ a comprehensive approach, leveraging military force and civil agency capabilities that keep continuous pressure on multiple nodes and links of the network’s structure.

Desired Effects on Networks

When commanders decide to generate an effect on a network by engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The selection of effects desired on a network is conducted as part of target selection, which includes consideration of the capabilities to employ that were identified during the capability analysis step of the joint targeting cycle.

Targeting

CTN targets can be characterized as targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will make related to the JFC’s intent, key nodes such as high-value individuals, or longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time to exploit them. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

Lines of Effort by Phase

During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks. However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE.

Theater Concerns in Countering Threat Networks

Many threat networks are transnational, recruiting, financing, and operating on a global basis. Theater commanders need to be aware of the relationships among these networks and identify the basis for their particular connection to a geographic combatant commander’s area of responsibility.

Operational Approaches to Countering Threat Networks

There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus of the operation. In others, a balanced approach through multiple lines of operation and lines of effort may be necessary, ensuring that civilian concerns are met while protecting the population from threat network operators.

Assessments

Assessment of Operations to Counter Threat Networks

CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for human intelligence.

Operation Assessment

CTN activities may require assessing multiple measures of effectiveness (MOEs) and measures of performance (MOPs), depending on threat network activity. The assessment process provides a feedback mechanism to the JFC to provide guidance and direction for future operations and targeting efforts against threat networks.

Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate. In conducting each of these activities, assessors must be linked to JPP, understand the operation plan, and inform the intelligence process as to what information is required to support indicators, MOEs, and MOPs. In assessing CTN operations, quantitative data and analysis will inform assessors.

CHAPTER I

OVERVIEW

“The emergence of amorphous, adaptable, and networked threats has far-reaching implications for the US national security community. These threats affect DOD [Department of Defense] priorities and war fighting strategies, driving greater integration with other departments and agencies performing national security missions, and create the need for new organizational concepts and decision-making paradigms. The impacts are likely to influence defense planning for years to come.”

Department of Defense Counternarcotics and Global Threats Strategy, April 2011

Threat networks are those whose size, scope, or capabilities threaten US interests. These networks may include the underlying informational, economic, logistical, and political components that enable them to function. These threats create a high level of uncertainty and ambiguity in terms of intent, organization, linkages, size, scope, and capabilities. These threat networks jeopardize the stability and sovereignty of nation-states, including the US.

They tend to operate among civilian populations and in the seams of society and may have components that are recognized locally as legitimate parts of society. Collecting information and intelligence on these networks, their nodes, links, and affiliations is challenging, and analysis of their strengths, weaknesses, and centers of gravity (COGs) differs greatly from traditional nation-state adversaries.

Threat networks are part of the operational environment (OE). These networks utilize existing networks and may create new networks that seek to move money, people, information, and goods for the benefit of the network.

Not all of these interactions create instability and not all networks are a threat to the joint force and its mission. While some societies may accept a certain degree of corruption and criminal behavior as normal, it is never acceptable for these elements to develop networks that begin to pose a threat to national and regional stability. When a network begins to pose a threat, action should be considered to counter the threat.

This doctrine will focus on those networks that do present a threat with an understanding that friendly, neutral, and threat networks overlap and share nodes and links. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Some politically or ideologically based networks may avoid open confrontation with US forces; nevertheless, these networks may threaten mission success. Their activities may include spreading ideology, moving money, moving supplies (including weapons and fighters), human trafficking, drug smuggling, information relay, or acts of terrorism toward the population or local governments. Threat networks may be local, regional, or international and a threat to deployed joint forces and the US homeland.

Understanding a threat network’s motivation and objectives is required to effectively counter its efforts. The issues that drive a network and its ideology should be clearly understood. For example, they may be driven by grievances, utopian ideals, power, revenge over perceived past wrongs, greed, or a combination of these.

CTN is one of three pillars of network engagement that includes partnering with friendly networks and engaging with neutral networks in order to attain the commander’s desired military end state within a complex OE. It consists of activities to pressure threat networks or mitigate their adverse effects. These activities normally occur continuously and simultaneously at multiple levels (tactical, operational, and strategic) and may employ lethal and/or nonlethal capabilities in a direct or indirect manner. The most effective operations pressure and influence elements of these networks at multiple fronts and target multiple nodes and links.

The networks found in the OE may be simple or complex and must be identified and thoroughly analyzed. Neither all threats nor all elements of their supporting networks can be defeated, particularly if they have a regional or global presence. Certain elements of the network can be deterred, other parts neutralized, and some portions defeated. Engaging these threats through their supporting networks is not an adjunct or ad hoc set of operations and may be the primary mission of the joint force. It is not a stand-alone operation planned and conducted separately from other military operations. CTN should be fully integrated into the joint operational design, joint intelligence preparation of the operational environment (JIPOE), joint planning process (JPP), operational execution, joint targeting process, and joint assessments.

  1. Threat networks are often the most complex adversaries that exist within the OEs and frequently employ asymmetric methods to achieve their objectives. Disrupting their global reach and ability to influence events far outside of a specific operational area requires unity of effort across combatant commands (CCMDs) and all instruments of national power.

Joint staffs must recognize that threat networks must be targeted in a comprehensive manner. This is accomplished by leveraging the full spectrum of capabilities available within the joint force commander’s (JFC’s) organization, from intergovernmental agencies, and/or from partner nations (PNs).

  1. Policy and Strategy
  2. DOD strategic guidance recognizes the increasing interconnectedness of the international order and the corresponding complexity of the strategic security environment.

Threat networks and their linkages transcend geographic and functional CCMD boundaries.

  1. CCDRs must be able to employ a joint force to work with interagency and interorganizational security partners in the operational area to shape, deter, and disrupt threat networks. They may employ a joint force with PNs to neutralize and defeat threat networks.
  2. CCDRs develop their strategies by analyzing all aspects of the OE and developing options to set conditions to attain strategic end states. They translate these options into an integrated set of CCMD campaign activities described in CCMD campaign plans and associated subordinate and supporting plans. CCDRs must understand the OE, recognize nation-state use of proxies and surrogates, and be vigilant to the dangers posed by super-empowered threat networks. Super-empowered threat networks are networks that develop or obtain nation-state capabilities in terms of weapons, influence, funding, or lethal aid.

In combination with US diplomatic, economic, and informational efforts, the joint force must leverage partners and regional allies to foster cooperation in addressing transnational challenges.

  1. Challenges of the Strategic Security Environment
  2. The strategic security environment is characterized by uncertainty, complexity, rapid change, and persistent conflict. Advances in technology and information have enabled individual non-state actors and networks to move money, people, and resources and to spread violent ideology around the world. Non-state actors are able to conduct activities globally, and nation-states leverage proxies to launch and maintain sustained campaigns in remote areas of the world.

Alliances, partnerships, cooperative arrangements, and inter-network conflict may morph and shift week-to-week or even day-to-day. Threat networks or select components often operate clandestinely. The organizational construct, geographical location, linkages, and presence among neutral or friendly populations are difficult to detect during JIPOE, and once a rudimentary baseline is established, ongoing changes are difficult to track. This makes traditional intelligence collection and analysis, as well as operations and assessments, much more challenging than against traditional military threats.

  1. Deterring threat networks is a complex and difficult challenge that is significantly different from classical notions of deterrence. Deterrence is most classically thought of as the threat to impose such high costs on an adversary that restraint is the only rational conclusion. When dealing with violent extremist organizations and other threat networks, deterrence is likely to be ineffective due to radical ideology, diffuse organization, and lack of ownership of territory.

Due to the complexity of deterring violent extremist organizations, flexible approaches must be developed according to a network’s ideology, organization, sponsorship, goals, and other key factors to clearly communicate that the targeted action will not achieve the network’s objectives.

  1. CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply. These networked threats transcend operational areas, areas of influence, areas of interest, and the information environment (to include cyberspace [network links and nodes essential to a particular friendly or adversary capability]). The US military is one of the instruments of US national power that may be employed in concert with interagency, international, and regional security partners to counter threat networks.
  2. Threat networks have the ability to remotely plan, finance, and coordinate attacks through global communications (to include social media), transportation, and financial networks. These interlinked areas allow for the high-speed, high-volume exchange of ideas, people, goods, money, and weapons.

“Terrorists and insurgents increasingly are turning to TOC [transnational organized crime] to generate funding and acquire logistical support to carry out their violent acts. While the crime-terror[ist] nexus is still mostly opportunistic, this nexus is critical nonetheless, especially if it were to involve the successful criminal transfer of WMD [weapons of mass destruction] material to terrorists or their penetration of human smuggling networks as a means for terrorists to enter the United States.”

Strategy to Combat Transnational Organized Crime, July 2011

Using the global communications network, threat networks have demonstrated their ability to recruit like-minded individuals from outside of their operational area and have been successful in recruiting even inside the US and PNs. Many threat networks have mastered social media and tapped into the proliferation of traditional and nontraditional news media outlets to create powerful narratives, which generate support and sympathy in other countries. Cyberspace is equally as important to the threat network as physical terrain. Future operations will require the ability to monitor and engage threat networks within cyberspace, since this provides them an opportunity to coordinate sophisticated operations that advance their interests.

  1. Threat Networks and Levels of Warfare
  2. The purpose of CTN activities is to shape the security environment, deter aggression, provide freedom of maneuver within the operational area and its approaches, and, when necessary, defeat threat networks.

Supporting activities may include training, use of military equipment, subject matter expertise, cyberspace operations, information operations (IO) (use of information-related capabilities [IRCs]), military information support operations (MISO), counter threat finance (CTF), interdiction operations, raids, or civil-military operations.

In nearly all cases, diplomatic efforts, sanctions, financial pressure, criminal investigations, and intelligence community activities will complement military operations.

  1. Threat networks and their supporting network capabilities (finance, logistics, smuggling, command and control [C2], etc.) will present challenges to the joint force at the tactical, operational, and strategic levels due to their ability to adapt to conditions in the OE. Figure I-1 depicts some of the threat networks that may be operating in the OE and their possible impact on the levels of warfare.

Complex alliances between threat, neutral, and friendly networks may vary at each level, by agency, and in different geographic areas in terms of their membership, composition, goals, resources, strengths, and weaknesses. Strategically they may be part of a larger ideological movement at odds with several regional governments, have regional aspirations for power, or oppose the policies of nations attempting to achieve military stability in a geographic region.

Tactically, there may be local alliances with criminal networks, tribes, or clans that may not be ideologically aligned with one another, but could find common cause in opposing joint force operations in their area or harboring grievances against the host nation (HN) government. Analysis will be required for each level of warfare and for each network throughout the operational area. This analysis should be aligned with analysis from intelligence community agencies and international partners that often inject critical information that may impact joint planning and operations.

  1. The Strategic Approach
  2. The groundwork for successful CTN activities starts with information and intelligence to develop an understanding of the OE and the threat network.
  3. Current operational art and operational design as described within JPP is applicable to CTN. Threat networks tend to be difficult to collect intelligence on, analyze, and understand. Therefore, several steps within the operational approach methodology outlined in JP 5-0, Joint Planning, such as understanding the OE and defining the problem may require more resources and time.

JP 2-01.3, Joint Intelligence Preparation of the Operational Environment, provides the template for this process used to analyze all relevant aspects of the OE. Within operational design, determining the strategic, operational, and tactical COGs and decisive points of multiple threat networks will be more challenging than analyzing a traditional military force…

  1. Strategic and operational approaches require greater interagency coordination. This is critical for achieving unity of effort against threat network critical vulnerabilities (CVs) (see Chapter II, “Threat Network Fundamentals”). When analyzing networks, there will never be a single COG. The identification of the factors that comprise the COG(s) for a network will still require greater analysis, since individual members of the network may be motivated by different factors. For example, some members may join a network for ideological reasons, while others are motivated by monetary gain. This aspect must be understood when analyzing human networks.
  2. Threat networks will adapt rapidly and sometimes “out of view” of intelligence collection efforts.

Intelligence sharing… must be complemented by integrated planning and execution to achieve the optimal operational tempo to defeat threats. Traditionally defined geographic operational areas, roles, responsibilities, and authorities often require greater cross-area coordination and adaptation to counter threat networks. Unity of effort seeks to synchronize understanding of and actions against a group’s or groups’ political, military, economic, social, information, and infrastructure (PMESII) systems as well as the links and nodes that are part of the group’s supporting networks.

  1. Joint Force and Interagency Coordination
  2. The USG and its partners face a wide range of local, national, and transnational irregular challenges to the stability of the international system. Successful deterrence of non- state actors is more complicated and less predictable than in the past, and non-state actors may derive significant capabilities from state sponsorship.
  3. Adapting to an increasingly complex world requires unity of effort to counter violent extremism and strengthen regional security.

To improve understanding, USG departments and agencies should strive to develop strong relationships while learning to speak each other’s language, or better yet, use a common lexicon.

  1. At each echelon of command, the actions taken to achieve stability vary only in the amount of detail required to create an actionable picture of the enemy and the OE. Each echelon of command has unique functions that must be synchronized with the other echelons, as part of the overall operation to defeat the enemy. Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants. This is best achieved through an integrated approach. A common interagency assessment of the OE establishes a deep and shared understanding of the cultural, ideological, religious, demographic, and geographical factors that affect the conditions in the OE.
  2. Establishing a whole-of-government approach to achieve unity of effort should begin during planning. Achieving unity of effort is problematic due to challenges in information sharing, competing priorities, differences in lexicon, and uncoordinated activities.
  3. Responsibilities
  4. Operations against threat networks require unity of effort across the USG and multiple authorities outside DOD. Multiple instruments of national power will be operating in close proximity and often conducting complementary activities across the strategic, operational, and tactical levels. In order to integrate, deconflict, and synchronize the activities of these multiple entities, the commander should form a joint interagency coordination group, with representatives from all participants operating in or around the operational area.
  5. The military provides general support to a number of USG departments and agencies for their CTN activities ranging from CT to CD. A number of USG departments and agencies have highly specialized interests in threat networks, and their activities directly impact the military’s own CTN activities. For example, the Department of the Treasury’s CTF activities help to deny the threat network the funding needed to conduct operations.

CHAPTER II

THREAT NETWORK FUNDAMENTALS

  1. Threat Network Construct

  1. Network Basic Components. All networks, regardless of size, share basic components and characteristics. Understanding common components and characteristics will help to develop and establish common joint terminology and standardize outcomes for network analysis, CTN planning, activities, and assessments across the joint force and CCMDs.
  2. Networks Terminology. A threat network consists of interconnected nodes and links and may be organized using subordinate and associated networks and cells. Understanding the individual roles and connections of each element is as important to conducting operations as understanding the overall network structure, known as the network topology.

Network boundaries must also be determined, especially when dealing with overlapping networks and global networks. Operations will rarely be possible against an entire threat or its supporting networks. Understanding the network topology allows planners to develop an operational approach and associated tactics necessary to create the desired effects against the network.

(1) Network. A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. Sometimes the terms network and system are synonymous. This publication uses the term network to distinguish threat networks from the multitude of other systems, such as an air defense system, communications system, transportation system, etc.

(2) Cell. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization.

(3) Node. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or OE that can be targeted for action. Nodes may fall into one or more PMESII categories.

(4) Link. A link is a behavioral, physical, or functional relationship between nodes.

Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying COGs, networks, and cells the JFC may wish to influence or change during an operation.
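To make the node-and-link construct concrete, the sketch below is an illustrative addition, not part of this publication: the node names, attribute labels, and the use of the open-source networkx library are assumptions. It shows a notional network represented as a graph in which each node carries a PMESII category and each link carries a relationship type.

```python
# Illustrative sketch only: a notional threat network represented as nodes and links.
# Node names, attribute labels, and the networkx library are assumptions for demonstration.
import networkx as nx

G = nx.Graph()

# Nodes represent people, places, or physical objects, tagged with a notional PMESII category.
G.add_node("financier", pmesii="economic")
G.add_node("cell_leader", pmesii="military")
G.add_node("safe_house", pmesii="infrastructure")
G.add_node("courier", pmesii="social")

# Links represent behavioral, physical, or functional relationships between nodes.
G.add_edge("financier", "cell_leader", relationship="funds")
G.add_edge("cell_leader", "courier", relationship="directs")
G.add_edge("courier", "safe_house", relationship="uses")

# The topology (who connects to whom) is now queryable for further analysis.
print(G.nodes(data=True))
print(G.edges(data=True))
```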

  1. Network Analysis
  2. Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes.

The PMESII systems perspective is a useful starting point for analysis of threat networks.

Network analysis facilitates identification of significant information about networks that might otherwise go unnoticed. For example, network analysis can uncover positions of power within a network, show the cells that account for its structure and organization, find individuals or cells whose removal would greatly alter the network, and facilitate measuring change over time.
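One of the points above, finding individuals or cells whose removal would greatly alter the network, can be illustrated with a small, hedged sketch. The node names and the networkx library are assumptions; the sketch simply measures how badly removing each node fragments a notional network.

```python
# Illustrative sketch only: estimating how removing a single node fragments a
# notional network. Node names and the networkx library are assumptions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("financier", "broker"), ("broker", "cell_leader"),
    ("cell_leader", "fighter_1"), ("cell_leader", "fighter_2"),
    ("broker", "smuggler"), ("smuggler", "supplier"),
])

def fragmentation_after_removal(graph, node):
    """Number of disconnected components left if `node` is removed."""
    reduced = graph.copy()
    reduced.remove_node(node)
    return nx.number_connected_components(reduced)

# Rank nodes by how badly their removal fragments the network.
impact = {n: fragmentation_after_removal(G, n) for n in G.nodes}
for node, parts in sorted(impact.items(), key=lambda kv: kv[1], reverse=True):
    print(node, parts)
```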

  1. All networks are influenced by and in turn influence the OEs in which they exist. Analysts must understand the underlying conditions; the frictions between individuals and groups; familial, business, and governmental relationships; and drivers of instability that are constantly subject to change and pressures. All of these factors evolve as the networks change shape, increase or decrease capacity, and strive to influence and control elements within the OE, and they contribute to or hinder the networks’ successes. Environmental framing is the selection, organization, and interpretation of a complex reality to make sense of it; it serves as a guide for analyzing, understanding, and acting.
  2. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

(1) Catalyst. A catalyst is a condition or variable within the OE that could motivate or bind a group of individuals together to take some type of action to meet their collective needs. These catalysts may be identified as critical variables as units conduct their evaluation of the OE and may consist of a person, idea, need, event, or some combination thereof. The potential exists for the catalyst to change based on the conditions of the OE.

(2) Receptive Audience. A receptive audience is a group of individuals that feel they have more to gain by engaging in the activities of the network than by not participating. Additionally, in order for a network to form, the members of the network must have the motivation and means to conduct actions that address the catalyst that generated the network. Depending on the type of network and how it is organized, leadership may or may not be necessary for the network to form, survive, or sustain collective action. The receptive audience originates from the human dimension of the OE.

(3) Accommodating Environment. An accommodating environment is the conditions within the OE that facilitate the organization and actions of a network. Proper conditions must exist within the OE for a network to form to fill a real or perceived need. Networks can exist for a time without an accommodating environment, but without it the network will ultimately fail.

  1. Networks utilize the PMESII system structure within the OE to form, survive, and function. Like the joint force, threat networks will also have desired end states and objectives. As analysis of the OE is conducted, the joint staff should identify the critical variables within the OE for the network. A critical variable is a key resource or condition present within the OE that has a direct impact on the commander’s objectives and may affect the formation and sustainment of networks.
  2. Determining and Analyzing Node-Link Relationships

Links are derived from data or extrapolations based on data. A benefit of graphically portraying node-link relationships is that the potential impact of actions against certain nodes can become more evident. Social network analysis (SNA) provides a method that helps the JFC and staff understand the relevance of nodes and links. Network mapping is essential to conducting SNA.

  1. Link Analysis. Link analysis identifies and analyzes relationships between nodes in a network. Network mapping provides a visualization of the links between nodes, but does not provide the qualitative data necessary to fully define the links.

During link analysis, the analyst examines the conditions of the relationship: strong or weak, informal or formal, and whether it is formed by familial, social, cultural, political, virtual, professional, or any other ties.
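The qualitative link attributes described above can be attached directly to a graph’s edges so that, for example, weak or informal relationships can be filtered out for closer examination. The sketch below is illustrative only; all names, attribute fields, and the use of the networkx library are assumptions.

```python
# Illustrative sketch only: qualitative link attributes (type, strength, formality)
# attached to edges so weak, informal links can be filtered during link analysis.
# All names and the networkx library are assumptions for demonstration.
import networkx as nx

G = nx.Graph()
G.add_edge("cell_leader", "deputy", tie="familial", strength="strong", formal=True)
G.add_edge("deputy", "local_fixer", tie="professional", strength="weak", formal=False)
G.add_edge("local_fixer", "border_guard", tie="financial", strength="weak", formal=False)

# Pull out only the weak, informal relationships, which may be easier to disrupt
# or to monitor for change over time.
weak_informal = [
    (u, v, d) for u, v, d in G.edges(data=True)
    if d["strength"] == "weak" and not d["formal"]
]
for u, v, d in weak_informal:
    print(u, "--", v, d["tie"])
```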

  1. Nodal Analysis. Individuals are associated with numerous networks due to their individual identities. A node’s location within a network and in relation to other nodes carries identity, power, or belief and influences behavior.

Examples of these types of identities include locations of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. These individual attributes are often collected during identity activities and fused with attributes from unrelated collection activities to form identity intelligence (I2) products. Some aspects used to help understand and define an individual are directly related to the conditions that supported the development of relationships to other nodes.

  1. Network Analysis. Throughout the JIPOE process, at every echelon and production category, one of the most important, but least understood, aspects of analysis is sociocultural analysis (SCA). SCA is the study, evaluation, and interpretation of information about adversaries and relevant actors through the lens of group-level decision making to discern catalysts of behavior and the context that shapes behavior. SCA considers relationships and activities of the population, SNA (looking at the interpersonal, professional, and social networks tied to an individual), as well as small and large group dynamics.

SNA not only examines individuals and groups of individuals within a social structure such as a terrorist, criminal, or insurgent organization, but also examines how they interact. Interactions are often repetitive, enduring, and serve a greater purpose, and the interaction patterns affect behavior. If enough nodes and links information can be collected, behavior patterns can be observed and, to some extent, predicted.

SNA differs from link analysis because it only analyzes similar objects (e.g., people or organizations), not the relationships between the objects. SNA provides objective analysis of current and predicted network structure and interaction of networks that have an impact on the OE.

  1. Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals.

Networks do not exist in a vacuum. They normally share nodes and links with other networks. Each network may require a unique operational approach as they adapt to their OE or to achieve new objectives. They may form a greater number of cells if they are capable of independent operations consistent with the threat network’s overall operational goals.

They may move to a more hierarchical system due to lack of leadership, questions regarding loyalty of subordinates, or inexperienced lower-level personnel. Understanding these dimensions allows a commander to craft a more effective operational approach. The cells described here are examples only; the list is not all-inclusive. Each network and cell will change, adapt, and morph over time.

  1. Operational Cells. Operational cells carry out the day-to-day operations of the network and are typically people-based (e.g., terrorists, guerrilla fighters, drug dealers). It is extremely difficult to gather intelligence on and depict every single node and link within an operational network. However, understanding key nodes, links, and cells that are particularly effective allows for precision targeting and greater effectiveness.
  2. Logistical Cells. Logistical cells provide threat networks the necessary supplies, weapons, ammunition, fuel, and military equipment to operate. Logistical cells are easier to observe and target than operational or communications cells since they move large amounts of material, which makes them more visible. These cells may include individuals who are not as ideologically motivated or committed as those in operational networks.

Threat logistical cells often utilize legitimate logistics nodes and links to hide their activities “in the noise” of legitimate supplies destined for a local or regional economy.

  1. Training Cells. Most network leaders desire to grow the organization for power, prestige, and advancement of their goals. Logistical cells may be used to move material, trainers, and trainees into a training area, or that portion of logistics may be a distinct part of the training cells.

Training requires the aggregation of new personnel and often includes physical structures to support activities, both of which may be visible and provide additional information to better understand the network.

  1. Communications Cells. Most threat networks have, at a minimum, rudimentary communications cells for operational, logistical, and financial purposes, and another to communicate their strategic narrative to a target or neutral population.

Threat networks’ use of Internet-based social media platforms increases the likelihood of gathering information on them, including geospatial information.

  1. Financial Cells. Threat networks require funding for every aspect of their activities, to maintain and expand membership, and to spread their message. Their financial cell moves money from legitimate and illegitimate business operations, foreign donors, and taxes collected or coerced from the population to the operational area.
  2. WMD Proliferation Cells. Many of these cells are not organized specifically for the proliferation of WMD. In fact, many existing cells may be utilized out of convenience. Examples of existing cells include human trafficking, counterfeiting, and drug trafficking.

The JFC should use a systems perspective to better understand the complexity of the OE and associated networks. This perspective looks across the PMESII systems to identify the nodes, links, COGs, and potential vulnerabilities within the network.

  1. Analyze the Network

Key nodes exist in every major network and are critical to their function. Nodes may be people, places, or things. For example, a town that is the primary conduit for movement of illegal narcotics would be the key node in a drug trafficking network. Some may become decisive points for military operations since, when acted upon, they could allow the JFC to gain a marked advantage over the adversary or otherwise contribute materially to achieving success. Weakening or eliminating a key node should cause its related group of nodes and links to function less effectively or not at all, while strengthening the key node could enhance the performance of the network as a whole. Key nodes often are linked to, resident in, or influence multiple networks.

Node centrality can highlight possible positions of importance, influence, or prominence and patterns of connections. A node’s relative centrality is determined by analyzing measurable characteristics: degree, closeness, betweenness, and eigenvector.
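The four centrality measures named above can be computed for any mapped network. The sketch below is an illustrative addition; the node names and the use of the networkx library are assumptions, and the tiny graph exists only to show how each measure highlights a different kind of prominence.

```python
# Illustrative sketch only: the four centrality measures named above (degree,
# closeness, betweenness, eigenvector) computed for a notional network.
# Node names and the networkx library are assumptions.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("emir", "finance_chief"), ("emir", "ops_chief"),
    ("ops_chief", "cell_a"), ("ops_chief", "cell_b"),
    ("finance_chief", "hawala_broker"), ("hawala_broker", "cell_b"),
])

measures = {
    "degree": nx.degree_centrality(G),            # how many direct links a node has
    "closeness": nx.closeness_centrality(G),      # how near a node is to all others
    "betweenness": nx.betweenness_centrality(G),  # how often a node sits on shortest paths
    "eigenvector": nx.eigenvector_centrality(G, max_iter=1000),  # ties to well-connected nodes
}

for name, scores in measures.items():
    top = max(scores, key=scores.get)
    print(f"{name:12s} -> most central node: {top} ({scores[top]:.2f})")
```

Different measures can point to different nodes; a courier may score high on betweenness while a leader scores high on degree, which is why the text treats them as complementary indicators rather than a single ranking.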

CHAPTER III

NETWORKS IN THE OPERATIONAL ENVIRONMENT

“How many times have we killed the number three in al-Qaida? In a network, everyone is number three.”

Dr. John Arquilla, Naval Postgraduate School

  1. Networked Threats and Their Impact on the Operational Environment
  2. In a world increasingly characterized by volatility, uncertainty, complexity, and ambiguity, a wide range of local, national, and transnational irregular challenges to the stability of the international system have emerged. Traditional threats like insurgencies and criminal gangs have been exploiting weak or corrupt governments for years, but the rise of transnational extremists and their active cooperation with traditional threats has changed the global dynamic.
  3. All networks are vulnerable, and a JFC and staff armed with a comprehensive understanding of a threat network’s structure, purpose, motivations, functions, interrelationships, and operations can determine the most effective means, methods, and timing to exploit that vulnerability.

Network analysis and exploitation are not simple tasks. Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. This de facto governance of an area by a threat network makes it more difficult for the joint force to simultaneously attack a threat and meet the needs of the population.

  1. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

The joint task force (JTF) requires active support and participation by the USG, the HN, nongovernmental organizations, and partners, particularly when it comes to addressing cross-border sanctuary, arms flows, and the root causes of instability. This “team of teams” approach facilitates unified action, which is essential when organizing for operations against an adaptive threat.

  1. Threat Network Characteristics

Threat networks do not differ much from non-threat networks in their functional organization and requirements. Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks involving people have been described as insurgent, criminal, terrorist, social, political, familial, tribal, religious, academic, ethnic, or demographic. Some non-human networks include communications, financial, business, electrical/power, water, natural resources, transportation, or informational. Networks take many forms and serve different purposes, but all are composed of people, processes, places, material, or combinations thereof. Individual network components are identifiable, targetable, and exploitable. Almost universally, humans are members of more than one network, and most networks rely on other networks for sustainment or survival.

Organized threats leverage multiple networks within the OE based on mission requirements or to achieve objectives not unilaterally achievable. The following example shows some typical networks that a threat will use and/or exploit. This “network of networks” is always present and presents challenges to the JFC when planning operations to counter threats that nest within various friendly, neutral, and hostile networks.

  1. Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Survival and success are directly connected to adaptability and the ability to access financial, logistical, and human resources. Networks possess many characteristics important to their success and survival, such as flexible C2 structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt. They must also have a steady stream of resources and may require a sanctuary (safe haven) from which to regroup and plan.

  1. C2 Structure. There are many potential designs for the threat network’s internal organization. Some are hierarchical, some flat, and others may be a combination. The key is that to survive, networks adapt continuously to changes in the OE, especially in response to friendly actions. Commanders must be able to recognize changes in the threat’s C2 structures brought about by friendly actions and maintain pressure to prevent a successful threat reconstitution.
  2. Shared Identity. Shared identity among the membership is normally based on kinship, ideology, religion, and personal relationships that bind the network and facilitate recruitment. These identity attributes can be an important part of current and future identity activities efforts, and analysis can be initiated before hostilities are imminent.
  3. Knowledge, Skills, and Abilities of Group Leaders and Members. All threat networks have varying degrees of proficiency. In initial stages of development, a threat organization and its members may have limited capabilities. An organization’s survival rests on the knowledge, skills, and abilities of its leadership and membership. By seeking out subject matter expertise, financial backing, or proxy support from third parties, an organization can increase its knowledge, skills, and abilities, making it more adaptable and increasing its chance for survival.
  4. Resources. Resources in the form of arms, money, technology, social connectivity, and public recognition are used by threat networks. Identification and systematic strangulation of threat resources is the fundamental principle of CTN. For example, money is one of the critical resources of adversary networks. Denying the adversary its finances makes it harder, and perhaps impossible, to pay, train, arm, feed, and clothe forces, or to gather information and produce propaganda.
  5. Adaptability. This includes the ability to learn and adjust behaviors; modify tactics, techniques, and procedures (TTP); improve communications security and operations security; successfully employ IRCs; and create solutions for safeguarding critical nodes and reconstituting expertise, equipment, funding, and logistics lines that are lost to friendly disruption efforts. Analysts conduct trend analysis and examine key indicators within the OE that might suggest how and why networks will change and adapt. Disruption efforts will often provoke a network to change its methods or practices, but external influences, local relationships and internal friction, geographic and climatic challenges, and global economic factors may also motivate a threat network to change or adapt to survive.
  6. Sanctuary (Safe Havens). Safe havens allow the threat networks to conduct planning, training, and logistic reconstitution. Threat networks require certain critical capabilities (CCs) to maintain their existence, not the least of which are safe havens from which to regenerate combat power and/or areas from which to launch attacks.
  7. Network Engagement
  8. Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks through the building of mutual trust and cooperation through network engagement.
  9. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.
  10. Individuals may be associated with numerous networks due to their unique identities. Examples of these types of identities include location of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. Therefore, it is not uncommon for an individual to be associated with more than one type of network (friendly, neutral, or threat). Individual identities provide the basis for the interrelationships among friendly, neutral, and threat networks. It is this interrelationship that makes categorizing networks a challenge. Classifying a network as friendly or neutral when in fact it is a threat may provide the network with too much freedom or access. Mislabeling a friendly or neutral network as a threat may cause actions to be taken against that network that can have unforeseen consequences.
  11. Networks are comprised of individuals who are involved in a multitude of activities, including social, political, monetary, religious, and personal. These human networks exist in every OE, and therefore network engagement activities will be conducted throughout all phases of the conflict continuum and across the range of operations.
  12. Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups (see Figure III-3). These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Many threat networks rely on family and tribal bonds when recruiting for the network’s inner core. These members have been vetted for years and are almost impossible to turn. For analysts, identifying family and tribal affiliations assists in developing a targetable profile on key network personnel. Even criminal networks will tend to be densely populated by a small number of interrelated identity groups.

  1. Family Network. Some members or associates have familial bonds. These bonds may be cross-generational.
  2. Cultural Network. Network links can share affinities due to culture, which include language, religion, ideology, country of origin, and/or sense of identity. Networks may evolve over time from being culturally based to proximity based.
  3. Proximity Network. The network shares links due to geographical ties of its members (e.g., past bonding in correctional or other institutions or living within specific regions or neighborhoods). Members may also form a network with proximity to an area strategic to their criminal interests (e.g., a neighborhood or key border entry point). There may be a dominant ethnicity within the group, but they are primarily together for reasons other than family, culture, or ethnicity.
  4. Virtual Network. A network whose members may not physically meet but who work together through the Internet or other means of communication, for legitimate or criminal purposes (e.g., online fraud, theft, or money laundering).
  5. Specialized Networks. Individuals in this network come together to undertake specific activities based on the skills, expertise, or particular capabilities they offer. This may include criminal activities.
  6. Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. A network may also be in a state of transition and therefore difficult to classify.

  1. Threat networks

Threat networks may be formally intertwined or come together when mutually beneficial. This convergence (or nexus) between threat networks has greatly increased regional instability and allowed threats and alliances to extend their operational reach and power to global proportions.

  1. Identify a Threat Network

Threat networks often attempt to remain hidden. How can commanders determine not only which networks are within an operational area, but also which pose the greatest threat?

By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify individual networks within. For example, all networks require communications, resources, and people. By understanding the functions of a network, commanders can make educated assumptions as to their makeup and determine not only where they are, but also when and how to engage them. As previously stated, there are many neutral networks that are used by both friendly and threat forces; the difficult part is determining what networks are a threat and what networks are not. The “find” aspect of the find, fix, finish, exploit, analyze, and disseminate (F3EAD) targeting methodology is initially used to discover and identify networks within the OE. The F3EAD methodology is not only used for identifying specific actionable targets; it is also used to uncover the nature, functions, structures, and numbers of networks within the OE. A thorough JIPOE product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

CHAPTER IV

PLANNING TO COUNTER THREAT NETWORKS

  1. Joint Intelligence Preparation of the Operational Environment and Threat Networks
  2. A comprehensive, multidimensional assessment of the OE will assist commanders and staffs in uncovering threat network characteristics and activities, developing focused operations to attack vulnerabilities, better anticipating both the intended and unintended consequences of threat network activities and friendly countermeasures, and determining appropriate means to assess progress toward stated objectives.
  3. Joint force, component, and supporting commands and staffs use JIPOE products to prepare estimates used during mission analysis and selection of friendly courses of action (COAs). Commanders tailor the JIPOE analysis based on the mission. As previously discussed, the best COA may not be to destroy a threat’s entire network or cells; friendly or neutral populations may use the same network or cells, and to destroy it would have a negative effect.
  4. Understanding the Threat’s Network
  5. The threat has its own version of the OE that it seeks to shape to maintain support and attain its goals. In many instances, the challenge facing friendly forces is complicated by the simple fact that significant portions of a population might consider the threat as the “home team.” To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organization goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. The first step in understanding a network is to develop a network profile through analysis of a network’s critical factors.
  6. COG and Critical Factors Analysis (CFA). One of the most important tasks confronting the JFC and staff during planning is to identify and analyze the threat’s network, and in most cases the network’s critical factors (see Figure IV-1) and COGs.
  7. Network Function Template. Building a network function template is a method to organize known information about a network’s structure and functions. By developing a network function template, the information can be initially understood and then used to facilitate CFA. Building a network function template is not a requirement for conducting CFA, but it helps the staff visualize the interactions between functions and supporting structure within a network.
  8. Critical Factors Analysis
  9. CFA is an analytical framework to assist planners in analyzing and identifying a COG and to aid operational planning. The critical factors are the CCs, critical requirements (CRs), and CVs.

Key terminology for CFA includes:

(1) COG. For network analysis, a COG is a conglomeration of tangible items and/or intangible factors that not only motivates individuals to join a network, but also promotes their will to act to achieve the network’s objectives and attain the desired end state. A COG for networks will often be difficult to target directly due to complexity and accessibility.

(2) CCs are the primary abilities essential to accomplishing the objective of the network within a given context. Analysis to identify CCs for a network is only possible with an understanding of the network’s structure and functions, which is supported by other network analysis methods.

(3) CRs are the essential conditions, resources, and means the network requires to perform the CC. These are used or consumed to carry out action, enabling a CC to wholly function. Networks require resources to take action and function; these resources include personnel, equipment, money, and any other commodity that supports the network’s CCs.

(4) CVs are CRs or components thereof that are deficient or vulnerable to neutralization, interdiction, or attack in a manner that achieves decisive results. A network’s CVs will change as networks adapt to conditions within the OE. Identification of CVs for a network should be considered during the targeting process, but may not necessarily be a focal point of operations without further analysis.
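The relationships among the four terms defined above (COG, CC, CR, CV) can be captured in a simple data structure. The sketch below is an illustrative addition, not part of this publication; the field names, class names, and the notional end state and requirements are assumptions used only to show how CRs flagged as vulnerable roll up into a list of candidate CVs.

```python
# Illustrative sketch only: a minimal data structure linking a notional network's
# desired end state, critical capabilities (CCs), critical requirements (CRs), and
# the CRs flagged as critical vulnerabilities (CVs). All names are assumptions.
from dataclasses import dataclass, field

@dataclass
class CriticalRequirement:
    name: str
    vulnerable: bool = False  # set True only after vulnerability analysis

@dataclass
class CriticalCapability:
    name: str
    requirements: list = field(default_factory=list)

@dataclass
class NetworkCFA:
    desired_end_state: str
    capabilities: list = field(default_factory=list)

    def critical_vulnerabilities(self):
        """CRs assessed as vulnerable to neutralization, interdiction, or attack."""
        return [r.name for c in self.capabilities for r in c.requirements if r.vulnerable]

cfa = NetworkCFA(
    desired_end_state="Control smuggling routes in region Y",
    capabilities=[
        CriticalCapability("coordinated attacks", [
            CriticalRequirement("arms and ammunition", vulnerable=True),
            CriticalRequirement("experienced leadership"),
        ]),
        CriticalCapability("sustained funding", [
            CriticalRequirement("foreign donors", vulnerable=True),
        ]),
    ],
)
print(cfa.critical_vulnerabilities())
```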

  1. Building a network function template involves several steps:

(1) Step 1: Identify the network’s desired end state. The network’s desired end state is associated with the catalyst that supported the formation of the network. The primary question that the staff needs to answer is what are the network’s goals? The following are examples of desired end states for various organizations:

(a) Replacing the government of country X with an Islamic caliphate.

(b) Liberating country X.
(c) Controlling the oil fields in region Y.
(d) Establishing regional hegemony.

(e) Imposing Sharia on village Z.

(f) Driving multinational forces out of the region.

 

(2) Step 2: Identify possible ways or actions (COAs) that can attain the desired end state. This step refers to the ways a network can act to attain its desired end state through its COAs. Similar to how staffs analyze a conventional force to determine the likely COA that force will take, this must also be done for the networks selected for engagement. It is important to note that each network will have a variety of options available, and its likely COA will be associated with the intent of the network’s members. Examples of ways for some networks may include:

(a) Conducting an insurgency operation or campaign.
(b) Building PN capacity.
(c) Attacking with conventional military forces.
(d) Conducting acts of terrorism.

(e) Seizing the oil fields in Y.
(f) Destroying enemy forces.
(g) Defending village Z.
(h) Intimidating local leaders.
(i) Controlling smuggling routes.

(j) Bribing officials.

 

(3) Step 3: Identify the functions that the network possesses to take actions. Using the network function template from previous analysis, the staff must refine this analysis to identify the functions within the network that could be used to support the potential ways or COAs for the network. The functions identified result in a list of CCs. Examples of items associated with the functions of a network that would support the example list of ways identified in the previous step are:

(a) Conducting an insurgency operation or campaign: insurgents are armed and can conduct attacks.

(b) Building PN capacity: forces and training capability available.

(c) Attacking with conventional military forces: military forces are at an operational level with C2 in place.

(d) Conducting acts of terrorism: network members possess the knowledge and assets to take action.

(e) Seizing the oil fields in Y: network possesses the capability to conduct coordinated attack.

(f) Destroying enemy forces: network has the assets to identify, locate, and destroy enemy personnel.

(g) Defending village Z: network possesses the capabilities and presence to conduct defense.

(h) Intimidating local leaders: network has freedom of maneuver and access to local leaders.

(i) Controlling smuggling routes: network’s sphere of influence and capabilities allow for control.

(j) Bribing officials: network has access to officials and resources to facilitate bribes.

(4) Step 4: List the means or resources available or needed for the network to execute CCs. The purpose of this step is to determine the CRs for the network. Again, this is supported by the initial analysis conducted for the network, network mapping, link analysis, SNA, and the network function template. Based upon the CCs identified for the network, the staff must answer the question: what resources must the network possess to employ the identified CCs? The list of CRs can be extensive, depending on the capability being analyzed. The following are examples of CRs that may be identified for a network:

(a) A group of foreign fighters.
(b) A large conventional military.
(c) A large conventional military formation (e.g., an armored corps).
(d) IEDs.
(e) Local fighters.
(f) Arms and ammunition.
(g) Funds.
(h) Leadership.
(i) A local support network.

(5) Step 5: Correlate CCs and CRs to OE evaluation to identify critical variables.

(a) Understanding the CCs and CRs for various networks can be used alone in planning and targeting, but the potential to miss opportunities or accept additional risk is not understood until the staff relates these items to the analysis of the OE.

(b) A critical variable may be a CC, CR, or CV for multiple networks. Gaining an understanding of this will occur in the next step of CFA. The following are examples of critical variables that may be identified for networks:

  1. A group of foreign fighters is exposed for potential engagement.
  2. A large conventional military formation (e.g., an armored corps) is located and its likely COA is identified.
  3. IED maker and resources are identified and can be neutralized.
  4. Local fighters’ routes of travel and recruitment are identifiable.
  5. Arms and ammunition sources of supply are identifiable.
  6. Funds are located and potential exists for seizure.
  7. Leadership is identified and accessible for engagement.
  8. A local support network is identified and understood through analysis.

(6) Step 6: Compare and contrast the CRs for each network analyzed. This step of CFA can only be accomplished after full network analysis has been completed for all selected networks within the OE. To compare and contrast, the information from the analysis of each network must be available. Correlating the critical variables for each network allows for understanding the following (a notional comparison sketch appears after this list):

(a) Potential desired first- and second-order effects of engagement.

(b) Potential undesired first- and second-order effects of engagement.

(c) Direct engagement opportunities.
(d) Indirect engagement opportunities.
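The comparison described in Step 6 can be illustrated with a small sketch. It is an illustrative addition only; the network names and CR lists are invented. It simply intersects each pair of CR sets to surface requirements shared by more than one network, where engagement could produce second-order effects beyond the targeted network.

```python
# Illustrative sketch only: comparing critical requirements (CRs) across several
# notional networks to surface shared CRs, where engagement could produce
# second-order effects on more than one network. All names are assumptions.
from itertools import combinations

networks = {
    "insurgent": {"local fighters", "arms and ammunition", "funds", "safe haven"},
    "criminal":  {"smuggling routes", "funds", "corrupt officials"},
    "tribal":    {"local fighters", "tribal leadership", "safe haven"},
}

# CRs shared by two or more networks deserve extra scrutiny before engagement.
for (name_a, crs_a), (name_b, crs_b) in combinations(networks.items(), 2):
    shared = crs_a & crs_b
    if shared:
        print(f"{name_a} / {name_b} share: {sorted(shared)}")
```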

(7) Step 7: Identify CVs for the network. Identifying CVs of a network is completed by analyzing each CR for the network with respect to criticality, accessibility, recuperability, and adaptability. This analysis is conducted from the perspective of the network, with consideration of threats within the OE that may impact the network being analyzed. Conducting the analysis from this perspective allows staffs to identify CVs for any type of network (friendly, neutral, or threat).

(a) Criticality. A CR is critical when its engagement by a threat results in degradation of the network’s structure or function, or of the network’s ability to sustain itself. Criticality considers the importance of the CR to the network, and the following questions should be considered when conducting this analysis:

  1. What impact will removing the CR have on the structure of the network?
  2. What impact will removing the CR have on the functions of the network?
  3. What function is affected by engaging the CR?
  4. What effect does the CR have on other networks?
  5. Is the CR a CR for other networks? If so, which ones?
  6. How is the CR related to conditions of sustainment?

 

(b) Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Accessibility of the CR in some cases is a limiting factor for the true vulnerability of a CR.

The following questions should be considered by the staff when analyzing a CR for accessibility:

  1. Where is the CR?
  2. Is the CR protected?
  3. Is the CR static or mobile?
  4. Who interacts with the CR? How often?
  5. Is the CR in the operational area of the threat to the network?
  6. Can the CR be engaged with threat capabilities?
  7. If the CR is inaccessible, are there alternative CRs that if engaged by a threat result in a similar effect on the network?

(c) Recuperability. The amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Analyzing the CR with regard to recuperability is associated with the network’s ability to regenerate when components of its structure have been removed or damaged. This plays a role in the adaptive nature of a network, but must not be confused with the last aspect of the analysis for CVs. The following questions should be considered by the staff when analyzing a CR for recuperability:

  1. If the CR is removed:
    a. Can the CR be replaced?
    b. How long will it take to replace?
    c. Does the replacement fulfill the network’s structural and functional levels?
    d. Will the network need to make adjustments to implement the replacement for the CR?
  2. If the CR is damaged:
    a. Can the CR be repaired?
    b. How long will it take to repair?
    c. Will the repaired CR return the network to its previous structural and functional levels?

(d) Adaptability. The ability of a network (with which the CR is associated) to change in response to conditions in the OE brought about by the actions of a threat taken against it, while maintaining its structure and function.

Adaptability considers the network’s ability to change or modify its functions, modify its catalyst, shift focus to potential receptive audience(s), or make any other changes to adapt to the conditions in the OE. The following questions should be considered by the staff when analyzing a CR for adaptability:

  1. Can the CR change its structure while maintaining its function?
  2. Is the CR tied to a CC that could cause it to adapt as a normal response to a change in a CC (whether due to hostile engagement or a natural change brought about by a friendly network’s adjustment to that CC)?
  3. Can the CR be changed to fulfill an emerging CC or function for the network?

 

  1. Visualizing Threat Networks
  2. Mapping the Network. Mapping threat networks starts by detailing the primary threats (e.g., terrorist group, drug cartel, money-laundering group). Mapping routinely starts with people and places and then adds functions, resources, and activities.

Mapping starts out as a simple link between two nodes and progresses to depict the organizational structure (see Figure IV-4). Individual network members themselves may not be aware of the organizational structure. It will be rare that enough intelligence and information is collected to portray an entire threat network and all its cells.

This will be a continuous process as the networks themselves transform and adapt to their environment and the joint force operations. To develop and employ theater-strategic options, the commander must understand the series of complex, interconnected relationships at work within the OE.

(1) Chain Network. The chain or line network is characterized by people, goods, or information moving along a line of separated contacts with end-to-end communication traveling through intermediate nodes.

(2) Star or Hub Network. The hub, star, or wheel network, as in a franchise or a cartel, is characterized by a set of actors tied to a central (but not hierarchical) node or actor that must communicate and coordinate with network members through the central node.

(3) All-Channel Network. The all-channel, or full-matrix network, is characterized by a collaborative network of groups where everybody connects to everyone else.
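The three basic topologies described above can be generated directly with standard graph constructors. The sketch below is an illustrative addition; the use of the networkx library and the chosen sizes are assumptions, and the printout simply contrasts how many links each structure needs to connect the same handful of members.

```python
# Illustrative sketch only: the three basic topologies described above, built with
# standard graph generators from the networkx library (an assumption of this sketch).
import networkx as nx

chain = nx.path_graph(5)            # chain/line: end-to-end flow through intermediaries
hub = nx.star_graph(4)              # star/hub/wheel: members coordinate via a central node
all_channel = nx.complete_graph(5)  # all-channel/full-matrix: everyone connects to everyone

for name, g in [("chain", chain), ("hub", hub), ("all-channel", all_channel)]:
    print(f"{name:12s} nodes={g.number_of_nodes()} links={g.number_of_edges()}")
```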

  1. Mapping Multiple Networks. Each network may be different in structure and purpose. Normally the network structure is fully mapped, and cells are shown as they relate to the larger network. It is time- and labor-intensive to map each network, so staffs should carefully consider how much time and effort to allocate toward mapping the supporting networks and where to focus their efforts, so that they both provide a timely response and accurately identify relationships and critical nodes significant for disruption efforts.
  2. Identify the Influencing Factors of the Network. Influencing factors of the network (or various networks) within an OE can be identified largely by the conditions created by the activities of the network. These conditions are what influence the behaviors, attitudes, and vulnerabilities of specific populations. Factors such as threat information activities (propaganda) may be among the major influencers, but so are activities such as kidnapping, demanding protection payments, building places of worship, destroying historical sites, building schools, providing basic services, denying freedom of movement, harassment, illegal drug activities, and prostitution. To identify influencing factors, a proven method is to first look at the conditions of a specific population or group, determine how those conditions create or force behavior, and then determine the causes of the conditions. Once influencing factors are identified, the next step is to determine whether the conditions can be changed and, if they cannot, whether there is alternative, viable behavior available to the population or group.
  3. To produce a holistic view of threat, neutral, and friendly networks as a whole within a larger OE requires analysis to describe how these networks interrelate. Most important to this analysis is describing the relationships within and between the various networks that directly or indirectly affect the mission.
  4. Collaboration. Within most efforts to produce a comprehensive view of the networks, certain types of data or information may not be available to correctly explain or articulate with great detail the nature of relationships, capabilities, motives, vulnerabilities, or communications and movements. It is incumbent upon intelligence organizations to collaborate and share information, data, and analysis, and to work closely with interagency partners to respond to these intelligence gaps.
  5. Targeting Evaluation Criteria

Once the network is mapped, the JFC and staff identify network nodes and determine their suitability for targeting. A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis (see Figure IV-5). CARVER is a subjective and comparative system that weighs six target characteristic factors and ranks them for targeting and planning decisions. CARVER analysis can be used at all three levels of warfare: tactical, operational, and strategic. Once target evaluation criteria are established, target analysts use a numerical rating system (1 to 5) to rank the CARVER factors for each potential target. In a one to five numbering system, a score of five would indicate a very desirable rating while a score of one would reflect an undesirable rating.
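As a minimal sketch of the comparative scoring described above (illustrative only, not part of the source publication), each candidate node can be scored 1 to 5 against the six CARVER factors and ranked by total score. The target names and scores below are hypothetical.

```python
# Illustrative CARVER scoring sketch: 5 = most desirable, 1 = least desirable.
CARVER_FACTORS = ["criticality", "accessibility", "recuperability",
                  "vulnerability", "effect", "recognizability"]

candidate_targets = {
    "communications cell": {"criticality": 5, "accessibility": 3, "recuperability": 4,
                            "vulnerability": 4, "effect": 5, "recognizability": 3},
    "finance facilitator": {"criticality": 4, "accessibility": 4, "recuperability": 4,
                            "vulnerability": 3, "effect": 4, "recognizability": 4},
    "local courier":       {"criticality": 2, "accessibility": 5, "recuperability": 1,
                            "vulnerability": 5, "effect": 2, "recognizability": 4},
}

def carver_total(scores):
    """Sum the six factor scores; a simple comparative ranking aid."""
    return sum(scores[factor] for factor in CARVER_FACTORS)

# Rank candidates from most to least desirable by total score.
ranked = sorted(candidate_targets.items(), key=lambda kv: carver_total(kv[1]), reverse=True)
for name, scores in ranked:
    print(f"{name}: total CARVER score {carver_total(scores)}")
```

The totals are only as good as the subjective factor scores; as the text notes, detailed analysis is what allows planners to assign realistic values.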

A notional network-related CARVER analysis is provided in paragraph 6, “Notional Network Evaluation.” The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors:

  1. Network Affiliations. Network affiliations identify each network of interest associated with the CR being evaluated. The importance of understanding the network affiliations for a potential target stems from the interrelationships between networks. Evaluating a potential target from the perspective of each affiliated network will provide the joint staff with potential second- and third-order effects on both the targeted threat networks and other interrelated networks within the OE.
  2. Criticality. Criticality is the degree to which engaging a CR degrades the network’s structure or function, or its ability to sustain itself. Evaluating the criticality of a potential target must be accomplished from the perspective of the target’s direct association with, or need for, a specific network. Depending on the functions and structure of the network, a potential target’s criticality may differ between networks. Therefore, criticality must be evaluated and assigned a score for each network affiliation. If the analyst has completed CFA for the networks of interest, criticality should have been analyzed during the identification of CVs.
  3. Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Inaccessible CRs may require alternate target(s) to produce desired effects. The accessibility of a potential target will remain the same, regardless of network affiliation. This element of CARVER does not require a separate evaluation of the potential target for each network. Much like criticality, accessibility will have been evaluated if the analyst has conducted CFA for the network as part of the analysis for the network.
  4. Recuperability. Recuperability is the amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Recuperability is analyzed during CFA to determine the vulnerability of a CR for the network. Since CARVER (network) is applied to evaluate the potential targets with each affiliated network, the evaluation for recuperability will differ for each network. What affects recuperability is the network’s function of regenerating members or replacing necessary assets with suitable substitutes.
  5. Vulnerability. A target is vulnerable if the operational element has the means and expertise to successfully attack the target. When determining the vulnerability of a target, the scale of the critical component needs to be compared with the capability of the attacking element to destroy or damage it. The evaluation of a potential target’s vulnerability is supported by the analysis conducted during CFA and can be used to complete this part of the CARVER (network) matrix. Vulnerability of a potential target will consist of only one value. Regardless of the network of affiliation, vulnerability is focused on evaluating available capabilities to effectively conduct actions on the target.
  6. Effect. Effect evaluates the potential impact on the structure, function, and sustainment of each affiliated network when the CR is engaged. The level of effect should consider both the first-order effect on the target itself and the second-order effect on the structure and function of the network.
  7. Recognizability. Recognizability is the degree to which a CR can be recognized by an operational element and/or intelligence collection under varying conditions. The recognizability of a potential target will remain the same, regardless of network of affiliation.
  8. Notional Network Evaluation
  9. The purpose of conventional target analysis (and the use of CARVER) is to determine enemy critical systems or subsystems to attack to progressively destroy or degrade the adversary’s warfighting capacity and will to fight.
  10. Using network analysis, a commander identifies the critical threat nodes operating within the OE. A CARVER analysis determines the feasibility of attacking each node (ideally simultaneously). While each CARVER value is subjective, detailed analysis allows planners to assign a realistic value.

The commander and the staff then look at other aspects of the network and, for example, determine whether they can disrupt the material needed for training, prevent the movement of trainees or trainers to the training location, or influence other groups to deny access to the area.

  1. The JFC and staff methodically analyze each identified network node and assign a numerical rating to each. In this notional example (see Figure IV-7), it is determined that the communications cells and those who finance threat operations provide the best targets to attack.
  2. Planning operations against threat networks does not differ from standard military planning. These operations still support the JFC’s broader mission and rarely stand alone. Identifying threat networks requires detailed analysis and consideration of second- and third-order effects. It is important to remember that the threat organization itself is the ultimate target, and its networks are merely a means to achieve that. Neutralizing a given network may prove more beneficial to the JFC’s mission accomplishment than destroying a single multiuser network node. The most effective plans call for simultaneous operations against networks focused on multiple nodes and network functions.
  3. Countering Threat Networks Through the Planning of Phases

As previously discussed, commanders execute CTN activities across all levels of warfare.

Threat networks can be countered using a variety of approaches and means. Early in the operation or campaign, the concept of operations will be based on a synchronized and integrated international effort (USG, PNs, and HN) to ensure that conditions in the OE do not empower a threat network and to deny the network the resources it requires to expand its operations and influence. As the threat increases and conditions deteriorate, the plan will adjust to include a broader range of actions, and an increase in the level and focus of targeting of identified critical network nodes: people and activities. Constant pressure must be maintained on the network’s critical functions to deny them the initiative and disrupt their operating tempo.

Figure IV-8 depicts the notional operation plan phase construct for joint operations. Some phases may not be used during CTN activities.

  1. Shape (Phase 0)

(1) Unified action is the key to shaping the OE. The goal is to deny the threat network the resources needed to expand its operations and to reduce it to the point where it no longer poses a direct threat to regional/local stability, while influencing the network to reduce or redirect its threatening objectives. Shaping operations against threat networks consist of efforts to influence their objectives and to dissuade growth, state sponsorship, sanctuary, or access to resources through the unified efforts of interagency, regional, and international partners as well as HN civil authorities. Actions are taken to identify key elements in the OE that can be used to leverage support for the government or other friendly networks and that must be controlled to deny the threat an operational advantage. The OE must be analyzed to identify support for the threat network, as well as that for the relevant friendly and neutral networks. Interagency/international partners help to identify the network’s key components, deny access to resources (usually external to the country), and persuade other actors (legitimate and illicit) to discontinue support for the threat. SIGINT, open-source intelligence (OSINT), and human intelligence (HUMINT) are the primary intelligence sources of actionable information. The legitimacy of the government must be reinforced in the operational area. Efforts to reinforce the government seek to identify the sources of friction within the society that can be reduced through government intervention.

Many phase I shaping activities need to be coordinated during phase 0 due to extensive legal and interagency requirements.

Due to competing resources and the potential lack of available IRCs, executing IO during phase 0 can be challenging. For this reason, consideration must be given to how IRCs can be integrated as part of the whole-of-government approach to effectively shape the information environment and achieve the commander’s information objectives.

Shaping operations may also include security cooperation activities designed to strengthen PN or regional capabilities and capacity that contribute to greater stability. Shaping operations should focus on changing the conditions that foster the development of adversaries and threats.

(2) During phase 0 (shaping), the J-2’s threat network analysis initially provides a broad description of the structure of the underlying threat organization; identifies the critical functions, nodes, and the relationships between the threat’s activities and the greater society; and paints a picture of the “on-average” relationships.

Some of the CTN actions require long-term and sustained efforts, such as addressing recruitment in targeted communities through development programming. It is essential that the threat is decoupled from support within the affected societies. Critical elements in the threat’s operational networks must be identified and disrupted to affect their operating tempo. Even when forces are committed, the commander continues to shape the OE using various means to eliminate the threat and undertake actions, in cooperation with interagency and multinational partners, to reinforce the legitimate government in the eyes of the population.

(3) The J-2 seeks to identify and leverage information sources that can provide details on the threat network and its relationship to the regional/local political, economic, and social structures that can support and sustain it.

(4) Sharing information and intelligence with partners is paramount since collection, exploitation, and analysis against threat networks requires much greater time than against traditional military adversaries. Information sharing with partners must be balanced with operations security and cannot be done in every instance. Intelligence sharing between CCDRs across regional and functional seams provides a global picture of threat networks not bound by geography. Intelligence efforts within the shaping phase show threat network linkages in terms of leadership, organization, size, scope, logistics, financing, alliances with other networks, and membership.

  1. Deter (Phase I). The intent of this phase is to deter threat network action, formation, or growth by demonstrating partner, allied, multinational, and joint force capabilities and resolve. Many actions in the deter phase include security cooperation activities and IRCs and/or build on security cooperation activities from phase 0. Increased cooperation with partners and allies, multinational forces, interagency and interorganizational partners, international organizations, and NGOs assists in increasing information sharing and provides greater understanding of the nature, capabilities, and linkages of threat networks.

The joint force enhances deterrence through unified action by collaborating with all friendly elements and by creating a friendly network of organizations and people with far-reaching capabilities and the ability to respond with pressure at multiple points against the threat network.

Phase I begins with coordination activities to influence threat networks on multiple fronts.

Deterrent activities executed in phase I also prepare for phase II by conducting actions throughout the OE to isolate threat networks from sanctuary, resources, and information networks and increase their vulnerability to later joint force operations.

  1. Seize Initiative (Phase II). JFCs seek to seize the initiative through the application of joint force capabilities across multiple LOOs.

Destruction of a single node or cell might do little to impact network operations when assessed against the cost of operations and/or the potential for collateral damage.

As in traditional offensive operations against a traditional adversary, various operations create conditions for exploitation, pursuit, and ultimate destruction of those forces and their will to fight.

  1. Dominate (Phase III). The dominate phase against threat networks focuses on creating and maintaining overwhelming pressure against network leadership, finances, resources, narrative, supplies, and motivation. This multi-front pressure should include diplomatic and economic pressure at the strategic level and informational pressure at all levels.

These pressures are then synchronized with military operations conducted throughout the OE and at all levels of warfare to achieve the same result as traditional operations: to shatter enemy cohesion and will. Operations against threat networks are characterized by dominating and controlling the OE through a combination of traditional warfare, irregular warfare, sustained employment of interagency capabilities, and IRCs.

  1. Stabilize (Phase IV). The stabilize phase is required when there is no fully functioning, legitimate civilian governing authority present or the threat networks have gained political control within a country or region. In cases where the threat network is government aligned, its defeat in phase III may leave that government intact, and stabilization or enablement of civil authority may not be required. After neutralizing or defeating the threat networks (which may have been functioning as a shadow government), the joint force may be required to unify the efforts of other supporting/contributing multinational, international organization, NGO, or USG department and agency participants into stability activities to provide local governance, until legitimate local entities are functioning.
  2. Enable Civil Authority (Phase V). This phase is predominantly characterized by joint force support to legitimate civil governance in the HN. Depending upon the level of HN capacity, joint force activities during phase V may be at the behest or direction of that authority. The goal is for the joint force to enable the viability of the civil authority and its provision of essential services to the largest number of people in the region. This includes coordinating joint force actions with supporting or supported multinational and HN agencies and continuing integrated finance operations and security cooperation activities to favorably influence the target population’s attitude regarding local civil authority’s objectives.

CHAPTER V

ACTIVITIES TO COUNTER THREAT NETWORKS

“Regional players almost always understand their neighborhood’s security challenges better than we do. To make capacity building more effective, we must leverage these countries’ unique skills and knowledge to our collect[ive] advantage.”

General Martin Dempsey, Chairman of the Joint Chiefs of Staff

Foreign Policy, 25 July 2014, The Bend of Power

 

  1. The Challenge

A threat network can be operating for years in the background and suddenly explode on the scene. Identifying and countering potential and actual threat networks is a complex challenge.

  1. Threat networks can take many forms and have many distinct participants, from terrorists to criminal organizations to insurgents, locally or transnationally based…

Threat networks may leverage technologies, social media, global transportation and financial systems, and failing political systems to build a strong and highly redundant support system. Operating across a region provides the threat with a much broader array of resources, safe havens, and the flexibility both to react when attacked and to prosecute its own attacks.

To counter a transnational threat, the US and its partners must pursue multinational cooperation and joint operations to achieve disruption, cooperating with HNs within a specified region to fully identify, describe, and mitigate, through multilateral operations, the transnational networks that threaten the entire region and not just individual HNs.

  1. Successful operations are based on the ability of friendly forces to develop and apply a detailed understanding of the structure and interactions of the OE to the planning and execution of a wide array of capabilities to reinforce the HN’s legitimacy and neutralize the threat’s ability to threaten that society.
  2. Targeting Threat Networks
  3. As the first step in targeting any threat network, the commander and staff must understand the desired condition of the threat network as it relates to the commander’s objectives and desired end state.
  4. The desired military end state is directly related to conditions of the OE. Interrelated human networks comprise the human aspect of the OE, which includes the threat networks that are to be countered. The actual targeting of threat networks begins early in the planning process, since all actions taken must support achieving the commander’s objectives and attaining the end state. To feed the second phase of the targeting cycle, the threat network must be analyzed using network mapping, link analysis, SNA, CFA, and nodal analysis.
  5. The second phase of the joint targeting cycle is intended to begin the development of target lists for potential engagement. JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, CCs (network’s functions), and CRs for the network.

Similar to developing an assessment plan for operations as part of the planning process, the metrics for assessing networks must be developed early in the targeting cycle.

  1. Networks operate as integrated entities—the whole is greater than the sum of its parts. Identifying and targeting the network and its functional components requires patience. A network will go to great lengths to protect its critical components. However, the interrelated nature of network functions means that an attack on one node may have a ripple effect as the network reconstitutes.

Whenever a network reorganizes or adapts, it can expose a larger portion of its members (nodes), relationships (links), and activities. Intelligence collection should be positioned to exploit any effects from the targeting effort, which in turn must be continuous and multi-nodal.

  1. The analytical products for threat networks support the decision of targets to be added to or removed from the target list and specifics for the employment of capabilities against a target. The staff should consider the following questions when selecting targets to engage within a threat network:

(1) Who or what to target? Network analysis provides the commander and staff with the information to prioritize potential targets. Depending on the effect desired for a network, the selected node for targeting may be a person, key resource, or other physical object that is critical in producing a specific effect on the network.

(2) What are the effects desired on the target and network? Understanding the conditions in the OE and the future conditions desired to achieve objectives supports a decision on what type of effects are desired on the target and the threat network as a whole. The desired effects on the threat network should be aligned with the commander’s intent and should support the objectives or conditions of the threat network needed to meet the desired end state.

(3) How will those desired effects be produced? The array of lethal and nonlethal capabilities may be employed with the decision to engage a target, whether directly or indirectly. In addition to the ability to employ conventional weapons systems, staffs must consider nonlethal capabilities that are available.

  1. Desired Effects on Networks
  2. Damage effects on an enemy or adversary from lethal fires are classified as light, moderate, or severe. Network engagement takes into consideration the effects of both lethal and nonlethal capabilities.
  3. When commanders decide to generate an effect on a network through engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The intended result of shaping these conditions is to support achieving the commander’s objectives. The desired effects are selected based on the commander’s vision of the future conditions of the threat networks and the OE required to achieve objectives.

Terms that are used to describe the desired effects of CTN include:

(1) Neutralize. Neutralize is a tactical mission task that results in rendering enemy personnel or materiel incapable of interfering with a particular operation. The threat network’s structure exists to facilitate its ability to perform functions that support achieving its objectives. Neutralization of an entire network may not be feasible, but through analysis, the staff has the ability to identify key parts of the threat network’s structure to target that will result in the neutralization of specific functions that may interfere with a particular operation.

(2) Degrade. To degrade is to reduce the effectiveness or efficiency of a threat. The effectiveness of a threat network is associated with its ability to function as desired to achieve the threat’s objectives. Countering the effectiveness of a network may be accomplished by eliminating CRs that the network requires to facilitate an identified CC, identified through the application of CFA for the network.

(3) Disrupt. Disrupt is a tactical mission task in which a commander integrates direct and indirect fires, terrain, and obstacles to upset an enemy’s formation or tempo, interrupt the enemy’s timetable, or cause enemy forces to commit prematurely or attack in a piecemeal fashion. From the perspective of disrupting a threat network, the staff should consider the type of operation being conducted, the specific functions of the threat network, the conditions within the OE that can be leveraged, and the potential application of both lethal and nonlethal capabilities. Additionally, the staff should consider how long the disruption of the threat network will last and what opportunities it will present for friendly forces to exploit. Should the disruption result in the elimination of key nodes from the network, the staff must also consider the network’s means of reconstituting and the time necessary to do so.

(4) Destroy. Destroy is a tactical mission task that physically renders an enemy force combat ineffective until it is reconstituted. Alternatively, to destroy a combat system is to damage it so badly that it cannot perform any function or be restored to a usable condition without being entirely rebuilt. Destroying a threat network that is adaptive and transnationally established is an extreme challenge that requires the full collaboration of DOD and intergovernmental agencies, as well as coordination with partnered nations. Isolated destruction of cells may be more plausible and could be accomplished with the comprehensive application of lethal and nonlethal capabilities. Detailed analysis of the cell is necessary to establish a baseline (pre-operation conditions) in order to assess if operations have resulted in the destruction of the selected portion of a network.

(5) Defeat. Defeat is a tactical mission task that occurs when a threat network or enemy force has temporarily or permanently lost the physical means or the will to fight. The defeated force’s commander or leader is unwilling or unable to pursue that individual’s adopted COA, thereby yielding to the friendly commander’s will, and can no longer interfere to a significant degree with the actions of friendly forces. Defeat can result from the use of force or the threat of its use. Defeat manifests itself in some sort of physical action, such as mass surrenders, abandonment of positions, equipment and supplies, or retrograde operations. A commander or leader can create different effects against an enemy to defeat that force.

(6) Deny. Deny is an action to hinder or deny the enemy the use of territory, personnel, or facilities, to include destruction, removal, contamination, or erection of obstructions. An example of deny is destroying the threat’s communications equipment as a means of denying its use of the electromagnetic spectrum. However, the duration of denial will depend on the enemy’s ability to reconstitute.

(7) Divert. To divert is to turn aside or from a path or COA. A diversion is the act of drawing the attention and forces of a threat from the point of the principal operation; an attack, alarm, or feint diverts attention. Diversion causes threat networks or enemy forces to consume resources or capabilities critical to threat operations in a way that is advantageous to friendly operations. Diversions draw the attention of threat networks or enemy forces away from critical friendly operations and prevent threat forces and their support resources from being employed for their intended purpose.

  1. Engagement Strategies
  2. Counter Resource. A counter-resource approach can progressively weaken the threat’s ability to conduct operations in the OE and require the network to seek a suitable substitute to replace eliminated or constrained resources. Like a military organization, a threat’s network or a threat’s organization is more than its C2 structure. It must have an assured supply of recruits, food, weapons, and transportation to maintain its position and grow. While the leadership provides guidance to the network, it is the financial and logistical infrastructure that sustains the network. Most threat networks are transnational in nature, drawing financial support, material support, and recruits from a worldwide audience.
  3. Decapitation. Decapitation is the removal of key nodes within the network that are functioning as leaders. Targeting leadership is designed to impact the C2 of the network. Detailed analysis of the network may provide the staff with an indication of how long the network will require to replace leadership once they are removed from the network. From a historical perspective, the removal of a single leader from an adaptive human network has resulted in short-term effects on the network.

When targeting the nodes, links, and activities of threat networks, the JFC should consider the second- and third-order effects on friendly and neutral groups that share network and cell functions. Additionally, the ripple effects throughout the network and its cells should be considered.
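The sketch below (illustrative only, not drawn from this publication) shows one way the structural effect of removing a single node, as in the decapitation and fragmentation strategies discussed here, might be estimated with the networkx library. The edge list and node names are hypothetical.

```python
# Illustrative sketch: structural effect of removing one node from a mapped network.
import networkx as nx

g = nx.Graph()
g.add_edges_from([
    ("leader", "financier"), ("leader", "cell_1_lead"), ("leader", "cell_2_lead"),
    ("cell_1_lead", "fighter_a"), ("cell_1_lead", "fighter_b"),
    ("cell_2_lead", "fighter_c"), ("financier", "supplier"),
])

def fragmentation_effect(graph, node):
    """Compare component count and largest-component size before and after removal."""
    after = graph.copy()
    after.remove_node(node)
    largest_after = (len(max(nx.connected_components(after), key=len))
                     if after.number_of_nodes() > 0 else 0)
    return {
        "components_before": nx.number_connected_components(graph),
        "components_after": nx.number_connected_components(after),
        "largest_component_before": len(max(nx.connected_components(graph), key=len)),
        "largest_component_after": largest_after,
    }

# Removing a highly connected node fragments the structure more than removing a
# peripheral one, but this says nothing about how quickly the network reconstitutes.
print(fragmentation_effect(g, "leader"))
print(fragmentation_effect(g, "fighter_a"))
```

Structural fragmentation is only one input; as noted above, ripple effects on friendly and neutral groups that share network functions must also be weighed.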

  1. Fragmentation. A fragmentation strategy is the surgical removal of key nodes of the network that produces a fragmented effect on the network with the intent to disrupt the network’s ability to function. Although fragmenting the network will result in immediate effects, the staff must consider when this type of strategy is appropriate. Elimination of nodes within the network may have impacts on collection efforts, depending on the node being targeted.
  2. Counter-Messaging. Threat networks form around some type of catalyst that motivates individuals from a receptive audience to join a network. The challenging aspect of a catalyst is that individuals will interpret and relate to it in their own manner. While there may be trends among members of the network who relate to the catalyst in a similar manner, this perspective is not accurate for all members of the network. Threat networks have embraced the ability to project their own messages using a number of social media sites. These messages support their objectives and are used as a recruiting tool for new members. Countering the threat network’s messages is one aspect of countering a threat network.
  3. Targeting
  4. At the tactical level, the focus is on executing operations targeting nodes and links. Accurate, timely, and relevant intelligence supports this effort. Tactical units use this intelligence along with their procedures to conduct further analysis, template, and target networks.
  5. Targeting of threat network CVs is driven by the situation, the accuracy of intelligence, and the ability of the joint force to quickly execute various targeting options to create the desired effects. In COIN operations, high-priority targets may be individuals who perform tasks that are vulnerable to detection/exploitation and impact more than one CR.

Timing is everything when attacking a network, as opportunities for attacking identified CVs may be limited.

  1. CTN targets can be characterized as targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will make related to the JFC’s intent, key nodes such as high-value individuals, or longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time to exploit them. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

(1) Dynamic Targeting. A time-sensitive targeting cell consisting of operations and intelligence personnel with direct access to engagement means and the authority to act on pre-approved targets is an essential part of any network targeting effort. Dynamic targeting facilitates the engagement of targets that have been identified too late or not selected in time to be included in deliberate targeting and that meet criteria specific to achieving the stated objectives.

(2) Deliberate Targeting. The joint fires cell is tasked to look at an extended timeline for threats and the overall working of threat networks. With this type of deliberate investigation into threat networks, the cell can identify catalysts to the threat network’s operations and sustainment that had not traditionally been targeted on a large scale.

  1. The joint targeting cycle supports the development and prosecution of threat networks. Land and maritime force commanders normally use an interrelated process to enhance joint fire support planning and interface with the joint targeting cycle known as the decide, detect, deliver, and assess (D3A) methodology. D3A incorporates the same fundamental functions of the joint targeting cycle as the find, fix, track, target, engage, and assess (F2T2EA) process and functions within phase 5 of the joint targeting cycle. The D3A methodology facilitates synchronizing maneuver, intelligence, and fire support. The F2T2EA and F3EAD methodologies support dynamic targeting. While the F3EAD model was developed for personality-based targeting, it can only be applied once the JFC has approved the joint integrated prioritized target list. Depending on the situation, multiple methodologies may be required to create the desired effect.
  2. F3EAD. F3EAD facilitates the targeting not only of individuals when timing is crucial, but also, more importantly, the generation of follow-on targets through timely exploitation and analysis. F3EAD facilitates synergy between operations and intelligence as it refines the targeting process. It is a continuous cycle in which intelligence and operations feed and support each other. It helps to:

(1) Analyze the threat network’s ideology, methodology, and capabilities, and template its inner workings: personnel, organization, and activities.

(2) Identify the links between enemy CCs and CRs and observable indicators of enemy action.

(3) Focus and prioritize dedicated intelligence collection assets.

(4) Provide the resulting intelligence and products to elements capable of rapidly conducting multiple, near-simultaneous attacks against the CVs.

(5) Provide an ability to visualize the OE and array and synchronize forces and capabilities.

  1. The F3EAD process is optimized to facilitate targeting of key nodes and links in tier I (enemy top-level leadership, for example) and tier II (enemy intermediaries who interact with the leaders and establish links with facilitators within the population). Tier III individuals (the low-skilled foot soldiers who are part of the threat) may be easy to reach and provide an immediate result, but they are a distraction from success because they are easy to replace and their elimination is only a temporary inconvenience to the enemy. F3EAD can be used for any network function that is a time-sensitive target.
  2. The F3EAD process relies on close coordination among operational planning, intelligence collection, and tactical execution. Tactical forces should be augmented by a wide array of specialists to facilitate on-site exploitation and possible follow-on operations. Exploitation of captured materials and personnel will normally involve functional specialists from higher and even national resources. The goal is to quickly conduct exploitation and facilitate follow-on targeting of the network’s critical nodes.
  3. Targeting Considerations
  4. There is no hard-and-fast rule for allocating network targets by echelon. The primary consideration is how to create the desired effect against the network as a whole.

Generally network targets fall into one of three categories: individual targets, group targets, and organizational targets.

  1. An objective of network targeting may be to deny the threat its freedom of action and maneuver by maintaining constant pressure through unpredictable actions against the network’s leadership and critical functional nodes. It is based on selecting the right means, or combination thereof, to neutralize the target while minimizing collateral effects.
  2. While material targets can be disabled, denied, destroyed, or captured, humans and their interrelationships or links are open to a broader range of engagement options by friendly forces. For example, when the objective is to neutralize the influence of a specific group, it may require a combination of tasks to create the desired effect.
  3. Lines of Effort by Phase
  4. Targeting is a continuous and evolving process. As the threat adjusts to joint force activities, joint force intelligence collection and targeting must also adjust. Employing a counter-resource (logistical, financial, and recruiting) approach should increase the amount of time it will take for the organization to regroup. It may also force the threat to employ its hidden resources to fill the gaps, thus increasing the risk of detection and exploitation. During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks (see Figure V-6). However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE. The simplified model in Figure V-6 is illustrative rather than a list of specific planning steps.
  5. During phase 0, analysis provides a broad description of the structure of the underlying threat organization, identifies the critical functions and nodes, and identifies the relationships between the threat’s activities and the greater society.

These forces provide a foundation of information about the region to include very specific information that falls into the categories of PMESII. Actions against the network may include targeting of the threat’s transnational resources (money, supply, safe havens, recruiting); identifying key leadership; providing resources to facilitate PNs and regional efforts; shaping international and national populations’ opinions of friendly, neutral, and threat groups; and isolating the threat from transnational allies.

  1. During phase I, CTN activities seek to provide a more complete picture of the conditions in the OE. Forces already employed in theater may be leveraged as sources of information to help build a more detailed picture. New objectives may emerge as part of phase I, and forces deployed to help achieve those objectives contribute to the developing common operational picture. A network analysis is conducted to identify a target array that will keep the threat network off balance through multi-nodal attack operations.
  2. During phase II, CTN activities concentrate on developing previously identified targets, positioning intelligence collection to exploit effects, and continuing to refine the description of the threat and its supporting network.
  3. During phase III, CTN activities are characterized by increased physical contact and a sizable ramp-up in a variety of intelligence and information collection assets. The focus is on identifying, exploiting, and targeting the clandestine core of the network. Intelligence collection assets and specialized analytical capabilities provide around the clock support to committed forces. Actions against the network continue and feature a ramp-up in resource denial; key leaders and activities are targeted for elimination; and constant multi-nodal pressure is maintained. Activities continue to convince neutral networks of the benefits of supporting the government and dissuade threat sympathizers from providing continued support to threat networks. Ultimately, the network is isolated from support and its ability to conduct operations is severely diminished.
  4. During phase IV, CTN activities focus on identifying, exploiting, and targeting the clandestine core of the network for elimination. Intelligence collection assets and specialized analytical capabilities continue to provide support to committed forces; the goal is to prevent the threat from recovering and regrouping.
  5. During phase V, CTN activities continue to identify, exploit, and target the clandestine core of the network for elimination and to identify the threat network’s attempts to regroup and reestablish control.
  6. Theater Concerns in Countering Threat Networks
  7. Many threat networks are transnational, recruiting, financing, and operating on a global basis. These organizations cooperate on a global basis when necessary to further their respective goals.
  8. In developing their CCMD campaign plans, CCDRs need to be aware of the complex relationships that characterize networks and leverage whole-of-government resources to identify and analyze networks, to include their relationships with, or membership in, known friendly, neutral, or threat networks. Militaries are interested in the activities of criminal organizations because these organizations provide material support to insurgent and terrorist organizations that also conduct criminal activities (e.g., kidnapping, smuggling, extortion). By tracking criminal organizations, the military may identify linkages (material and financial) to the threat network, which in turn might become a target.
  9. Countering Threat Networks Through Military Operations and Activities

Some threat networks may prefer to avoid direct confrontation with law enforcement and military forces. Activities associated with military operations at any level of conflict can have a direct or indirect impact on threats and their supporting networks.

  1. Operational Approaches to Countering Threat Networks
  2. There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus of the operation. In others, a balanced approach through multiple LOOs and LOEs may be necessary, ensuring that civilian concerns are met while protecting them from the threat networks’ operators.

In all CTN activities, lethal actions directed against the network should also be combined with nonlethal actions to support the legitimate government and persuade neutrals to reject the adversary.

 

  1. Effective CTN requires a deep understanding of the interrelationships among all the networks within an operational area, determining the desired effect(s) against each network and its nodes, and gathering and leveraging all available resources and capabilities to execute operations.

A CHANGING ENVIRONMENT—THE CONVERGENCE OF THREAT NETWORKS

Transnational organized crime penetration of states is deepening, leading to co-option of government officers in some nations and weakening of governance in many others. Transnational organized crime networks insinuate themselves into the political process through bribery and in some cases have become alternate providers of governance, security, and livelihoods to win popular support.

In fiscal year 2010, 29 of the 63 top drug trafficking organizations identified by the Department of Justice had links to terrorist organizations. While many terrorist links to transnational organized crime are opportunistic, this nexus is dangerous, especially if it leads a transnational organized crime network to facilitate the transfer of weapons of mass destruction or the transportation of nefarious actors or materials into the US.

CHAPTER VI

ASSESSMENTS

Commanders and their staffs will conduct assessments to determine the impact CTN activities may have on the targeted networks. Other networks, including friendly and neutral networks, within the OE must also be considered during planning, operations, and assessments.

Threat networks will adapt visibly and invisibly even as collection, analysis, and assessments are being conducted, which is why assessments over time that show trends are much more valuable in CTN activities than a single snapshot over a short time frame.

  1. Complex Operational Environments

Complex geopolitical environments, difficult causal associations, and the challenge of both quantitative and qualitative analysis to support decision making all complicate the assessment process. When only partially visible threat networks are spread over large geographic areas, among the people, and are woven into friendly and neutral networks, assessing the effects of joint force operations requires as much operational art as the planning process.

  1. Assessment of Operations to Counter Threat Networks
  2. CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed.

Dynamic interaction among friendly, threat, and neutral networks makes assessing many aspects of CTN activities difficult. As planners assess complex human behaviors, they draw on multiple sources across the OE, including analytical and subjective measures, which support an informed assessment.

  1. Real-time network change detection is extremely challenging, and conclusions with high levels of confidence are rare. Since threat networks are rapidly adaptable, technological systems used to support collection often struggle at monitoring change. Additionally, the large amounts of information collected require resources (people) and time for analysis. It is difficult to determine how networks change, and even more challenging to determine whether network changes are the result of joint force actions and, if so, which actions or combined actions are effective. A helpful indicator used in assessment comes when threat networks leverage social networks to coordinate and conduct operations, as it provides an opportunity to gain a greater understanding of the motivation and ideology of these networks. If intelligence analysts can tap into near real-time information from threat network entities, then that information can often be geospatially fused to create a better assessment. This is dependent on having access to accurate network data, the ability to analyze the data quickly, and the ability to detect deception.
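A minimal sketch of trend-oriented change detection (illustrative only, not part of this publication) is to compare simple structural metrics between time-indexed snapshots of a mapped network. The edge lists below are hypothetical; as noted above, real change detection must also account for collection gaps and deception.

```python
# Illustrative sketch: comparing two snapshots of a mapped network over time.
import networkx as nx

snapshot_t0 = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "E")])
snapshot_t1 = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "F"), ("F", "G")])

def snapshot_metrics(g):
    """A few coarse structural measures for trend tracking."""
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "density": round(nx.density(g), 3),
    }

m0, m1 = snapshot_metrics(snapshot_t0), snapshot_metrics(snapshot_t1)
delta = {key: m1[key] - m0[key] for key in m0}

# Nodes present in only one snapshot may indicate recruitment, attrition,
# or simply uneven collection coverage.
appeared = set(snapshot_t1) - set(snapshot_t0)
disappeared = set(snapshot_t0) - set(snapshot_t1)
print(delta, "appeared:", appeared, "disappeared:", disappeared)
```

Tracking such deltas across many reporting periods supports the trend-over-time assessments the text favors over single snapshots.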

  1. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for HUMINT. Collection of HUMINT is time-consuming and reliability of sources can be problematic, but if employed properly and cross-cued with other disciplines, it is extremely valuable in irregular warfare. Tactical unit reporting such as patrol debriefs and unit after action reports when assimilated across an OE may provide the most valuable information on assessing the impact of operations.

OSINT will often be more valuable than other sources in assessing operations against threat networks and may be the single greatest source of intelligence.

  1. Operation Assessment
  2. The assessment process is a continuous cycle that seeks to observe and evaluate the ever-changing OE and inform decisions about the future, making operations more effective. Baselining is critical in phase 0 and the initial JIPOE process for assessments to be effective.

Assessments feed back into the JIPOE process to maintain tempo in the commander’s decision cycle. This is a continuous process, and the baseline resets for each cycle. Change is constant within the complex OE and when operating against multiple, adaptive, interconnected threat networks.

  1. Commanders establish priorities for assessment through their planning guidance, commander’s critical information requirements (CCIRs), and decision points. Priority intelligence requirements, a component of CCIR, detail exactly what data the intelligence collection plan should be seeking to inform the commander regarding threat networks.

CTN activities may require assessing multiple MOEs and measures of performance (MOPs), depending on threat network activity. As an example, JFCs may choose to neutralize or disrupt one type of network while conducting direct operations against another network to destroy it.

  1. Assessment precedes and guides every operation process activity and concludes each operation or phase of an operation. Like any cycle, assessment is continuous. The assessment process is not an end unto itself; it exists to inform the commander and improve the operation’s progress.
  2. Integrated successfully, assessment in CTN activities will:

(1) Depict progress toward achieving the commander’s objectives and attaining the commander’s end state.

(2) Help in understanding how the OE is changing due to the impact of CTN activities on threat network structures and functions.

(3) Inform the commander’s decision making for operational design and planning, prioritization, resource allocation, and execution.

(4) Produce actionable recommendations that inform the commander where to devote resources along the most effective LOOs and LOEs.

  1. Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate.

For additional guidance, refer to Multi-Service Tactics, Techniques, and Procedures for Operation Assessment.

  1. Organize the Data

(1) Based on the OE and the operation plan or campaign plan, the commander and staff develop objectives and assessment criteria to determine progress. The organize activity includes ensuring that indicators are included within the collection plan, that information collected and analyzed by the intelligence section is organized using an information management plan, and that information is readily available to the staff to conduct the assessment. Multiple threat networks within an OE may require multiple MOPs, MOEs, metrics, and branches to the plan. Threat networks operating collaboratively or against each other complicate the assessment process. If threat networks conduct operations or draw resources from outside the operational area, there will be a greater reliance on other CCDRs or interagency partners for data and information.

Within the context of countering threat networks, example objective, measures of effectiveness (MOEs), and indicators could be:

Objective: Threat network resupply operations in “specific geographic area” are disrupted.

MOE: Suppliers to threat networks cease providing support.

Indicator 1: Fewer trucks leaving supply depots.

Indicator 2: Guerrillas/terrorists change the number of engagements or length of engagement times to conserve resources.

Indicator 3: Increased threat network raids on sites containing resources they require (grocery stores, lumber yards, etc.).
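One way to organize this objective-MOE-indicator hierarchy for trend tracking is sketched below. This is illustrative only and not from the source publication; the field names and observation values are hypothetical.

```python
# Illustrative sketch: objective -> MOE -> indicators, with values tracked over time.
from dataclasses import dataclass, field

@dataclass
class Indicator:
    text: str
    observations: list = field(default_factory=list)  # (reporting period, value) pairs

    def trend(self):
        """Crude trend: change between the first and last observed values."""
        if len(self.observations) < 2:
            return None
        return self.observations[-1][1] - self.observations[0][1]

@dataclass
class MOE:
    text: str
    indicators: list

objective = "Threat network resupply operations in the specified area are disrupted."
moe = MOE(
    text="Suppliers to threat networks cease providing support.",
    indicators=[
        Indicator("Trucks leaving supply depots"),
        Indicator("Engagement frequency/length (resource conservation)"),
        Indicator("Raids on resource sites (grocery stores, lumber yards)"),
    ],
)

# Hypothetical reporting across three periods for the first indicator.
moe.indicators[0].observations += [("week 1", 40), ("week 2", 34), ("week 3", 27)]
print(objective)
print(moe.text, "-> truck-count trend:", moe.indicators[0].trend())
```

Keeping indicators tied to specific MOEs and reporting periods supports the trend-based assessment emphasized in this chapter.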

(2) Metrics must be collectable, relevant, measurable, timely, and complementary. The process uses assessment criteria to evaluate task performance at all levels of warfare to determine progress of operations toward achieving objectives. Both qualitative and quantitative analyses are required. With threat networks, assessing direct impacts alone may not be enough; indirect impacts must also be assessed for a holistic picture. For example, operations against a network’s financial resources may be best judged by analyzing the quality of equipment that the network is able to deploy in the OE.

  1. Analyze the Data

(1) Analyzing data is the heart of the assessment process for CTN activities. Baselining is critical to support analysis. Baselining should not only be rooted in the initial JIPOE, but should go back to GCC theater intelligence collection and shaping operations. Understanding how threat networks formed and adapted prior to joint force operations provides assessors a significantly better baseline and assists in developing indicators.

(2) Data analysis seeks to answer essential questions:

(a) What happened to the threat network(s) as a result of joint force operations? Specific examples may include the following: How have links changed? How have nodes been affected? How have relationships changed? What was the impact on structure and functions? Specifically, what was the impact on operations, logistics, recruiting, financing, and propaganda?

(b) What operations caused this effect directly or indirectly? (Why did it happen?) It is likely that multiple instruments of national power efforts across several LOOs and LOEs impacted the threat network(s), and it is unlikely that a direct cause and effect will be discernible.

Analysts must be aware of the danger of searching for a trend that may not be evident. Events may sometimes have dramatic effects on threat networks, but not be visible to outside/foreign/US observers.

(c) What are the likely future opportunities to counter the threat network, and what are the risks to neutral and friendly networks? CTN activities should target CVs. Interdiction operations, for example, may create future opportunities to disrupt finances. Cyberspace operations may target Internet propaganda and create opportunities to reduce the appeal of threat networks to neutral populations.

(d) What needs to be done to apply pressure at multiple points across the instruments of national power (diplomatic, informational, military, and economic) to the targeted threat networks to attain the JFC’s desired military end state?

(3) Military units find stability tasks to be the most challenging to analyze since they are conducted among a civilian population. Adding a social dynamic complicates use of mathematical and deterministic formulas when human nature and social interactions play a major part in the OE. Overlaps between threat networks and neutral networks, such as the civilian population, complicate assessments and the second- and third-order effects analysis.

(4) The proximate cause of effects in complex situations can be difficult to determine. Even direct effects in these situations can be more difficult to create, predict, and measure, particularly when they relate to moral and cognitive issues (such as religion and the “mind of the adversary,” respectively). Indirect effects in these situations often are difficult to foresee. Indirect effects often can be unintended and undesired since there will always be gaps in our understanding of the OE. Unpredictable third-party actions, unintended consequences of friendly operations, subordinate initiative and creativity, and the fog and friction of conflict will contribute to an uncertain OE. Simply determining undesired effects on threat networks requires a greater degree of critical thinking and qualitative analysis than traditional operations. Undesired effects on neutral and friendly networks cannot be ignored.

(5) Statistical analysis is necessary and allows large volumes of data to be analyzed, but critical thinking must precede its use, and qualitative analysis must accompany any conclusions. SNA is a form of statistical analysis of human networks that has proven to be a particularly valuable tool in understanding network dynamics and showing network changes over time, but it must be complemented by other types of analysis and traditional intelligence analysis. It can support the JIPOE process as well as the planning, targeting, and assessment processes. SNA requires significant data collection, and since threat networks are difficult to collect on and may adapt unseen, it must be used in conjunction with other tools.
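As a minimal sketch of the kind of SNA measures referred to here (illustrative only, not from the source publication), degree and betweenness centrality can be computed on a hypothetical link diagram with the networkx library to suggest, not prove, which nodes act as hubs or brokers. Results still require qualitative analysis and traditional intelligence analysis.

```python
# Illustrative SNA sketch: centrality measures on a hypothetical link diagram.
import networkx as nx

links = [("facilitator", "cell_a"), ("facilitator", "cell_b"),
         ("cell_a", "fighter_1"), ("cell_a", "fighter_2"),
         ("cell_b", "fighter_3"), ("facilitator", "financier"),
         ("financier", "external_donor")]
g = nx.Graph(links)

degree = nx.degree_centrality(g)            # how many direct ties a node has
betweenness = nx.betweenness_centrality(g)  # how often a node sits on shortest paths

# Nodes with high betweenness are candidate brokers between parts of the network.
for node in sorted(g, key=lambda n: betweenness[n], reverse=True):
    print(f"{node}: degree={degree[node]:.2f}, betweenness={betweenness[node]:.2f}")
```

Because collection is incomplete and networks adapt, these scores describe the mapped picture of the network, not necessarily the network itself.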

  1. Communicate the Assessment

(1) The assessment of CTN activities is only valuable to the commander and other participants if it is effectively communicated in a format that allows for rapid changes to LOOs/LOEs and operational and tactical actions for CTN activities.

(2) Communicating the CTN assessment clearly and concisely with sufficient information to support the staff’s recommendations, but not too much trivial detail, is challenging.

(3) Well-designed CTN assessment products show changes in indicators describing the OE and the performance of organizations as it relates to CTN activities.

 

APPENDIX A

DEPARTMENT OF DEFENSE COUNTER THREAT FINANCE

1. Introduction

  1. JFCs face adaptive networked threats that rapidly adjust their operations to offset friendly force advantages and pose a wide array of challenges across the range of military operations.

CTN activities are a focused approach to understanding and operating against adaptive networked threats such as terrorism, insurgency, and organized crime. CTF refers to the activities and actions taken by the JFC to deny, disrupt, destroy, or defeat the generation, storage, movement, and use of assets that fund activities supporting a threat network’s ability to negatively affect the JFC’s attainment of the desired end state. Disrupting threat network finances decreases the threat network’s ability to achieve its objectives. Those objectives can range from sophisticated communications systems that support international propaganda programs, to structures that facilitate obtaining funding from foreign-based sources, to foreign-based cell support, to more local requirements to pay, train, arm, feed, and equip fighters. Disrupting threat network finances decreases their ability to conduct operations that threaten US personnel, interests, and national security.

  1. CTF activities against threat networks should be conducted with an understanding of the OE, in support of the JFC’s objectives, and nested with other counter threat network operations, actions, and activities. CTF activities cause the threat network to adjust its financial operations by disrupting or degrading its methods, routes, movement, and source of revenue. Understanding that financial elements are present at all levels of a threat network, CTF activities should be considered when developing MOEs during planning with the intent of forecasting potential secondary and tertiary effects.
  2. Effective CTF operations depend on developing an understanding of the functional organization of the threat network, the threat network’s financial capabilities, methods of operation, methods of communication, and operational areas, and upon detecting how revenue is raised, moved, stored, and used.
  3. Key Elements of Threat Finance
  4. Threat finance is the manner in which adversarial groups raise, move, store, and use funds to support their activities. Following the money and analyzing threat finance networks is important to:

(1) Identify facilitators and gatekeepers.
(2) Estimate threat networks’ scope of funding.
(3) Identify modus operandi.
(4) Understand the links between financial networks.
(5) Determine geographic movement and location of financial networks.

(6) Capture and prosecute members of threat networks.

  1. Raising Money. Fund-raising through licit and illicit channels is the first step in being able to carry out or support operations. This includes raising funds to pay for such mundane items as food, lodging, transportation, training, and propaganda. Raising money can involve network activity across local and international levels. It is useful to look at each source of funding as separate nodes that fit into a much larger financial network. That network will have licit and illicit components.

(1) Funds can be raised through illicit means, such as drug and human trafficking, arms trading, smuggling, kidnapping, robbery, and arson.

(2) Alternatively, funds can be raised through ostensibly legal channels. Threat networks can receive funds from legitimate humanitarian and business organizations and individual donations.

(3) Legitimate funds are commingled with illicit funds destined for threat networks, making it extremely difficult for governments to track threat finances in the formal financial system. Such transactions are perfectly legal until they can be linked to a criminal or terrorist act. Therefore, these transactions are extremely hard to detect in the absence of other indicators or the identification of the persons involved.

  1. Moving Money. Moving money is one of the most vulnerable aspects of the threat finance process. To make illicit money usable to threat networks, it must be laundered. This can be done through the use of front companies, legitimate businesses, cash couriers, or third parties that may be willing to take on the risk in exchange for a cut of the profits. The first steps of laundering are called “placement” and “layering.”

(1) During the placement stage, the acquired funds or assets are placed into a local, national, or international financial system for future use. This is necessary if the generated funds or assets are not in a form useable by their recipient, e.g., converting cash to wire transfers or checks.

(2) During the layering stage, numerous transactions are conducted with the assets or proceeds to create distance between the origination of the funds or assets and their eventual destination. Distance is created by moving money through several accounts, businesses or people, or by repeatedly converting the money or asset into a different form.

  1. Storing Money. Money or goods that have successfully been moved to a location accessible to the threat network may need to be stored until they are ready to be spent.
  2. Using Money. Once a threat network has raised, moved, and stored its money, it is able to spend it. This is called “integration.” Roughly half of the money initially raised will go to operational expenses and the cost of laundering the money to convert it to usable funds. During integration, the funds or assets are placed at the disposal of the threat network for use or re-investment in other licit and illicit operations.
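As an illustration of the lifecycle just described (raise, move, store, and use, with placement, layering, and integration as the laundering steps), the sketch below represents hypothetical transfers as tagged records; the names, amounts, and stage labels are invented for illustration and do not reflect any prescribed data model.

```python
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):
    RAISE = "raise"              # licit or illicit fund-raising
    PLACEMENT = "placement"      # entry of funds into a financial system
    LAYERING = "layering"        # transactions that create distance from the source
    STORAGE = "storage"          # funds held until needed
    INTEGRATION = "integration"  # funds spent or re-invested

@dataclass
class Transfer:
    source: str
    destination: str
    amount: float
    stage: Stage

# Hypothetical ledger of reported transfers, tagged by lifecycle stage
ledger = [
    Transfer("kidnapping_ransom", "local_cell", 100_000, Stage.RAISE),
    Transfer("local_cell", "front_company", 100_000, Stage.PLACEMENT),
    Transfer("front_company", "exchange_house", 80_000, Stage.LAYERING),
    Transfer("exchange_house", "regional_reserve", 60_000, Stage.STORAGE),
    Transfer("regional_reserve", "fighter_pay", 50_000, Stage.INTEGRATION),
]

# Rough view of how much value is observed at each stage of the lifecycle
for stage in Stage:
    total = sum(t.amount for t in ledger if t.stage == stage)
    print(f"{stage.value:12s} {total:>10,.0f}")
```

Comparing the value recorded at the raise stage with the value that reaches integration gives a rough sense of how much is consumed by laundering and operating costs along the way.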
  3. Planning Considerations
  4. CTF requires the integration of the efforts of disparate organizations in a whole-of-government approach in a complex environment. Joint operation/campaign plans and operation orders should be crafted so that the core competencies of the various agencies and military activities are coordinated, and their resources integrated, when and where appropriate, with those of others to achieve the operational objectives.
  5. The JFC and staff need to understand the impact that changes to the OE will have on CTF activities. The adaptive nature of threat networks will force changes to the network’s business practices and operations based on the actions of friendly networks within the OE. This understanding can lead to the creation of a more comprehensive, feasible, and achievable plan.
  6. CTF planning will identify the organizations and entities that will be required to conduct CTF actions and activities.
  7. Intelligence Support Requirements
  8. CTF activities require detailed, timely, and accurate intelligence of threat networks’ financial activities to inform planning and decision making. Analysts can present the JFC with a reasonably accurate scope of the threat network’s financial capabilities and impact probabilities if they have a thorough understanding of the threat network’s financial requirements and what the threat network is doing to meet those requirements.
  9. JFCs should identify intelligence requirements for threat finance-related activities to establish collection priorities prior to the onset of operations.
  10. Intelligence support can focus on following the money by tracking the generation, storage, movement, and use of funds, which may provide additional insight into threat network leadership activities and other critical components of the threat network’s financial business practices. Trusted individuals or facilitators within the network often handle the management of financial resources. These individuals and their activities may lead to the identification of CVs within the network and decisive points for the JFC to target the network.
  11. Operation
  12. DOD may not always be the lead agency for CTF. Frequently the efforts and products of CTF analysis will be used to support criminal investigations or regulatory sanction activities, either by the USG or one of its partners. This can prove advantageous as contributions from other components can expand and enhance an understanding of threat financial networks. Threat finance activities can have global reach and are generally not geographically constrained. At times much of the threat finance network, including potentially key nodes, may extend beyond the JFC’s operational area.
  13. Military support to CTF is not a distinct type of military operation; rather, it comprises military activities directed against a specific network capability: the business and financial processes used by an adversary network.

(1) Major Operations. CTF can reduce or eliminate the adversary’s operational capability by reducing or eliminating its ability to pay troops and to procure weapons, supplies, intelligence, recruitment, and propaganda capabilities.

(2) Arms Control and Disarmament. CTF can be used to disrupt the financing of trafficking in small arms, IED or WMD proliferation and procurement, research to develop more lethal or destructive weapons, hiring technical expertise, or providing physical and operational security.

(6) DOD Support to CD Operations. The US military may conduct training of PN/HN security and law enforcement forces, assist in the gathering of intelligence, and participate in the targeting and interception of drug shipments. Disrupting the flow of drug profits through CTF activities supports these efforts.

(7) Enforcement of Sanctions. CTF encompasses all forms of value transfer to the adversary, not just currency. DOD organizations can provide assistance to organizations that are interdicting the movement of goods and/or any associated value remittance as a means to enforce sanctions.

(8) COIN. CTF can be used to counter, disrupt, or interdict the flow of value to an insurgency. Additionally, CTF can be used against corruption, as well as drug and other criminal revenue-generating activities that fund or fuel insurgencies and undermine the legitimacy of the HN government. In such cases, CTF is aimed at insurgent organizations as well as other malevolent actors in the environment.

(9) Peace Operations. In peace operations, CTF can be used to stem the flow of external sources of support to conflicts to contain and reduce the conflict.

  1. Military support tasks to CTF can fall into four broad categories:

(1) Support civil agency and HN activities (including law enforcement):

(a) Provide Protection. US military forces may provide overwatch for law enforcement or PN/HN military CTF activities.

(b) Provide Logistics. US military forces may provide transportation, especially tactical movement-to-objective support, to law enforcement or PN/HN military CTF activities.

(c) Provide Command, Control, and Communications Support. US military forces may provide information technology and communications support to civilian agencies or PN/HN CTF personnel. This support may include provision of hardware and software, encryption, bandwidth, configuration support, networking, and account administration and cybersecurity.

(2) Direct military actions:

(a) Capture/Kill. US military forces may, with the support of mission partners as necessary, conduct operations to capture or kill key members of the threat finance network.

(b) Interdiction of Value Transfers. US military forces may, with the support of mission partners, conduct operations to interdict value transfers to the threat network as necessary. This may be a raid to seize cash from an adversary safe house, foreign exchange house, hawala, or other type of informal remittance system; seizure of electronic media, including mobile banking systems (commonly known as “red SIMs”) and computer systems that contain payment and communications data, including cryptocurrency or virtual-exchange records; interdiction to stop the smuggling of goods used in trade-based money laundering; or command and control flights to provide aerial surveillance of drug-smuggling aircraft in support of law enforcement interdiction.

(c) Training HN/PN Forces. US military forces may provide training to PN/HN CTF personnel under specific authorities.

(3) Intelligence Collection. US military forces may conduct all-source intelligence operations, which will deal primarily with the collection, exploitation, analysis, and reporting of CTF information. These operations may involve deploying intelligence personnel to collect HUMINT and the operations of ships at sea and forces ashore to collect SIGINT, OSINT, and GEOINT.

(4) Operations to Generate Information and Intelligence. Occasionally, US military forces may conduct operations either with SOF or conventional forces designed to provoke a response by the adversary’s threat finance network for the purpose of collecting information or intelligence on that network. These operations are pre-planned and carefully coordinated with the intelligence community to ensure the synchronization and posture of the collection assets as well as the operational forces.

  1. Threat Finance Cells

(1) Threat finance cells can be established at any level based on available personnel resources. Expertise on adversary financial activities can be provided through the creation of threat finance cells at brigade headquarters and higher. The threat finance cell would include a mix of analysts and subject matter experts on law enforcement, regulatory matters, and financial institutions that would be drawn from DOD and civil USG agency resources. The threat finance cell’s responsibilities vary by echelon. At division and brigade, the threat finance cell:

(a) Provides threat finance expertise and advice to the commander and staff.

(b) Assists the intelligence staff in the development of intelligence collection priorities focused on adversary financial and support systems that terminate in the unit’s operational area.

(c) Consolidates information on persons providing direct or indirect financial, material, and logistics support to adversary organizations in the unit’s operational area.

(d) Provides information concerning adversary exploitation of US resources such as transportation, logistical, and construction contractors working in support of US facilities; exploitation of NGO resources; and exploitation of supporting HN personnel.

(e) Identifies adversary organizations coordinating or cooperating with local criminals, organized crime, or drug trafficking organizations.

(f) Provides assessments of the adversary’s financial viability (ability to fund, maintain, and grow operations) and the implications for friendly operations.

(g) Develops targeting package recommendations for adversary financial and logistics support persons for engagement by lethal and nonlethal means.

(h) Notifies commanders when there are changes in the financial or support operations of the adversary organization, which could indicate changes in adversary operating tempo or support capability.

(i) Coordinates and shares information with other threat finance cells to build a comprehensive picture of the adversary’s financial activities.

(2) At the operational level, the joint force J-2 develops and maintains an understanding of the OE, which includes economic and financial aspects. If established, the threat finance cell supports the J-2 in developing and maintaining an understanding of the economic and financial environment of the HN and surrounding countries to assist in detecting and tracking illicit financial activities and in understanding where financial support is coming from, how that support is being moved into the operational area, and how it is being used. The threat finance cell:

(a) Works with the J-2 to develop threat finance-related priority intelligence requirements and establish threat finance all-source intelligence collection priorities. The threat finance cell assists the J-2 in the detection, identification, tracking, analysis, and targeting of adversary personnel and networks associated with financial support across the operational area.

(b) The threat finance cell coordinates with tactical and theater threat finance cells and shares information with those entities as well as multinational forces, HN, and as appropriate and in coordination with the joint force J-2, the intelligence community.

(c) The threat finance cell, in coordination with the J-2, establishes a financial network picture for all known adversary organizations in the operational area; establishes individual portfolios or target packages for persons identified as providing financial or material support to the adversary’s organizations in the operational area; identifies adversary financial TTP for fund-raising, transfer mechanisms, distribution, management and control, and disbursements; and identifies and distributes information on fund-raising methods that are being used by specific groups in the area of operations. The threat finance cell can also:

  1. Identify specific financial institutions that are involved with or that are providing financial support to the adversary and how those institutions are being exploited by the adversary.
  2. Provide CTF expertise on smuggling and cross border financial and logistics activities.
  3. Establish and maintain information on adversary operating budgets in the area of operation to include revenue streams, operating costs, and potential additions, or depletions, to strategic or operational reserves.
  4. Targets identified by the operational-level threat finance cell are shared with the tactical threat finance cells. This allows the tactical threat finance cells to support and coordinate tactical units to act as an action arm for targets identified by the operational-level CTF organization, and coordinate tactical intelligence assets and sources against adversary organizations identified by the operational-level CTF organization.
  5. Multi-echelon information sharing is critical to unraveling the complexities of an adversary’s financial infrastructure. Operational-level CTF organizations require the detailed financial intelligence that is typically obtained by resources controlled by the tactical organizations.
  6. The operational-level threat finance cell facilitates the provision of support by USG and multinational organizations at the tactical level. This is especially true for USG departments and agencies that have representation at the American Embassy.

(3) Tactical-level threat finance cells will require support from the operational level to obtain HN political support to deal with negative influencers that can only be influenced or removed by national-level political leaders, including governors, deputy governors, district leads, agency leadership, chiefs of police, shura leaders, elected officials and other persons serving in official positions; HN security forces; civilian institutions; and even NGOs/charities that may be providing the adversary with financial and logistical support.

(4) The threat finance cell should be integrated into the battle rhythm. Battle rhythm events should meet the following criteria (a simple illustrative sketch follows this list):

(a) Name of board or cell: Descriptive and unique.

(b) Lead staff section: Who receives, compiles, and delivers information.

(c) When/where does it meet in battle rhythm: Allocation of resources (time and facilities), and any collaborative tool requirements.

(d) Purpose: Brief description of the requirement.

(e) Inputs required from: Staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams required to provide products (once approved, these become specified tasks).

(f) When? Suspense for inputs.

(g) Output/process/product: Products and links to other staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams.

(h) Time of delivery: When outputs will be available.

(i) Membership: Who has to attend (task to staff to provide participants and representatives).
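A minimal sketch of how a staff might capture these criteria as a structured record is shown below; the field names and the example working group are hypothetical, not a prescribed format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class BattleRhythmEvent:
    name: str                        # (a) descriptive and unique
    lead_staff_section: str          # (b) who receives, compiles, and delivers information
    when_where: str                  # (c) place in the battle rhythm and collaborative tools
    purpose: str                     # (d) brief description of the requirement
    inputs_required_from: List[str]  # (e) staff sections and cells tasked to provide products
    input_suspense: str              # (f) suspense for inputs
    outputs: List[str] = field(default_factory=list)     # (g) products and links to other elements
    time_of_delivery: str = ""                            # (h) when outputs will be available
    membership: List[str] = field(default_factory=list)  # (i) who has to attend

# Hypothetical example of a threat finance working group entry
threat_finance_wg = BattleRhythmEvent(
    name="Threat Finance Working Group",
    lead_staff_section="Threat finance cell (supporting the J-2)",
    when_where="Weekly, joint operations center, shared collaboration portal",
    purpose="Synchronize CTF targeting recommendations and collection priorities",
    inputs_required_from=["J-2", "J-3", "J-5", "interagency liaisons"],
    input_suspense="24 hours prior to the meeting",
    outputs=["Updated CTF target nominations", "Revised collection requirements"],
    time_of_delivery="End of the meeting day",
    membership=["J-2", "J-3 representative", "legal advisor", "interagency partners"],
)
print(threat_finance_wg.name, "-", threat_finance_wg.purpose)
```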

 

  1. Assessment
  2. JFCs should know the importance and use of CTF capabilities within the context of measurable results for countering adversaries and should embed this knowledge within their staff. By assessing common elements found in adversaries’ financial operations, such as composition, disposition, strength, personnel, tactics, and logistics, JFCs can gain an understanding of what they might encounter while executing an operation and identify vulnerabilities of the adversary. Preparing a consolidated, whole-of-government set of metrics for threat finance will be extremely challenging.
  3. Metrics on threat finance may appear to be of little value because it is very difficult to obtain fast results or intelligence that is immediately actionable. Actions against financial networks may take months to prepare, organize, and implement, due to the difficulty of collecting relevant detailed information and the time lags associated with processing, analyzing, and reporting findings on threat financial networks.
  4. The JFC’s staff should assess the adversary’s behaviors based on the JFC’s desired end state and determine whether the adversary’s behavior is moving closer to that end state.
  5. The JFC and staff should consult with participating agencies and nations to establish a set of metrics which are appropriate to the mission or LOOs assigned to the CTF organization.

APPENDIX B

THE CONVERGENCE OF ILLICIT NETWORKS

  1. The convergence of illicit networks (e.g., criminals, terrorists, and insurgents) describes the state or degree to which two or more organizations, elements, or individuals approach or interrelate. Conflicts in Iraq and Afghanistan have seen a substantial increase in cooperative arrangements among illicit networks to further their respective interests. From the Taliban renting their forces out to provide security for drug operations to al-Qaida using criminal organizations to smuggle resources, temporary cooperative arrangements are now a routine aspect of CTN operations.
  2. The US intelligence community has concluded that transnational organized crime has grown significantly in size, scope, and influence in recent years. A public summary of the assessment identified a convergence of terrorist, criminal, and insurgent networks as one of five key threats to US national security. Terrorists and insurgents increasingly have and will continue to turn to crime to generate funding and will acquire logistical support from criminals, in part because of successes by USG departments and agencies and PNs in attacking other sources of their funding, such as from state sponsors. In some instances, terrorists and insurgents prefer to conduct criminal activities themselves; when they cannot do so, they turn to outside individuals and facilitators. Some criminal organizations have adopted terrorist organizations’ practice of extreme and widespread violence in an overt effort to intimidate governments and populations at various levels.
  3. To counter threat networks, it is imperative to understand the converging nature of the relationship among terrorist groups, insurgencies, and transnational criminal organizations. The proliferation of these illicit networks and their activities globally threaten US national security interests. Together, these groups not only destabilize environments through violence, but also become dominant actors in shadow economies, distorting market forces. Indications are that although the operations and objectives of criminal groups, insurgents, and terrorists differ, these groups interact on a regular basis for mutually beneficial reasons. They each pose threats to state sovereignty. They share the common goals of ensuring that poorly governed and post-conflict countries have ineffective laws and law enforcement, porous borders, a culture of corruption, and lucrative criminal opportunities.

Organized crime has traditionally been treated as a law enforcement concern rather than a national security concern. The convergence of organized criminal networks with other non-state actors requires a more sophisticated, interactive, and comprehensive response that takes into account the dynamics of the relationships and adapts to the shifting tactics employed by the various threat networks.

  1. Mounting evidence suggests that the modus operandi of these entities increasingly overlaps and that the interactions among them are on the rise. This spectrum of convergence (Figure B-1) has received increasing attention in law enforcement and national security policy-making circles. Until recently, the prevalent view was that terrorists and insurgents were clearly distinguishable from organized criminal groups by their motivations and the methods used to achieve their objectives. Terrorist and insurgent groups use or threaten to use extreme violence to attain political ends, while organized criminal groups are primarily motivated by profit. Today, these distinctions are no longer useful for developing effective diplomatic, law enforcement, and military strategies, simply because the lines between them have become blurred and the security issues have become intertwined.

The convergence of organized criminal networks and other illicit non-state actors, whether for short-term tactical partnerships or broader strategic imperatives, requires a much more sophisticated response or unified approach, one that takes into account the evolving nature of the relationships as well as the environmental conditions that draw them together.

  1. The convergence of illicit networks has provided law enforcement agencies with a broader mandate to combat terrorism. Labeling terrorists as criminals undermines the reputation of terrorists as freedom fighters with principles and a clear political ideology, thereby hindering their ability to recruit members or raise funds.

Just as redefining terrorists as criminals damages their reputation, it might at other times prove useful to redefine criminals as terrorists, as in the case of the Haqqani network in Afghanistan. For instance, this change in terminology might make additional resources, such as those of the military or the intelligence services, available to law enforcement agencies, thereby making law enforcement more effective.

  1. However, there are some limitations associated with the latter approach. The adage that one person’s terrorist is another’s freedom fighter holds true. This difference of opinion therefore renders it difficult for states to cooperate in joint CT operations.
  2. The paradigm of fighting terrorism, insurgency, and transnational crime separately, utilizing distinct sets of authorities, tools, and methods, is not adequate to meet the challenges posed by the convergence of these networks into a criminal-terrorist-insurgency conglomeration. While the US has maintained substantial long-standing efforts to combat terrorism and transnational crime separately, the government has been challenged to evaluate whether the existing array of authorities, responsibilities, programs, and resources sufficiently responds to the combined criminal-terrorism threat. Common foreign policy options have centered on diplomacy, foreign assistance, financial actions, intelligence, military action, and investigations. At issue is how to conceptualize this complex illicit networks phenomenon and oversee the implementation of cross-cutting activities that span geographic regions, functional disciplines, and a multitude of policy tools that are largely dependent on effective interagency coordination and international cooperation.
  3. Terrorist Organizations
  4. Terrorism is the unlawful use of violence or threat of violence, often motivated by religious, political, or other ideological beliefs, to instill fear and coerce governments or societies in pursuit of goals that are usually political.
  5. In addition to increasing law enforcement capabilities for CT, the US, like many nations, has developed specialized, but limited, military CT capabilities. CT actions are activities and operations taken to neutralize terrorists and their organizations and networks to render them incapable of using violence to instill fear and coerce governments or societies to achieve their goals.
  6. Insurgencies
  7. Insurgency is the organized use of subversion and violence to seize, nullify, or challenge political control of a region. Insurgency uses a mixture of subversion, sabotage, political, economic, psychological actions, and armed conflict to achieve its political aims. It is a protracted politico-military struggle designed to weaken the control and legitimacy of an established government, a military occupation government, an interim civil administration, or a peace process while increasing insurgent control and legitimacy.
  8. COIN is a comprehensive civilian and military effort designed to simultaneously defeat and contain insurgency and address its root causes. COIN is primarily a political struggle and incorporates a wide range of activities by the HN government, of which security is only one element, albeit an important one. Unified action is required to successfully conduct COIN operations and should include all HN, US, and multinational partners.
  9. Of the groups designated as FTOs by DOS, the vast majority possess the characteristics of an insurgency: an element of the larger group is conducting insurgent-type operations, or the group is providing assistance in the form of funding, training, or fighters to another insurgency. Colombia’s government and the Revolutionary Armed Forces of Colombia reached an agreement to enter into peace negotiations in 2012, taking another big step toward ending the 50-year-old insurgency.
  10. The convergence of illicit networks contributes to the undermining of the fabric of society. Since the proper response to this kind of challenge is effective civil institutions, including uncorrupted and effective police, the US must be capable of deliberately applying unified action across all instruments of national power in assisting allies and PNs when asked.
  11. Transnational Criminal Organizations
  12. From the National Security Strategy, combating transnational criminal and trafficking networks requires a multidimensional strategy that safeguards citizens, breaks the financial strength of criminal and terrorist networks, disrupts illicit trafficking networks, defeats transnational criminal organizations, fights government corruption, strengthens the rule of law, bolsters judicial systems, and improves transparency.
  13. Transnational criminal organizations are self-perpetuating associations of individuals that operate to obtain power, influence, monetary and/or commercial gains, wholly or in part by illegal means. These organizations protect their activities through a pattern of corruption and/or violence or protect their illegal activities through a transnational organizational structure and the exploitation of transnational commerce or communication mechanisms.

Transnational criminal networks are not only expanding operations, but they are also diversifying activities, creating a convergence of threats that has become more complex, volatile, and destabilizing. These networks also threaten US interests by forging alliances with corrupt elements of national governments and using the power and influence of those elements to further their criminal activities. In some cases, national governments exploit these relationships to further their interests to the detriment of the US.

  1. The convergence of illicit networks continues to grow as global sanctions affect the ability of terrorist organizations and insurgencies to raise funds to conduct their operations.
  2. Although drug trafficking still represents the most lucrative illicit activity in the world, other criminal activities, particularly human and arms trafficking, have also expanded. As a consequence, international criminal organizations have gone global; drug trafficking organizations linked to the Revolutionary Armed Forces of Colombia, for example, have agents in West Africa.
  3. As the power and influence of these organizations has grown, their ability to undermine, corrode, and destabilize governments has increased. The links forged between these criminal groups, terrorist movements, and insurgencies have resulted in a new type of threat: ever-evolving networks that exploit permissive OEs and the seams and gaps in policy and application of unified action to conduct their criminal, violent, and politically motivated activities. Threat networks adapt their structures and activities faster than countries can combat their illicit activities. In some instances, illicit networks are now running criminalized states.

 

Drawing the necessary distinctions and differentiations [between coexistence, cooperation, and convergence] allows the necessary planning to begin in order to deal with the matter, not only in the Sahel, but across the globe:

By knowing your enemies, you can find out what it is they want. Once you know what they want, you can decide whether to deny it to them and thereby demonstrate the futility of their tactics, give it to them, or negotiate and give them a part of it in order to cause them to end their campaign. By knowing your enemies, you can make an assessment not just of their motives but also their capabilities and of the caliber of their leaders and their organizations.

It is often said that knowledge is power. However, in isolation knowledge does not enable us to understand the problem or situation. Situational awareness and analysis is required for comprehension, while comprehension and judgment is required for understanding. It is this understanding that equips decision makers with the insight and foresight required to make effective decisions.

Extract from Alda, E., and Sala, J. L., Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region. Stability: International Journal of Security and Development, 10 September 2014.

 

APPENDIX C

COUNTERING THREAT NETWORKS IN THE MARITIME DOMAIN

  1. Overview

The maritime domain connects a myriad of geographically dispersed nodes of friendly, neutral, and threat networks, and serves as the primary conduit for nearly all global commerce. The immense size, dynamic environments, and legal complexities of this domain create significant challenges to establishing effective maritime governance in many regions of the world.

APPENDIX D

IDENTITY ACTIVITIES SUPPORT TO COUNTERING THREAT NETWORK OPERATIONS

  1. Identity activities are a collection of functions and actions that recognize and differentiate one person from another to support decision making. Identity activities include the collection of identity attributes and physical materials and their processing and exploitation.
  2. Identity attributes are the biometric, biographical, behavioral, and reputational data collected during encounters with an individual and across all intelligence disciplines that can be used alone or with other data to identify an individual. The processing and analysis of these identity attributes result in the identification of individuals, groups, networks, or populations of interest and facilitate the development of I2 products (a simple illustrative sketch follows the list below) that allow an operational commander to:

(1) Identify previously unknown threat identities.

(2) Positively link identity information, with a high degree of certainty, to a specific human actor.

(3) Reveal the actor’s pattern of life and connect the actor to other persons, places, materials, or events.

(4) Characterize the actor’s associates’ potential level of threat to US interests.
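The sketch below illustrates, in greatly simplified form, how identity attributes collected at separate encounters might be fused into a single profile when a shared biometric identifier is found; every field, identifier, and the matching rule itself are hypothetical and far cruder than actual I2 analysis.

```python
from collections import defaultdict

# Hypothetical encounter reports, each carrying a biometric identifier and
# biographic/behavioral attributes collected at the point of encounter.
encounters = [
    {"encounter_id": "E-001", "fingerprint_id": "FP-17", "name_reported": "A. Karim",
     "location": "checkpoint 4", "behavior": "frequent border crossings"},
    {"encounter_id": "E-002", "fingerprint_id": "FP-17", "name_reported": "Abdul K.",
     "location": "market district", "behavior": "meets known facilitator"},
    {"encounter_id": "E-003", "fingerprint_id": "FP-22", "name_reported": "unknown",
     "location": "port", "behavior": "none noted"},
]

# Simplistic matching rule: encounters sharing the same fingerprint identifier
# are fused into one profile. Real I2 matching is far more rigorous.
profiles = defaultdict(lambda: {"names": set(), "locations": [], "behaviors": []})
for enc in encounters:
    profile = profiles[enc["fingerprint_id"]]
    profile["names"].add(enc["name_reported"])
    profile["locations"].append(enc["location"])
    profile["behaviors"].append(enc["behavior"])

for biometric_id, profile in profiles.items():
    print(biometric_id, sorted(profile["names"]), profile["locations"], profile["behaviors"])
```

The fused profiles illustrate how multiple encounters can resolve to a single actor and begin to reveal a pattern of life, which is the purpose of the I2 products described above.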

  1. I2 fuses identity attributes and other information and intelligence associated with those attributes collected across all disciplines. I2 and DOD law enforcement criminal intelligence products are crucial to commanders’, staffs’, and components’ ability to identify and select specific threat individuals as targets, associate them with the means to create desired effects, and support the JFC’s operational objectives.
  2. Identity Activities Considerations
  3. Identity activities leverage enabling intelligence activities to help identify threat actors by connecting individuals to other persons, places, events, or materials, analyzing patterns of life, and characterizing capability and intent to harm US interests.
  4. The joint force J-2 is normally responsible for production of I2 within the CCMD.

(1) I2 products are normally developed through the JIPOE process and provide detailed information about threat activity identities in the OE. All-source analysis, coupled with identity information, significantly enhances understanding of the location of threat actors and provides detailed information about threat activity and potential high-threat areas within the OE. I2 products enable improved force protection, targeted operations, enhanced intelligence collection, and coordinated planning.

  1. Development of I2 requires coordination throughout the USG and PNs, and may necessitate an intelligence federation agreement. During crises, joint forces may also garner support from the intelligence community through intelligence federation.
  2. Identity Activities at the Strategic, Operational, and Tactical Levels
  3. At the strategic level, identity activities depend on interagency and PN information and intelligence sharing, collaboration, and decentralized approaches to gain identity information and intelligence, provide analyses, and support vetting the status (friendly, adversary, neutral, or unknown) of individuals outside the JFC’s area of operations who could have an impact on the JFC’s missions and objectives.
  4. At the operational level, identity activities employ collaborative and decentralized approaches blending technical capabilities and analytic abilities to provide identification and vetting of individuals within the AOR.
  5. At the tactical level, identity information obtained via identity activities continues to support the unveiling of anonymity. Collection and analysis of identity-related data help tactical commanders further understand the OE and decide on the appropriate COAs with regard to the individual(s) operating within it; as an example, identity information often forms the basis for targeting packages. In major combat operations, I2 products help provide the identities of individuals moving about the operational area who are conducting direct attacks on combat forces, providing intelligence for the enemy, and/or disrupting logistic operations.
  6. US Special Operations Command and partners currently deploy land-based exploitation analysis centers to rapidly process and exploit biometric data, documents, electronic media, and other material to support I2 operations and gain greater situational awareness of threats.
  7. Policy and Legal Considerations for Identity Activities Support to Countering Threat Networks
  8. The authorities to collect, store, share, and use identity data will vary depending upon the AOR and the PNs involved in the CTN activities. Different countries have strict legal restrictions on the collection and use of personally identifiable information, and the JFC may need separate bilateral and/or multinational agreements to alleviate partners’ privacy concerns.
  9. Socio-cultural considerations also may vary depending upon the AOR. In some cultures, for example, a female subject’s biometric data may need to be collected by a female. In other cultures, facial photography may be the preferred biometric collection methodology so as not to cross sociocultural boundaries.
  10. Evidence-based operations and support to the rule of law should be considered when providing identity data to HN law enforcement and judicial systems.

The prosecution of individuals, networks, and criminals relies on identity data. However, prior to providing identity data to HN law enforcement and judicial systems, commanders should consult their staff judge advocate or legal advisor.

APPENDIX E

EXPLOITATION IN SUPPORT OF COUNTERING THREAT NETWORKS

1. Exploitation and the Joint Force

  1. One of the major challenges confronting the joint force is the accurate identification of the threat network’s key personnel, critical functions, and sources of supply. Threat networks often go to extraordinary lengths to protect critical information about the identity of their members and the physical signatures of their operations. Nevertheless, these networks leave behind a considerable amount of potentially useful information in the form of equipment, documents, and even materials recovered from captured personnel. This information can lead to a deeper understanding of the threat network’s nodes, links, and functions and assists in continuous analysis and mapping of the network. If the friendly force has the ability to collect and analyze the materials found in the OE, it can gain the insights needed to cause significant damage to the threat network’s operations. Exploitation provides a means to match individuals to events, places, devices, weapons, related paraphernalia, or contraband as part of a network attack.
  2. Conflicts in Iraq and Afghanistan have witnessed a paradigm shift in how the US military’s intelligence community supports the immediate intelligence needs of the deployed force and the type of information that can be derived from analysis of equipment, materials, documents, and personnel encountered on the battlefield. To meet the challenges posed by threat networks in an irregular warfare environment, the US military formed a deployable, multidisciplinary exploitation capability designed to provide immediate feedback on the tactical and operational relevance of threat equipment, materials, documents, and personnel encountered by the force. This expeditionary capability is modular, scalable, and includes collection, technical, and forensic exploitation and analytical capabilities linked to the national labs and the intelligence enterprise.
  3. Exploitation is accomplished through a combination of forward deployed and reachback resources to support the commander’s operational requirements.
  4. Exploitation employs a wide array of enabling capabilities and interagency resources, from forward-deployed experts, to small cells or teams providing scientific or technical support, to interagency or partner laboratories and centers of excellence providing real-time support via reachback. Exploitation activities require detailed planning, flexible execution, and continuous assessment. Exploitation is designed to provide:

(1) Support to targeting, which occurs as a result of technical and forensic exploitation of recovered materials used to identify participants in the activity and provide organizational insights that are targetable.

(2) Support to component and material sourcing, tracking, and supply chain interdiction, which uses exploitation techniques to determine the origin, design, construction methods, components, and precursors of threat weapons in order to identify where the materials originated, the activities of the threat’s logistical networks, and the local supply sources.

(3) Support to prosecution is accomplished when the results of the exploitation link individuals to illicit activities. When supporting law enforcement activities, recovered materials are handled with a chain of custody that tracks materials through the progressive stages of exploitation. The materials can be used to support detainment and prosecution of captured insurgents or to associate suspected perpetrators who are connected later with a hostile act.

(4) Support to force protection, including identifying threat TTP and weapons capabilities that defeat friendly countermeasures, such as jamming devices and armor.

(5) Identification of signature characteristics derived from threat weapon fabrication and employment methods that can aid in cuing collection assets.

  1. Tactical exploitation delivers preliminary assessments and information about the weapons employed and the people who employed them.

Operational-level exploitation can be conducted by deployed labs and provides detailed forensic and technical analysis of captured materials. When combined with all-source intelligence reporting, it supports detailed analysis of threat networks to inform subsequent targeting activities. In an irregular warfare environment, where the mission and time permit, commanders should routinely employ forensics-trained collection capabilities (explosive ordnance disposal [EOD] unit, weapons intelligence team [WIT], etc.) in their overall ground operations to take advantage of battlefield opportunities.

(1) Tactical exploitation begins at the point of collection. The point of collection includes turnover of material from HN government or civilian personnel, material and information discovered during a maritime interception operation, cache discovery, raid, IED incident, post-blast site, etc.

(2) Operational-level exploitation employs technical and forensic examination techniques of collected data and material and is conducted by highly trained examiners in expeditionary or reachback exploitation facilities.

  1. Strategic exploitation is designed to inform theater- and national-level decision makers. A commander’s strategic exploitation assets may include forward deployed or reachback joint captured materiel exploitation centers and labs capable of conducting formally accredited and/or highly sophisticated exploitation techniques. These assets can respond to theater strategic intelligence requirements and, when very specialized capabilities are leveraged, provide support to national requirements.

Strategic exploitation is designed to support national strategy and policy development. Strategic requirements usually involve targeting of high-value or high-priority actors, force protection design improvement programs, and source interdiction programs designed to deny the adversary externally furnished resources.

  1. Exploitation activities are designed to provide a progressively detailed multidisciplinary analysis of materials recovered from the OE. From the initial tactical evaluation at the point of collection, to the operational forward deployed technical/forensic field laboratory and subsequent evaluation, the enterprise is designed to provide a timely, multidisciplinary analysis to support decision making at all echelons. Exploitation capabilities vary in scope and complexity, span peacetime to wartime activities, and can be applied during all military operations.
  2. Collection and Exploitation
  3. An integrated and synchronized effort to detect, collect, process, and analyze information, materials, or people and disseminate the resulting facts provides the JFC with information or actionable intelligence.

Collection also includes the documentation of contextual information and material observed at the incident site or objective. All the activities vital to collection and exploitation are relevant to identity activities as many of the operations and efforts are capable of providing identity attributes used for developing I2 products.

(1) Site Exploitation. The JFC may employ hasty or deliberate site exploitation during operations to recognize, collect, process, preserve, and analyze information, personnel, and/or material found during the conduct of operations. Based on the type of operation, commanders and staffs assess the probability that forces will encounter a site capable of yielding information or intelligence and plan for the integration of various capabilities to conduct site exploitation.

(2) Expeditionary Exploitation Capabilities. Operational-level expeditionary labs are the focal point for the theater’s exploitation and analysis activities that provide the commander with the time-sensitive information needed to shape the OE.

(a) Technical Exploitation. Technical exploitation includes electronic and mechanical examination and analysis of collected material. This process provides information regarding weapon design, material, and suitability of mechanical and electronic components of explosive devices, improvised weapons, and associated components.

  1. Electronic Exploitation. Electronic exploitation at the operational level is limited and may require strategic-level exploitation available at reachback labs or forward deployed labs.
  2. Mechanical Exploitation. Mechanical exploitation of material (mechanical components of conventional and improvised weapons and their associated platforms) focuses on devices incorporating manual mechanisms: combinations of physical parts that transmit forces, motion, or energy.

(b) Forensic Exploitation. Forensic exploitation applies scientific techniques to link people with locations, events, and material that aid the development of targeting, interrogation, and HN/PN prosecution support.

(c) DOMEX. DOMEX consists of three exploitation techniques: document exploitation, cellular exploitation, and media exploitation. Documents, cell phones, and media recovered during collection activities, when properly processed and exploited, provide valuable information, such as adversary plans and intentions, force locations, equipment capabilities, and logistical status. Exploitable materials include paper documents such as maps, sketches, letters, cell phones, smart phones, and digitally recorded media such as hard drives and thumb drives.

  1. Supporting the Intelligence Process
  2. Within their operational areas, commanders are concerned with identifying the members of and systematically targeting the threat network, addressing threats to force protection, denying the threat network access to resources, and supporting the rule of law. Information derived from exploitation can provide specific information and actionable intelligence to address these concerns. Exploitation reporting provides specific information to help answer the CCIRs. Exploitation analysis is also used to inform the intelligence process by identifying specific individuals, locations, and activities that are of interest to the commander.
  3. Exploitation products may inform follow-on intelligence collection and analysis activities. Exploitation products can facilitate a more refined analysis of the threat network’s likely activities and, when conducted during shape and deter phases, typically enabled by HN, interagency and/or international partners, can help identify threats and likely countermeasures in advance of any combat operations.
  4. Exploitation Organization and Planning
  5. A wide variety of Service and national exploitation resources and capabilities are available to support forward deployed forces. These deployable resources are generally scalable and can make extensive use of reachback to provide analytical support. The JIPOE product will serve as a basis for determining the size and mix of capabilities that will be required to support initial operations.
  6. J-2E. During the planning process, the JFC should consider the need for exploitation support to help fulfill the requirements for information about the OE, identify potential threats to US forces, and understand the capabilities and capacity of the adversary network.

The J-2E (when organized) establishes policies and procedures for the coordination and synchronization of the exploitation of captured threat materials. The J-2E will:

(1) Evaluate and establish the commander’s collection and exploitation requirements for deployed laboratory systems or material evacuation procedures based on the mission, its objective and duration, the threat faced, military geographic factors, and the authorities granted to collect and process captured material.

(2) Ensure broad discoverability, accessibility, and usability of exploitation information at all levels to support force protection, targeting, material sourcing, signature characterization of enemy activities, and the provision of materials collected, transported, and accounted for with the fidelity necessary to support prosecution of captured insurgents or terrorists.

(3) Prepare collection plans for a subordinate exploitation task force responsible for finding and recovering battlefield materials.

(4) Provide direction to forces to ensure that the initial site collection and exploitation activities are conducted to meet the commanders’ requirements and address critical information and intelligence gaps.

(5) Ensure that exploitation enablers are integrated and synchronized at all levels and their activities support collection on behalf of the commander’s priority intelligence requirements. Planning includes actions to:

(a) Identify units and responsibilities.

(b) Ensure exploitation requirements are included in the collection plan.

(c) Define priorities and standard operating procedures for materiel recovery and exploitation.

(d) Coordinate transportation for materiel.

(e) Establish technical intelligence points of contact at all levels to expedite dissemination.

(f) Identify required augmentation skill sets and additional enablers.

  1. Exploitation Task Force

(1) As an alternative to using the JFC’s staff to manage exploitation activities, the JFC can establish an exploitation task force, integrating tactical-level and operational-level organizations and streamlining communications under a single headquarters whose total focus is on the exploitation effort. The task force construct is useful when a large number of exploitation assets have been deployed to support large-scale, long-duration operations. The organization and employment of the task force will depend on the mission, the threat, and the available enabling forces.

The combination of collection assets with specialized exploitation enablers allows the task force to conduct focused threat network analysis and targeting, provide direct support packages of exploitation enablers to higher headquarters, and organize and conduct unit-level training programs.

(a) Site Exploitation Teams. These units are task-organized teams specifically detailed and trained at the tactical level. The mission of site exploitation teams is to conduct systematic discovery activities and search operations, and properly identify, document, and preserve the point of collection and its material.

(b) EOD Teams. EOD personnel have special training and equipment to render safe explosive ordnance and IEDs, make intelligence reports on such items or components, and supervise the safe removal thereof.

(c) WITs. WITs are task-organized teams, often with organic EOD support, that exploit a site of intelligence value by collecting IED-related material; performing tactical questioning; collecting forensic materials, including latent fingerprints; preserving and documenting DOMEX, including cell phones and other electronic media; providing in-depth documentation of the site, including sketches and photographs; evaluating the effects of threat weapons systems; and preparing material for evacuation.

(d) CBRN Response Teams. When WMD or hazardous CBRN precursors may be present, CBRN response teams can be detailed to supervise the site exploitation. CBRN response team personnel are trained to properly recognize, preserve, neutralize, and collect hazardous CBRN or explosive materials.

(f) DOMEX. DOMEX support is scalable and ranges from a single liaison officer, utilizing reachback for full analysis, to a fully staffed joint document exploitation center for primary document exploitation.

APPENDIX F

THE CLANDESTINE CHARACTERISTICS OF THREAT NETWORKS

1. Introduction

  1. Maintaining regional stability continues to pose a major challenge for the US and its PNs. The threat takes many forms, from locally based groups to mutually supporting and regionally focused transnational criminal organizations, terrorist groups, and insurgencies that leverage global transportation and information networks to communicate and to obtain and transfer resources (money, material, and personnel). In the long term, for the threat to win it must survive, and to survive it must be organized and operate so that no single strike will cripple the organization. Today’s threat networks are characterized by flexible organizational structures, adaptable and dynamic operational capabilities, a highly nuanced understanding of the OE, and a clear vision of their long-term goals.
  2. While much has been made of the revolution brought about by technology and its impact on a threat network’s organization and operational methods, the impacts have been evolutionary rather than revolutionary. The threat network is well aware that information technology, while increasing the rate and volume of information exchange, has also increased the risk to clandestine operations due to the increase in electromagnetic and cyberspace signatures, which puts these types of communications at risk of detection by governments, like the USG, that can apply technological advantage to identify, monitor, track, and exploit these signatures.
  3. When it comes to designing a resilient and adaptable organizational structure, every successful threat network has over time adopted the traditional clandestine cellular network architecture. This type of network architecture provides a means of survival in form, through a cellular or compartmentalized structure, and in function, through the use of clandestine arts or tradecraft to minimize the signature of the organization, all based on the logic that the primary concern is that the movement must survive to attain its political goals.
  4. When faced with a major threat or the loss of a key leader, clandestine cellular networks contain the damage and simply morph and adapt to new leaders, just as they morph and adapt to new terrain and OEs. In some cases the networks are degraded, in others they are strengthened, but in both cases, they continue to fight on, winning by not losing. It is this “logic” of clandestine cellular networks—winning by not losing—that ensures their survival.
  5. CTN activities that focus on high-value or highly connected individuals (organizational facilitators) may achieve short-term gains but the cellular nature of most threat networks allows them to quickly replace individual losses and contain the damage. Operations should isolate the threat network from the friendly or neutral populations, regularly deny them the resources required to operate, and eliminate leadership at all levels so friendly forces can deny them the freedom of movement and freedom of action the threat needs to survive.
  6. Principles of Clandestine Cellular Networks

The survival of clandestine portions of a threat network organization rests on six principles: compartmentalization, resilience, low signature, purposeful growth, operational risk, and organizational learning. These six principles can help friendly forces to analyze current network theories, doctrine, and clandestine adversaries to identify strengths and weaknesses.

  1. Compartmentalization comes both from form and function and protects the organization by reducing the number of individuals with direct knowledge of other members, plans, and operations. Compartmentalization provides the proverbial wall to counter friendly exploitation and intelligence-driven operations.
  2. Resilience comes from organizational form and functional compartmentalization and not only minimizes damage due to counter network strikes on the network, but also provides a functional method for reconnecting the network around individuals (nodes) that have been killed or captured.
  3. Low signature is a functional component based on the application of clandestine art or tradecraft that minimizes the signature of communications, movement, inter-network interaction, and operations of the network.
  4. Purposeful growth highlights the fact that these types of networks do not grow in accordance with modern information network theories, but grow with purpose or aim: to gain access to a target, sanctuary, population, intelligence, or resources. Purposeful growth primarily relies on clandestine means of recruiting new members based on the overall purpose of the network, branch, or cell.
  5. Operational risk balances the acceptable risk for conducting operations to gain or maintain influence, relevance, or reach to attain the political goals and long-term survival of the movement. Operations increase the observable signature of the organization, threatening its survival. Clandestine cellular networks of the underground develop overt fighting forces (rural and urban) to interact with the population, the government, the international community, and third-party countries conducting FID in support of the government forces. This interaction invariably leads to increased observable signature and counter-network operations against the network’s overt elements. However, as long as the clandestine core is protected, these overt elements are considered expendable and quickly replaced.
  6. Organizational learning is the fundamental need to learn and adapt the clandestine cellular network to the current situation, the threat environment, overall organizational goals, relationships with external support mechanisms, the changing TTP of the counter network forces, new technologies, and the physical dimension, human factors, and cyberspace.
  7. Organization of Clandestine Cellular Networks
  8. Clandestine elements of an insurgency use form (organization and structure) for compartmentalization, relying on the basic network building block, the compartmented cell, from which the term cellular is derived. Cell size can range from one member to many, and the type of interaction within the cell varies with the cell’s function. There are generally three basic functions: operations, intelligence, and support. The cell members may not know each other, such as in an intelligence cell, with the cell leader being the only connection between the other members. In more active operational cells, such as a direct-action cell, all the members are connected, know each other, perhaps are friends or relatives, and conduct military-style operations that require large amounts of communication. Two or more cells linked to a common leader are referred to as branches of a larger network. For example, operational cells may be supported by an intelligence cell or a logistics cell. Building upon the branch is the network, which is made up of multiple compartmentalized branches, generally following a pattern of intelligence (and counterintelligence) branches, operational branches (direct action or urban guerrilla cells), support branches (logistics and other operational enablers such as propaganda support), and overt political branches or shadow governments.
  9. The key concept for organizational form is compartmentalization of the clandestine cellular network (i.e., each element is isolated or separated from the others). Structural compartmentalization takes two forms: the cut-out, a method that prevents opponents from directly linking two individuals; and lack of knowledge, in which no personal information is known about other cell members, so the capture of one does not put the others at risk. In any cell where the members must interact directly, such as an operational or support cell, the entire cell may be detained, but if the structural compartmentalization is sound, the counter-network forces will not be able to exploit the cell to target other cells, the leaders of the branch, or the overall network. A toy graph illustrating this compartmentalization appears after this list.
  10. The basic model for a clandestine cellular network consists of the underground, the auxiliary, and the fighters. The underground and auxiliary are the primary components that utilize clandestine cellular networks; the fighters are the more visible, overt action arm of the insurgency (Figure F-2). The underground and auxiliary cannot be easily replaced, while the fighters can suffer devastating defeats (Fallujah in 2004) without threatening the existence of the organization.
  11. The underground is responsible for the overall command, control, communications, information, subversion, intelligence, and covert direct action operations, such as terrorism, sabotage, and intimidation. The original members and core of the threat network generally operate as members of the underground. The underground cadres develop the organization, ideally building it from the start as a clandestine cellular network to ensure its secrecy, low signature, and survivability. The underground members operate as the overarching leaders, leaders of the organization cells, training cadres, and/or subject matter experts for specialized skills, such as propaganda, bomb making, or communications.
  12. The auxiliary is the clandestine support element, directed by the underground, that provides logistics, operational support, and intelligence collection for the underground and the fighters. The auxiliary members use their normal daily routines as cover for their activities in support of the threat, to include freedom of movement to transport materials and personnel, specialized skills (electricians, doctors, engineers, etc.), or specialized capabilities for operations. These individuals may hold jobs such as local security forces, doctors and nurses, shipping and transportation specialists, and businesspeople, which give security forces a reason to allow them freedom of movement even in a crisis.
  13. The fighters are the most visible and the most easily replaced members of the threat network. While their size and armament will vary, they use a more traditional hierarchical organizational structure. The fighters are normally used for the high-risk missions where casualties are expected and can be recovered from in short order.
  14. The Elements of a Clandestine Cellular Network
  15. A growing insurgency, terrorist, or criminal movement is a complex undertaking that must be carefully managed if its critical functions are to be performed successfully. Using the clandestine cellular model, the organization’s leader and staff will manage a number of subordinate functional networks.
  16. These functional networks will be organized into small cells, usually arranged so that only the cell leader knows the next connection in the organization. As the organization grows, the number of required interactions will increase, but the number of actively participating members in those multicellular interactions will remain limited. Unfortunately, the individual’s increased activity also increases the risk of detection.
  17. Clandestine cellular networks are largely decentralized for execution at the tactical level, but maintain a traditional or decentralized hierarchical form above the tactical level. The core leadership may be an individual, with numerous deputies, which can limit the success of decapitation strikes. Alternatively, the core leadership could be in the form of a centralized group of core individuals, which may act as a centralized committee. The core could also be a coordinating committee of like-minded threat leaders who coordinate their efforts, actions, and effects for an overall goal, while still maintaining their own agendas.
  18. To maintain a low signature necessary for survival, network leaders give maximum latitude for tactical decision making to cell leaders. This allows them to maintain tactical agility and freedom of action based on local conditions. The key consideration of the underground leader, with regard to risk versus maintaining influence, is to expose only the periphery tactical elements to direct contact with the counter-network forces.
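Purely as an illustration of the compartmentalized form described in the list above, the toy graph below (built with the Python networkx library; the node names and structure are invented, not drawn from the publication) shows how a cut-out isolates cells so that the loss of one cell leaves the rest of the network intact.

```python
import networkx as nx

G = nx.Graph()
# Underground core reaches each cell only through a cut-out.
G.add_edge("underground_leader", "cutout_1")
G.add_edge("underground_leader", "cutout_2")
# Cell A (operational): members know and are linked to each other.
G.add_edges_from([("cutout_1", "cell_A_leader"),
                  ("cell_A_leader", "A_member_1"),
                  ("cell_A_leader", "A_member_2"),
                  ("A_member_1", "A_member_2")])
# Cell B (intelligence): only the cell leader links the members.
G.add_edges_from([("cutout_2", "cell_B_leader"),
                  ("cell_B_leader", "B_member_1"),
                  ("cell_B_leader", "B_member_2")])

# Detaining all of cell A (sound structural compartmentalization) leaves the
# rest of the network connected and cell B unexposed.
H = G.copy()
H.remove_nodes_from(["cell_A_leader", "A_member_1", "A_member_2"])
print(nx.is_connected(H))  # True: the remainder of the network survives intact
```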

LASTING SUCCESS

For the counter-network operator, the goal is to conduct activities that are designed to break the compartmentalization and facilitate the need for direct communication with members of other cells in the same branch or members of other networks. By maintaining pressure and leveraging the effects of a multi-nodal attack, friendly forces could potentially cause a catastrophic “cascading failure” and the disruption, neutralization, or destruction of multiple cells, branches, or even the entire network. Defeat of a network’s overt force is only a setback. Lasting success can only come with securing the relevant population, isolating the network from external support, and identifying and neutralizing the hard-core members of the network.
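One hedged way to picture the "cascading failure" idea is to simulate strikes against the most connective nodes of a small invented graph and watch the network fragment. The sketch below uses networkx betweenness centrality as a stand-in for multi-nodal target selection; that choice is an assumption for illustration, not the publication's prescribed method.

```python
import networkx as nx

# Invented network: two clusters joined through a small number of connectors.
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"),
              ("c", "f"), ("f", "g"), ("g", "h"), ("f", "i")])

for strike in range(3):
    scores = nx.betweenness_centrality(G)
    target = max(scores, key=scores.get)   # current most connective node
    G.remove_node(target)
    pieces = nx.number_connected_components(G)
    print(f"removed {target}: network now in {pieces} piece(s)")
```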

Various Sources

  1. Even with rigorous compartmentalization and internal discipline, there are structural weaknesses that can be detected and exploited. These structural points of weakness include the interactions between the underground and the auxiliary, between the auxiliary and the fighters, and with external networks (transnational criminal, terrorist, or other insurgent) that may not have the same level of compartmentalization.
  2. Network Descriptors
  3. Networks and cells can be described as open or closed. Understanding whether a network or cell is open or closed helps the intelligence analysts and planners to determine the scale, vulnerability, and purpose behind the network or cell. An open network is one that is growing purposefully, recruiting members to gain strength, access to targeted areas or support populations, or to replace losses. Given proper compartmentalization, open networks provide an extra security buffer for the core movement leaders by adding layers to the organization between the core and the periphery cells. Since the periphery cells on the outer edge of the network have higher signatures than the core, they draw the friendly force’s attention and are more readily identified by the friendly force, protecting the core.
  4. Closed cells or networks have limited or no growth, having been hand selected or directed to limit growth in order to minimize signature, chances of compromise, and to focus on a specific mission. While open networks are focused on purposeful growth, the opposite is true of the closed networks that are purposefully compartmentalized to a certain size based on their operational purpose. This is especially pertinent for use as terrorist cells, made up of generally closed, non-growing networks of specially selected or close-knit individuals. Closed networks have an advantage in operational security since the membership is fixed and consists of trusted individuals. Compartmentalizing a closed network protects the network from infiltration, but once penetrated, it can be defeated in detail.

APPENDIX G

SOCIAL NETWORK ANALYSIS

  1. In military operations, maps have always been an invaluable tool for better understanding the OE. Yet understanding the physical terrain is often secondary to understanding the people; identifying and understanding the human factors is critical. The ability to map, visualize, and measure threat, friendly, and neutral networks to identify key nodes enables commanders at the strategic, operational, and tactical levels to better optimize solutions and develop the plan.
  2. Planners should understand the environment made up of human relationships and connections established by cultural, tribal, religious, and familial demographics and affiliations.
  3. By using advanced analytical methodologies such as SNA, analysts can map out, visualize, and understand the human factors.
  4. Social Network Analysis
  5. Overview

(1) SNA is a method for identifying key nodes in a network based on four types of centrality (degree, closeness, betweenness, and eigenvector) using network diagrams. SNA focuses on the relationships (links or ties) between people, groups, or organizations (called nodes or actors). It does this by providing tools and quantitative measures that help map out, visualize, and understand networks, the relationships between people (the human factors), and how those networks and relationships may be influenced.

Network diagrams used within SNA, graphical depictions of network analysis, are referred to as sociograms; they depict the social community structure as a network with ties between nodes (see Figure G-1). Like physical terrain maps of the earth, sociograms can have differing levels of detail.
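As a minimal sketch of what a sociogram is in practice, the snippet below builds and draws a tiny network, assuming the Python networkx and matplotlib libraries are available; the actor names and ties are invented for illustration only.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Invented actors and observed ties.
ties = [("Amir", "Basir"), ("Amir", "Chandra"), ("Basir", "Chandra"),
        ("Chandra", "Dawar"), ("Dawar", "Esin"), ("Dawar", "Farid")]
G = nx.Graph(ties)

# Drawing the graph yields a simple sociogram: nodes are actors, lines are ties.
nx.draw_networkx(G, node_color="lightgray")
plt.axis("off")
plt.show()
```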

(2) SNA provides a deeper understanding of the visualization of people within social networks and assists in ranking potential ability to influence or be influenced by those social networks. SNA provides an understanding of the organizational dynamics of a social network, which can be used for detailed analysis of a network to determine options on how to best influence, coerce, support, attack, or exploit them. In particular, it allows planners to identify and portray the details of a network structure, illuminate key players, highlight cohesive cells or subgroups within the network and identify individuals or groups that can or cannot be influenced, supported, manipulated, or coerced.

(3) SNA helps organize the informality of elusive and evolving networks. SNA techniques highlight the structure of a previously unobserved association by focusing on the preexisting relationships and ties that bind groups together. By focusing on roles, organizational positions, and prominent or influential actors, planners can analyze the structure of an organization, how the group functions, how members are influenced, how power is exerted, and how resources are exchanged. These factors allow the joint force to plan and execute operations that will result in desired effects on the targeted network.

(4) The physical, cultural, and social aspects of human factors involve complicated dynamics among people and organizations. These dynamics cannot be fully understood using traditional link analysis alone. SNA is distinguished from traditional, variable-based analysis that typically focuses on a person’s attributes such as gender, race, age, height, income, and religious affiliation.

While personal attributes remain fairly constant, social groups, affiliations or relationships constantly evolve. For example, a person can be a storeowner (business social network), a father (kinship social network), a member of the local government (political social network), a member of a church (religious social network), and be part of the insurgent underground (resistance social network). A person’s position in each social network matters more than their unchanging personal attributes. Their behavior in each respective network changes according to their role, influence, and authority in the network.

(1) Metrics. Analysts draw on a number of metrics and methods to better understand human networks. Common SNA metrics are broadly categorized into three metric families: network topology, actor centrality, and brokers and bridges.

(a) Network Topology. Network topology is used to measure the overall network structure, such as its size, shape, density, cohesion, and levels of centralization and hierarchy (see Figure G-2). These types of measures can provide an understanding of a network’s ability to remain resilient and perform tasks efficiently. Network topology provides the planner with an understanding of how the network is organized and structured; a brief computational sketch follows below.
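The sketch below, assuming networkx and an invented example network, shows two of the standard topology measures (density and diameter) standing in for the fuller set of structural metrics the publication has in mind.

```python
import networkx as nx

# Invented example network.
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("c", "e"),
              ("e", "f"), ("e", "g")])

print("nodes:", G.number_of_nodes())
print("links:", G.number_of_edges())
print("density:", round(nx.density(G), 2))   # share of possible links present
if nx.is_connected(G):
    print("diameter:", nx.diameter(G))        # longest shortest path in the network
```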

(b) Centrality. Indicators of centrality identify the key nodes within a network diagram, which may include identifying influential person(s) in a social network. Identifying centrality illuminates potential leaders and can lead analysts to potential brokers within the network (see Figure G-3). Centrality also measures and ranks people and organizations within a network based on how central they are to that network.

  1. Degree Centrality. The degree centrality of a node is based purely on the number of nodes it is directly linked to (and, in weighted networks, the strength of those ties). It is measured by a simple count of the number of direct links one node has to other nodes within the network. While this number is meaningless on its own, higher levels of degree centrality compared to other nodes may indicate an individual with a higher degree of power or influence within the network; a short computation sketch follows below.

Nodes with a low degree of centrality (few direct links) are sometimes described as peripheral nodes (e.g., nodes I and J in Figure G-3). Although they have relatively low centrality scores, peripheral nodes can nevertheless play significant roles as resource gatherers or sources of fresh information from outside the main network.
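A short computation sketch for degree centrality on an invented graph. Note that networkx also normalizes the raw count of direct links by the number of other nodes; that is an implementation convention, not anything specified above.

```python
import networkx as nx

# Invented graph: "a" has the most direct links.
G = nx.Graph([("a", "b"), ("a", "c"), ("a", "d"), ("b", "c"),
              ("d", "e"), ("e", "f")])

raw = dict(G.degree())                 # simple count of direct links
normalized = nx.degree_centrality(G)   # count divided by (n - 1)
for node in sorted(raw, key=raw.get, reverse=True):
    print(node, raw[node], round(normalized[node], 2))
```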

  1. Closeness Centrality. Closeness centrality reflects the length of a node’s shortest paths to every other node in the network. It is measured by counting the number of links or steps from a node to each other node in the network and summing them, with the lowest totals indicating the highest levels of closeness centrality. Nodes with a high level of closeness centrality have the closest association with every other node in the network. A high level of closeness centrality affords a node the best ability to directly or indirectly reach the largest number of nodes by the shortest paths.

Closeness is calculated by adding the number of hops between a node and all others in the network; a short computation sketch follows below.
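The sketch below, on an invented graph, computes closeness both as the "total hops" described above (lower totals mean higher closeness) and as networkx's reciprocal-style score (higher means closer); the two rank nodes consistently, only the scale differs.

```python
import networkx as nx

# Invented chain-like graph.
G = nx.Graph([("a", "b"), ("b", "c"), ("c", "d"), ("c", "e"), ("e", "f")])

for node in G:
    hops = sum(nx.shortest_path_length(G, node).values())  # total steps to every other node
    score = nx.closeness_centrality(G, node)                # networkx: higher = closer
    print(node, "total hops:", hops, "closeness:", round(score, 2))
```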

  1. Betweenness Centrality. Betweenness centrality is present when a node serves as the only connection between small clusters (e.g., cliques, cells) or individual nodes and the larger network. It is not measured by counting, as degree and closeness centrality are; it is either present or not present (i.e., yes or no). Having betweenness centrality allows a node to monitor and control the exchanges between the smaller and larger networks it connects, essentially acting as a broker for information between sections of the network.
  2. Eigenvector Centrality. Eigenvector centrality measures the degree to which a node is linked to centralized nodes and is often a measure of the influence of a node in a network. It assumes that a greater number of, or stronger, ties to more central or influential nodes increases a node’s importance. It essentially determines the “prestige” of a node based on how many other important nodes it is linked to. A node with a high eigenvector centrality is more closely linked to critical hubs. A short computation sketch covering both betweenness and eigenvector centrality follows below.
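A short sketch covering both measures on an invented graph, assuming networkx. One caveat: networkx reports betweenness as a continuous score (the share of shortest paths that pass through a node) rather than the simple present/not-present reading above, so treat the numbers as a more granular version of the same idea.

```python
import networkx as nx

# Invented graph: "c" is the only connection between the {a, b} cluster and the rest.
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"),
              ("c", "d"), ("d", "e"), ("d", "f"), ("e", "f")])

betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)
for node in G:
    print(node,
          "betweenness:", round(betweenness[node], 2),
          "eigenvector:", round(eigenvector[node], 2))
```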

(c) Brokers and Bridges. Brokerage metrics use a combination of methods to identify either nodes (brokers) that occupy strategic positions within the network or the relationships (bridges) connecting disparate parts of the network (see Figure G-4). Brokers have the potential to function as intermediaries or liaisons in a network and can control the flow of information or resources. Nodes that lie on the periphery of a network (displaying low centrality scores) are often connected to other networks that have not been mapped. This helps the planner identify gaps in their analysis and areas that still need mapping to gain a full understanding of the OE. These outer nodes provide an opportunity to gather fresh information not currently available.
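One hedged way to surface broker-like nodes and bridging ties with standard graph tools is shown below: articulation points (nodes whose removal disconnects the graph) and bridges (single links whose removal does the same). This is a simplification of the brokerage metrics described above, and the graph is invented.

```python
import networkx as nx

# Invented graph: two tight clusters joined by a single tie.
G = nx.Graph([("a", "b"), ("b", "c"), ("a", "c"),   # cluster 1
              ("c", "d"),                           # the lone connection
              ("d", "e"), ("e", "f"), ("d", "f")])  # cluster 2

print("broker-like nodes:", list(nx.articulation_points(G)))  # removing these disconnects the graph
print("bridging ties:", list(nx.bridges(G)))                  # removing these does the same
```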

  1. Density

Network density examines how well connected a network is by comparing the number of links present to the total number of links possible, which provides an understanding of how sparse or connected the network is (the simple ratio is written out below). Network density can indicate many things. A dense network may have more influence than a sparse network. In a highly interconnected network, individual members face fewer constraints, may be less likely to rely on others as information brokers, may be better positioned to participate in activities or to be closer to leadership, and may therefore be able to exert more influence.
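The density comparison is simple enough to write out directly; the figures below are invented.

```python
# Invented figures: 5 actors with 6 observed ties out of 10 possible.
links_present = 6
nodes = 5
links_possible = nodes * (nodes - 1) / 2   # every pair of actors could share a tie
print(links_present / links_possible)      # 0.6: a fairly dense network
```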

  1. Centralization. Centralization helps provide insight into whether the network is centralized around a few key personnel or organizations or decentralized among many cells or subgroups. A network centralized around one key person may allow planners to focus on that person to influence the entire network.
  2. Density and centralization can indicate whether an adversary force has a centralized hierarchy or command structure, is operating under a core C2 network with multiple, relatively autonomous hubs, or is a group of ad hoc, decentralized resistance elements with very little interconnectedness or cohesive C2. Centralization metrics can also identify the most central people or organizations within the resistance; one common centralization measure is sketched below.
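As one common formulation of network centralization (Freeman's degree centralization, used here as an assumption rather than the publication's stated formula), the sketch below returns 1.0 for a pure hub-and-spoke network and 0.0 for a ring with no central node, matching the centralized-versus-decentralized contrast described above.

```python
import networkx as nx

def degree_centralization(G):
    # Freeman's degree centralization: how far the network is from a pure star.
    degrees = dict(G.degree())
    n = G.number_of_nodes()
    c_max = max(degrees.values())
    return sum(c_max - d for d in degrees.values()) / ((n - 1) * (n - 2))

print(degree_centralization(nx.star_graph(6)))   # 1.0: one dominant hub
print(degree_centralization(nx.cycle_graph(7)))  # 0.0: no central node
```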

Although hierarchical charts are helpful, they do not convey the underlying power brokers and key players who are influential within a social network, and they can miss identifying the brokers that control the flow of information or resources throughout the network.

  1. Interrelationship of Networks

The JFC should identify the key stakeholders, key players, and power brokers in a potential operational area.

  1. People generally identify themselves as members of one or more cohesive networks. Networks may form due to common associations between individuals that may include tribes, sub-tribes, clans, family, religious affiliations, clubs, political organizations, and professional or hobby associations. SNA helps examine the individual networks that exist within the population that are critical to understanding the human dynamics in the OE based upon known relationships.
  2. Various networks within the OE are interrelated due to an individual’s association with multiple networks. SNA provides the staff with an understanding of nodes within a single network, but it can be expanded to analyze interrelated networks. This may provide the joint staff with an indication of the potential association, level of connectivity, and potential influence of a single node on one or more interrelated networks. This aspect is essential for CTN, since a threat network’s relationship with other networks must be considered by the joint staff during planning and targeting.
  3. Other Considerations
  4. Collection. Two types of data need to be collected to conduct SNA: relational data (such as family/kinship ties, business ties, trust ties, financial ties, communication ties, grievance ties, political ties, etc.) and attribute data that captures important individual characteristics (tribe affiliations, job title, address, leadership positions, etc.). Collecting, updating, and verifying this information should be coordinated across the whole of USG.

(1) Ties (or links) are the relationship between actors (nodes) (see Figure G-5). By focusing on the preexisting relationships and ties that bind a group together, SNA will help provide an understanding of the structure of the network and help identify the unobserved associations of the actors within that network. To draw an accurate picture of a network, planners need to identify ties among its members. Strong bonds formed over time by connections like family, friendship, or organizational associations characterize these ties.

(2) Capturing the relational data of social ties between people and organizations requires collection, recording, and visualization. The joint force must collect specific types of data in a structured format with standardized data definitions across the force in order to visualize the human factors in systematic sociograms.
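A minimal sketch of holding relational data (typed ties) and attribute data (individual characteristics) in one structure, assuming networkx; the names, attributes, and tie types are invented placeholders for whatever standardized definitions the force adopts.

```python
import networkx as nx

G = nx.Graph()
# Attribute data: characteristics recorded per individual (all invented).
G.add_node("Amir", tribe="north", occupation="shop owner")
G.add_node("Basir", tribe="north", occupation="driver")
G.add_node("Chandra", tribe="east", occupation="clinic staff")
# Relational data: each tie records its standardized relationship type.
G.add_edge("Amir", "Basir", tie="kinship")
G.add_edge("Amir", "Chandra", tie="financial")

for u, v, data in G.edges(data=True):
    print(u, "-", v, "| tie type:", data["tie"])
```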

  1. Analysis

(1) Sociograms identify influential people and organizations as well as information gaps in order to prioritize collection efforts. The social structure depicted in a sociogram implies an inherent flow of information and resources through a network. Roles and positions identify prominent or influential individuals, structures of organizations, and how the networks function. Sociograms can model the human dynamics between participants in a network, highlight how to influence the network, identify who exhibits power within the network, and illustrate resource exchanges within the network. Sociograms can also provide a description and picture of the regime networks, or neutral entities, and uncover how the population is segmented.

(2) Sociograms are representations of the actual network and may not provide a complete or true depiction of it. This could be the result of incomplete information or of including or omitting relevant ties or actors. In addition, networks are constantly changing, and a sociogram is only as good as the last time it was updated.

  1. Challenges. Collecting human factors data to support SNA requires a concerted effort over an extended period. Data can derive from traditional intelligence gathering capabilities, historical data, open-source information, exploiting social media, known relationships, and direct observation. This human factor data should be codified into a standardized data coding process defined by a standardized reference. Entering this human factor data is a process of identifying, extracting, and categorizing raw data to facilitate analysis. For analysts to ensure they are analyzing the sociocultural relational data collected in a standardized way, the JFC can produce a reference that provides standardized definitions of relational terms. Standardization will ensure that when analysts or planners exchange analytical products or data their analysis has the same meaning to all parties involved. This is needed to avoid confusion or misrepresentation in the data analysis. Standardized data definitions ensure consistency at all levels; facilitate data and analysis product transfer among differing organizations; and allow multiple organizations to produce interoperable products concurrently.
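As an illustration only, a standardized reference of relational terms could be as simple as a shared dictionary of tie-type definitions that every analyst's tooling validates against; the terms and wording below are invented, not an official coding scheme.

```python
# Invented example of a shared coding reference for relational terms.
TIE_DEFINITIONS = {
    "kinship": "blood or marriage relationship between two individuals",
    "financial": "documented transfer of money or goods between two actors",
    "communication": "observed phone, radio, or online contact between two actors",
}

def validate_tie(tie_type: str) -> str:
    # Reject ties outside the shared reference so datasets stay interoperable.
    if tie_type not in TIE_DEFINITIONS:
        raise ValueError(f"unrecognized tie type: {tie_type}")
    return tie_type

print(validate_tie("kinship"))
```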

APPENDIX H

REFERENCES

The development of JP 3-25 is based on the following primary references:

  1. General
    a. Title 10, United States Code.
    b. Strategy to Combat Transnational Organized Crime.
    c. Executive Order 12333, United States Intelligence Activities.
  2. Department of Defense Publications
    a. Department of Defense Counternarcotics and Global Threats Strategy.
    b. Department of Defense Directive (DODD) 2000.19E, Joint Improvised Explosive Device Defeat Organization.
    c. DODD 3300.03, DOD Document and Media Exploitation (DOMEX).
    d. DODD 5205.14, DOD Counter Threat Finance (CTF) Policy.
    e. DODD 5205.15E, DOD Forensic Enterprise (DFE).
    f. DODD 5240.01, DOD Intelligence Activities.
    g. DODD 8521.01E, Department of Defense Biometrics.
    h. Department of Defense Instruction (DODI) O-3300.04, Defense Biometric Enabled Intelligence (BEI) and Forensic Enabled Intelligence (FEI).
    i. DODI 5200.08, Security of DOD Installations and Resources and the DOD Physical Security Review Board (PSRB).
  3. Chairman of the Joint Chiefs of Staff Publications
    a. JP 2-01.3, Joint Intelligence Preparation of the Operational Environment.
    b. JP 3-05, Special Operations.
    c. JP 3-07.2, Antiterrorism.
    d. JP 3-07.3, Peace Operations.
    e. JP 3-07.4, Counterdrug Operations.
    f. JP 3-08, Interorganizational Cooperation.
    g. JP 3-13, Information Operations.
    h. JP 3-13.2, Military Information Support Operations.
    i. JP 3-15.1, Counter-Improvised Explosive Device Operations.
    j. JP 3-16, Multinational Operations.
    k. JP 3-20, Security Cooperation.
    l. JP 3-22, Foreign Internal Defense.
    m. JP 3-24, Counterinsurgency.
    n. JP 3-26, Counterterrorism.
    o. JP 3-40, Countering Weapons of Mass Destruction.
    p. JP 3-57, Civil-Military Operations.
    q. JP 3-60, Joint Targeting.
    r. JP 5-0, Joint Planning.
    s. Joint Doctrine Note 1-16, Identity Activities.
  4. Multi-Service Publication

ATP 5-0.3/MCRP 5-1C/NTTP 5-01.3/AFTTP 3-2.87, Multi-Service Tactics, Techniques, and Procedures for Operation Assessment.

  5. Other Publications
    a. The Haqqani Network: Pursuing Feuds Under the Guise of Jihad?, Major Lars W. Lilleby, Norwegian Army, CTX Journal, Vol. 3, No. 4, November 2013.
    b. Foreign Disaster Response, Military Review, November-December 2011.
    c. US Military Response to the 2010 Haiti Earthquake, RAND Arroyo Center, 2013.
    d. DOD Support to Foreign Disaster Relief, July 13, 2011.
    e. United Nations Stabilization Mission in Haiti website.
    f. Kirk Meyer, Former Director of the Afghan Threat Finance Cell, CTX Journal, Vol. 4, No. 3, August 2014.
    g. Networks and Netwars: The Future of Terror, Crime, and Militancy, edited by John Arquilla and David Ronfeldt.
    h. General Martin Dempsey, Chairman of the Joint Chiefs of Staff, "The Bend of Power," Foreign Policy, 25 July 2014.
    i. Alda, E., and Sala, J. L., "Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region," Stability: International Journal of Security and Development, Vol. 3, No. 1, Article 27, pp. 1-9.
    j. International Maritime Bureau (Piracy Reporting Center).

Quotes from Gringo by Chesa Boudin

(53)

My mother Kathy's father, Leonard, was a founding partner of a law firm that defended the Allende administration after it nationalized United States-owned copper mines. The litigation was pending when Pinochet's coup toppled the democratic government. My grandfather's firm acquired Chile as a client largely on the strength of its long-standing relationship with the Cuban government. Over a mojito in a hotel lobby in Old Havana, long after my grandfather's death, I learned about his work in Cuba from Luis Martinez, the former head of the Cuban national airline, Cubana de Aviacion, and a high-ranking official in the Ministry of Transportation. We sat sipping the sweet minty drinks that reportedly had Hemingway hooked from his first taste…

Luis had gray hair but was fit and energetic. He had great respect for my grandfather, he told me. Back when he was running the airline, my grandfather had saved one of their planes. It had flown into New York to bring Cuban diplomats to a United Nations meeting, but the United States and Cuba were in the midst of diplomatic and legal feuds…

He explained that when Cuba began nationalizing large landholdings and factories, many of which had United States citizens for owners, there was an immense amount of legal work to sort out the mess. Grandpa Leonard’s firm handled much of it.

(55)

Luis gave me a parting gift that he had received from my grandfather forty years earlier: a slightly worn first edition copy of a book called The Theoretical System of Karl Marx, by Louis Boudin, my great-great uncle.

Louis and Leonard had been lawyers, fighting their battles in defense of civil liberties, labor organizations, and Third World governments in the courtroom, but my parents took to the streets when the Allende government fell. In the aftermath of the coup there were protests in solidarity with Chilean democracy in countries around the world, including the United States…

The Weather Underground also protested, targeting ITT's (International Telephone and Telegraph) Latin American division corporate office.

(57)

Allies inside el imperio have an essential role to play in any process of global change and should not be scorned.

(67)

Second, I started thinking about my first year in college when, in the wake of the Battle in Seattle, the anti-World Trade Organization protests of November 30, 1999, I got involved in the anti-globalization movement. I worked enthusiastically to recruit other students on my campus for a protest in Washington, D.C., against the IMF, the World Bank, and other international financial institutions. I wanted to take action in solidarity with the global poor and marginalized, those sectors of society that Nobel Prize winning economist Joseph Stiglitz would later call “discontents” in his bestselling book Globalization and Its Discontents.

(106)

I had stepped off a bus in the Caracas terminal for the first time on a rainy Tuesday afternoon in November 2004. My expectations of the city I had arrived in came from Professor Vitales, back in Chile.

(109)

At that time I knew only a couple of people in Caracas. One was Marta Harnecker… The other was Marta's husband, Michael Lebowitz. Michael was a Marxist economics professor from Canada whose unkempt hair and puffy white beard framing a full face might have led the casual observer to confuse him with the photo of Marx on the cover of his award-winning book, Beyond Capital.

(110)

Marta asked me if I would be willing to translate into Spanish a working paper Michael had written that she wanted to be able to share with friends in the Chavez administration. It was the first of many occasions when I realized that when Marta asks for something it is very hard to say no.

(111)

Marta’s office was in the heart of the old palace. The large room had a high painted ceiling and tall wooden doors that led out onto an open-air courtyard garden with a small fountain in the middle. The suite of offices on the other side of the fountain belonged to the chief of staff, a position that changed frequently under Chavez.

(112)

She introduced me to the other people scurrying around the office as the son of political prisoners in the United States.

(113)

It was 10 a.m. before the meeting started at the round wooden table in Marta's office. From the warm greetings that were exchanged it was obviously a meeting of friends. Still, I couldn't help but feel nervous. In addition to Michael and Marta, the meeting included Haiman el Troudi, a presidential adviser at the time but soon to be chief of staff, and several other senior people in the government.

* Haiman served for roughly a year as chief of staff before leaving the palace. Marta, Michael, and several other colleagues of theirs left the palace with Haiman and founded a policy think tank called Centro Internacional Miranda. As of December 2008, Marta and Michael were both in senior positions at the CIM and Haiman had recently been named minister of planning.

(116)

It was hard for me to believe that after just three full days in the country I had already participated in a meeting in the heart of the presidential palace.

In Chavez’s Venezuela it couldn’t be easy for estadosunidenses to gain political access of the sort I had stumbled into. I had found one of the few places on the planet where having parents in prison in the United States for politically motivated crimes actually opened doors rather than closed them.

(118)

If the coup that briefly toppled Chavez in 2002 had occurred in the 1960s or 1970s, while my parents were young activists, they probably would have protested the State Department or a big oil company. But to my knowledge, none of my forebears had ever had this kind of a window into radical government.

(119)

A month after my arrival in Venezuela, Caracas hosted an international conference called Artists and Intellectuals in Defense of Humanity. Nobel laureates, activists, painters, writers, dancers, and organizers from across the globe were invited to participate. Among them was my mom, Bernardine. My time in Venezuela had built my confidence as a translator and I was hired as one of the dozens of interpreters at the conference. It was good to have a break from the office routine and a paid job for a change. And I got to hang out with Mom when my working group wasn't in session.

It was at one of the plenary events for the conference that I first saw Chavez speak. The Teresa Carreno Theatre in central Caracas was packed with thousands of red-shirt-wearing chavistas – red being the color of Chavez's political party – by the time my mom and I made it through the security lines into the massive auditorium.

Adolfo Perez Esquivel… spoke without notes and with slow, carefully enunciated words. “In this hour of particular danger, we renew our conviction that another world is not only possible but also necessary. We commit to struggle for that other world with more solidarity, unity, and determination; in defense of humanity we reaffirm our certainty that the people will have the last word.”

(121)

[Chavez] thanked Perez Esquivel for his introduction and then mentioned a few prominent visitors he knew were in the crowd: Daniel Ortega, Ricardo Alarcon, Tariq Ali, Ignacio Ramonet, Danny Glover, Cynthia McKinney, representatives of the national labor union (UNT), the national indigenous federation of Venezuela, and the Bolivarian farmers.

Chavez began by talking about the significance of the conference, the need to build networks of intellectuals and artists fighting for humanity. He criticized the intellectuals who had announced “the end of history” and the triumph of neoliberalism.

(122)

His [Chavez’s] speaking style was erratic – wandering, switching topics, going off on tangents – yet captivating. He didn’t use notes or a teleprompter and relied on sheer charisma to carry the crowd with him on a journey that stretched around the planet, and through political theory (he cited Marti and Trotsky).

Being at such events always had a profound effect on me. Words on a page cannot capture the contagious energy they inspire. Those in attendance bear the hours of waiting admirably, celebrating their optimism, their newfound connections to state power.

(123)

Four months after I began working in Miraflores, I switched to a new office, that of Presidential International Relations…. I was now charged with following media reports on United States-Venezuela relations and Venezuela’s role in the international arena generally.

When Marta or Michael wanted me, I took time off from my new office to work with them. Marta coordinated the organization of the Third Annual International Conference in Solidarity with the Bolivarian Revolution.

(124)

Chavez had been calling for a new socialist model but no one in the government had explained concretely what exactly this new economic system would look like.

In May 2005, my parents, Bill and Bernardine, were invited down to Venezuela and I got the chance to hit the streets.

Bill and Bernardine gave talks to audiences of as many as two hundred people in Caracas and the interior at universities and cultural centers. The groups they spoke to were primed with screenings of the Academy Award-nominated documentary The Weather Underground. I interpreted for them throughout the trip, including the public appearances.

(125)

Their talks included anecdotes about successful community-based struggles for equal education and justice in poor Chicago neighborhoods. The lessons they had learned from 1960s era freedom schools and protest movements were employed to inform today’s struggles, a focus on the present and the future rather than the starry-eyed reminiscing about the past.

We were astonished at the enthusiasm of the crowds’ reaction, especially in the interior.

(126)

People with a highly developed political analysis saw, in the film and in our presence, hopeful examples of internal resistance to imperialism norteamericano. Others simply seemed happy to have people from El Norte in their midst affirming their attempts to build a new, different society.

(140)

I had what Venezuelans call a chapa, a sort of Get Out of Jail Free card, an ID or document that opens doors and solves problems. This took the form of a signed and sealed letter from the office of Presidential International Relations explaining the political significance of the film we were making. It worked its magic and in a matter of moments we were through the last round of security.

(143)

I had met at least half a dozen Chileans, like Pablo and Liza, who had come to Venezuela to work in solidarity with the Bolivarian Revolution; no doubt they had hoped that it would prove more successful than their own country's short-lived democratic revolution.

(144)

We hung out in the politically progressive expat scene in Caracas, which some Venezuelans view as an expression of international solidarity and others as political tourism. Venezuelans who dislike the Chavez government often make snide comments about gringos who wear red T-shirts or dress as hippies, suggesting that it would be better if they spent their time and money on Venezuela's beaches than on playing games in the political system, and that they would never tolerate a government like that of Chavez in their own countries.

(149)

Two months after my stint as the fixer for the news crew in Caracas, I headed off to Medellin, Colombia, to meet my mom, Bernardine… Though it was my first time in the city, my mom had been there on several occasions previously. All her trips to Colombia, like this one, had been on human rights missions at the invitation of a Colombian colleague, a Franciscan nun named Sister Carolina Pardo.

Sister Carolina speaks nearly perfect English, thanks, in part, to time she spent in a sort of exile in a master's program in clinical social work at Loyola University in Chicago from 2004 to 2006, when the threats against her in Colombia were at a peak. It was during that period that she and my mom developed a close friendship and working relationship.

(152)

We were there as part of a one hundred-strong delegation of international human rights activists and journalists from fifteen different countries who wanted to learn about and support the local communities.

The plan was to visit several different communities that had been displaced by government or paramilitary violence.

(161)

We began a ceremony in which displaced people from Choco and representatives of displaced communities from other parts of Colombia, who had come along with the delegation, shared their stories about disappearances and murders of loved ones: husbands, brothers, and fathers. Then the internationals in the group began. An Argentine mother of the Plaza de Mayo lit a candle for her daughter who had disappeared more than thirty years ago in that country's Dirty War against the left. A Chilean ex-political prisoner under Pinochet lit a candle for his companions who never made it out of the torture camps. A Brazilian woman representing the MST, the Landless Workers Movement, lit a candle for peasants recently killed in Brazil while fighting for a small plot of land to plant.

Though I tried to concentrate on interpreting for my mom, there were several moments in the proceedings where I could not stop myself from choking up. I couldn't help but think about my own biological parents' decades in prison, my father's continuing incarceration, and the three men who were killed during the crime my parents participated in. I considered lighting a candle and sharing their plight with the group, but then decided against it. Perhaps it was too hard to break out of my role as interpreter and take on the role of participant, or maybe I didn't feel up to the task of trying to explain my parents' use of violence to these people who themselves had suffered so much. Certainly, I was self-conscious of our position as the only two representatives from the United States, a country that, directly or indirectly, had fueled the violence in all of the Latin American countries represented in our solemn gathering.

(163)

Our role there made me think of a Zapatista saying I had learned while exploring Chiapas years earlier: “If you have come to help us, please go home; if you have come to join us, welcome. Pick up a shovel or a machete and get busy.”

(194)

The reemergence of the Latin American left today is unlike previous reformist movements in the region that derived political power from vertical relationships to unions, peasant associations, and party hierarchies. Today's progressive political movements in the region tend to have more horizontal power structures and to rely on a diverse array of social movements. These kinds of groups make up the radical left in the United States today too, but with seemingly no impact on electoral results.

(199)

They had generously invited me into their hellish world, deep inside the earth. All I could offer them in exchange was a cheap present of a few sticks of dynamite. But a small part of me also felt somehow redeemed: as a young backpacker and motorcyclist, Che Guevara had been profoundly affected by seeing the horrible conditions in the mines in Bolivia… Here was proof of what they said, a justification of sorts for their political perspectives.

 

Eduardo Galeano, a Uruguayan writer my parents encouraged me to read before I was even interested in Latin America, describes Potosi as a mine that “eats men.”

(206)

“We have a saying,” Jose answered. “Sangre de minero, semilla de guerrillero.” The rhyme is lost in translation but the meaning is the same: the miner's blood is the seed of the guerrilla.

“Did some of you go on to form underground guerrilla organizations?”

Jose laughed a little, and told me gently that I was missing the point. He explained that after 1985 tens of thousands of Bolivian miners had no choice but to migrate away from the mines in search of a new life for themselves and their families. A few went to other countries in search of work, but more went to the campo and became farmers, especially of coca in the Chapare region, or moved into cities, especially in the rapidly growing El Alto.

(215)

Venezuela’s political experiment is still a democratic and courageous effort to invent an alternative model, based on the insistence that another way, another world is possible.

Sometimes cynicism and pessimism descend and I resign myself to the idea that these Latin American political experiments are doomed to failure. But I hope I’m wrong. Certainly never, not once, have I thought they shouldn’t be tried. Humanity can benefit from political diversity the way that it does from linguistic, cultural, racial, or religious diversity. The political status quo is antiquated and in need of urgent, radical change. Democratic political experiments like those in Venezuela, regardless of their long-term viability, inspire hope and political creativity across the globe.

(216)

The more I spoke and comprehended, the more I was able to understand what was happening in the region around me, to build friendships through my wanderings.

As I came of age, changing in myself, I found a region that was also in the midst of the most profound transformation. I came to see Latin America as a prism through which I could better understand my own roots in the radical left in the United States.

(221)

Whether at home in the United States, or abroad on the road, I will have to keep living in at least two worlds.

***

There is also a video available on C-SPAN in which Chesa Boudin talks about his life as a young adult in Venezuela when Hugo Chavez came to power. It is interesting to note that in the question-and-answer session he states that, as of the video, he was still in contact with several Colombian activists.

Quotes from Alicia Garza’s The Purpose of Power: How We Come Together When We Fall Apart

The Purpose of Power: How We Come Together When We Fall Apart by Alicia Garza provides an autobiographical account by one of the founders of Black Lives Matter. The following are excerpts from the book, each preceded by a thematic description.

On Black Lives Matter and Movement Building

“Even though I’d been an organizer for more than ten years when Black Lives Matter began, it was the first time I’d been part of something that garnered so much attention. Being catapulted from a local organizer who worked in national coalitions to the international spotlight was unexpected.”

“I’ve been asked many times over the years what an ordinary person can do to build a movement from a hashtag. Though I know the question generally comes from an earnest place, I still cringe every time I am asked it. You cannot start a movement from a hashtag. Hashtags do not start movements—people do. Movements do not have official moments when they start and end, and there is never just one person who initiates them. Movements are much more like waves than they are like light switches. Waves ebb and flow, but they are perpetual, their starting point unknown, their ending point undetermined, their direction dependent upon the conditions that surround them and the barriers that obstruct them. We inherit movements. We recommit to them over and over again even when they break our hearts, because they are essential to our survival.”

“You cannot start a movement from a hashtag. Only organizing sustains movements, and anyone who cannot tell you a story of the organizing that led to a movement is not an organizer and likely didn’t have much to do with the project in the first place.

Movements are the story of how we come together when we’ve come apart.”

On Activists and Influencers 

“The emergence of the activist-as-celebrity trend matters. It matters for how we understand how change happens (protest and add water), it matters for how we understand what we’re fighting for (do people become activists to create personal “influencer” platforms or because they are committed to change?), and it matters for how we build the world we want. If movements can be started from hashtags, we need to understand what’s underneath those hashtags and the platforms they appear on: corporate power that is quickly coming together to reshape government and civil society, democracy and the economy.”

On Revolutionary Theory

“FRANTZ FANON SAID THAT “EACH generation must, out of relative obscurity, discover its mission, fulfill it, or betray it.” This is the story of movements: Each generation has a mission that has been handed to it by those who came before. It is up to us to determine whether we will accept that mission and work to accomplish it, or turn away and fail to achieve it.

There are few better ways to describe our current reality. Generations of conflict at home and abroad have shaped the environment we live in now. It is up to us to decide what we will do about how our environment has been shaped and how we have been shaped along with it. How do we know what our mission is, what our role is, and what achieving the mission looks like, feels like? Where do we find the courage to take up that which has been handed to us by those who themselves determined that the status quo is not sufficient? How do we transform ourselves and one another into the fighters we need to be to win and keep winning?”

“Before we can know where we’re going—which is the first question for anything that calls itself a movement—we need to know where we are, who we are, where we came from, and what we care most about in the here and now. That’s where the potential for every movement begins.

We are all shaped by the political, social, and economic contexts of our time. ”

On Revolutionary Practice

“Our wildly varying perspectives are not just a matter of aesthetic or philosophical or technological concern. They also influence our understanding of how change happens, for whom change is needed, acceptable methods of making change, and what kind of change is possible. My time, place, and conditions powerfully shaped how I see the world and how I’ve come to think about change.”

The Interpretation of History that Shaped her Worldview

“By the time I came into the world, the revolution that many had believed was right around the corner had disintegrated. Communism was essentially defeated in the Soviet Union. The United States, and Black people within it, began a period of economic decline and stagnation—briefly interrupted by catastrophic bubbles—that Black communities have never recovered from. ”

“The gulf between the wealthy and poor and working-class communities began to widen. And a massive backlash against the accomplishments won during the 1960s and 1970s saw newly gained rights undermined and unenforced.

But just like in any period of lull, even in the quiet, the seeds of the next revolution were being sown.

Many believe that movements come out of thin air. We’re told so many stories about movements that obscure how they come to be, what they’re fighting for, and how they achieve success. As a result, some of us may think that movements fall from the sky.”

“Those stories are not only untrue, they’re also dangerous. Movements don’t come out of thin air.”

Political Ideology and Strategic Frameworks

“In the United States, “right wing” usually refers to people who are economically, socially, or politically conservative. What does it mean to be “conservative”? I’m using “conservative” to describe people who believe that hierarchy or inequality is a result of a natural social order in which competition is not only inevitable but desirable, and the resulting inequality is just and reflects the natural order. Typically, but not always, the natural order is held to have been determined and defined by God or some form of social Darwinism. ”

“One component of the successful religious-right strategy included building out an infrastructure of activist organizations that could reach even more people and influence the full range of American politics. ”

“The religious right developed the wide, more geographically distributed base of voters that the neoconservatives and the new right needed to complete their takeover of the Republican Party. These factions had many differences in approach, long-term objectives, overall vision, values, and ideology. The corporate Republicans wanted deregulation, union busting, and a robust military-industrial complex. The neoconservatives wanted to fight communism and establish global American military hegemony and American control over the world’s resources. The social conservatives wanted to roll back the gains of civil rights movements and establish a religious basis and logic for American government. And yet, even amid their differences, where they are powerful is where their interests align; they are able to work through those differences in order to achieve a common goal.”

[…] 

“Under Reaganism, personal responsibility became the watchword. If you didn’t succeed, it was because you didn’t want to succeed. If you were poor, it was because of your own choices. And if you were Black, you were exaggerating just how bad things had become.

Reagan declared a War on Drugs in America the year after I was born. His landmark legislation, the Anti-Drug Abuse Act of 1986, enacted mandatory minimum sentences for drugs. This single piece of legislation was responsible for quadrupling the prison population after 1980 and changing the demographics in prisons and jails, where my mother worked as a guard, from proportionally white to disproportionately Black and Latino. ”

On Reagan

“Reagan stoked public fears about “crack babies” and “crack whores.” The Reagan administration was so successful at this manipulation that, in 1986, crack was named the Issue of the Year by Time magazine.”

“Reagan led the popular resistance to the movements fighting against racism and poverty in the Global South that characterized the 1960s and 1970s. Significantly, he alluded to protest movements in the United States being used as tools of violence by the USSR, playing on widespread fears about a communist takeover of the United States and abroad. He also used fears of communism to authorize an invasion of Grenada, a then-socialist Caribbean country, to increase United States morale after a devastating defeat in Vietnam a few years prior, and to increase support for pro-U.S. interventions in El Salvador, Nicaragua, and Guatemala. Reagan also supported the apartheid regime in South Africa.”

“The War on Drugs had begun to morph into the War on Gangs. Economic policy shifts meant that white families moved out of the cities and into the suburbs. Television news programs and newspapers were swelling with stories of crime and poverty in the inner cities. Since there was little discussion of the policies that had created such conditions, the popular narrative of the conservative movement within both parties blamed Black communities for the conditions we were trying to survive. More and more pieces of legislation, written under the blueprint of the conservative movement but extending across political party lines, targeted Black communities with increased surveillance and enforcement, along with harsher penalties. None of these legislative accomplishments included actually fighting the problems, because this movement had created those problems in the first place.”

On San Francisco Activism

“I volunteered at an organization to end sexual violence called San Francisco Women Against Rape (SFWAR)”

“My volunteer duties at SFWAR felt more aligned with my emerging sense of politics, but they also helped shape my understanding of my own identity: Most of the staff was queer and of color. Being in that environment helped me explore my own sexuality, as I found myself attracted to and attractive to dykes and butches and trans people. During our training as volunteers, we learned about various systems of oppression—much as I had in college—but this learning was not academic; it wasn’t detached from our own experiences. We were seeing how those systems functioned on the ground, in people’s real lives—in our lives.

SFWAR was going through a transition: It was trying to move from a one-way organization that simply provided services in response to a pressing need to one that had a two-way relationship with the people who received them—both providing services and learning from, adapting to, and integrating the recipients into the process. This shift brought with it some upheaval, internally and externally. There wasn’t a clear agreement internally about which direction to head in. Having taken on a more explicitly political stance, SFWAR was being attacked from the outside—and the work itself was hard enough without the added stress of death threats coming through our switchboard or funders threatening to withdraw.”

[…]

“My time at SFWAR was coming to a close, and one day I received a notice on a listserv I belonged to advertising a training program for developing organizers. They were looking for young people, ages eighteen to thirty, to apply to participate in an eight-week program that promised “political education trainings” and “organizing intensives.” Each person selected would be placed in a community-based organization for training, and many organizations were inclined to hire the interns if their time during the sum[…]”

On Community Organizing

“Community organizing is often romanticized, but the actual work is about tenacity, perseverance, and commitment. It’s not the same as being a pundit, declaring your opinions and commentary about the world’s events on your social media platforms. Community organizing is the messy work of bringing people together, from different backgrounds and experiences, to change the conditions they are living in. It is the work of building relationships among people who may believe they have nothing in common so that together they can achieve a common goal. That means that as an organizer, you help different parts of the community learn about one another’s histories and embrace one another’s humanity as an incentive to fight together. An organizer challenges their own faults and deficiencies while encouraging others to challenge theirs. An organizer works well in groups and alone. Organizers are engaged in solving the ongoing puzzle of how to build enough power to change the conditions that keep people in misery.”

Working with POWER

“In 2005, I joined a small grassroots organization called People Organized to Win Employment Rights (POWER) to help start a new organizing project focused on improving the lives of Black residents in the largest remaining Black community in San Francisco.

I’d been following POWER for a long time. It was founded in 1997 with the mission to “end poverty and oppression once and for all.” POWER was best known for its work to raise the minimum wage in San Francisco to what was, at the time, the highest in the country, and for its resistance to so-called welfare reform, which it dubbed “welfare deform.” POWER was unique among grassroots organizations in San Francisco because of its explicit focus on Black communities. That was one of the aspects that attracted me to the organization’s work. POWER was everything I was looking for in an organization at that point in my life—a place where I could learn, a place where I would be trained in the craft of organizing and in the science of politics, and a place where I didn’t have to leave my beliefs, my values, and my politics at the door each day when I went to work.

Joining POWER would change how I thought about organizing forever.”

“We had a robust network of volunteers who would be willing to help gather the signatures needed. We’d begun working closely with the Nation of Islam, environmental justice organizations like Greenaction for Health and Environmental Justice and the Sierra Club, and other faith-based organizers who would lend their support. After talking with our coalition partners, as well as the membership that POWER had built in the neighborhood, and debating the best approach, we decided to give it a shot.”

“Shortly after we qualified for the ballot measure, our coalition started hearing […]”

“[…] may be safe to say that Black communities want to see a better world for themselves and their families, it isn’t accurate to assume that Black people believe that all Black people will make it there or deserve to. While some of us deeply understand the ways in which systems operate to determine our life chances, others believe deeply in a narrative that says we are responsible for our own suffering—because of the choices we make or the opportunities we fail to seize. Some Black people think we are our own worst enemy.”

 

On Working as an Organizer 

“As organizers, our goal was to get those in the 99 percent to put the blame where it actually belonged—with the people and institutions that profited from our misery. And so, “unite to fight” is a call to bring those of us stratified and segregated by race, class, gender, sexuality, ability and body, country of origin, and the like together to fight back against truly oppressive power and to resist attempts to drive wedges between us. More than a slogan, “the 99 percent” asserts that we are more similar than we are different and that unity among people affected by a predatory economy and a faulty democracy will help us to build an unstoppable social movement.

Many of the organizations that I helped to build between 2003 and today upheld the principle of “unite to fight” before “the 99 percent” was a popular phrase. This orientation is not just important for the potential of a new America; it is important for the potential of a globally interdependent world.”

On Political Strategy

“When I began working at POWER in 2005, our organization had an explicit strategy that involved building a base of African Americans and immigrant Latinos. In fact, our model of multiracial organizing was one that other organizations looked to for inspiration on how to build multiracial organizations. The National Domestic Workers Alliance, where I currently work, is a multiracial organization comprising Pacific Islanders, Black immigrants, U.S.-born Black people, South Asians and others from the Asian diaspora, immigrant Latinos, Chicanas, and working-class white people. My organizing practice and my life have been enriched by having built strong relationships with people of all races and ethnicities. I’ve had the opportunity to interrupt stereotypes and prejudices that I didn’t even know I held about other people of color, and interrupting those prejudices helps me see us all as a part of the same effort.

Capitalism and racism have mostly forced people to live in segregated spaces. If I stayed in my neighborhood for a full day, I could go the entire time without seeing a white person. Similarly, in other neighborhoods, I could go a whole day without seeing a Black person or another person of color. ”

The United States Social Forum

“In 2007, I was still working with POWER. That June, we helped organize a delegation of thirty people for a trip to the United States Social Forum in Atlanta, Georgia. Half of our delegation was Black—some of whom were members of our Bayview Hunters Point Organizing Project—and the other half were immigrant Latina domestic workers.”

“I’d been a part of many national and international efforts by this time, including the last United States Social Forum, a major gathering of social justice activists that had taken place in Detroit a few years before. While those experiences had taught me a lot about how to build relationships with people with different backgrounds and agendas, that kind of work is also difficult. When you’re an outsider, it’s hard to build trust.”

“In 2007, I attended the United States Social Forum, where more than 10,000 activists and organizers converged to share strategies to interrupt the systems of power that impacted our everyday lives. It was one of my first trips with POWER, and I was eager to prove myself by playing a role in helping to coordinate our delegation of about thirty members, along with the staff. One day, the director of the organization invited me to attend a meeting with him.”

“The meeting was of a new group of Black organizers from coalitions across the country, joining to work together in service of Black people in a new and more systematic way. I was excited about the potential of what could happen if this meeting was successful. I was becoming politicized in this organization, learning more about the history of Black people’s efforts to live a dignified life, and I yearned to be part of a movement that had a specific focus on improving Black lives.

When we arrived, I looked around the room, and out of about a hundred people who were crowded together, there were only a handful of women. Literally: There were five Black women and approximately ninety-five Black men.

An older Black man called the meeting to order. I sat next to my co-worker, mesmerized and nervous. Why were there so few Black women here? I wondered. In our local organizing, most of the people who attended our meetings were Black women. The older Black man talked for about forty minutes. When he finally stopped talking, man after man spoke, long diatribes about what Black people needed to be doing, addressing our deficits as a result of a sleeping people who had lost our way from who we really were. That feeling I used to get as a kid when my dad would yell to my mother or me to make him coffee began to bubble up inside me. Nervous but resolute, I raised my hand.

“So,” I began, “I appreciate what you all have had to say.” I introduced myself and the organization I was a part of, and then I continued: “I believe in the liberation you believe in, and I work every day for that. I heard you say a lot, but I didn’t hear you say anything about where women fit into this picture. Where do queer people fit in this vision you have for Black liberation?” I had just delivered my very own Sojourner Truth “Ain’t I a Woman?” speech, and the room fell silent.

It was hot in there. The air hung heavy in the packed room. People shifted uncomfortably in their seats. Some of the men in the room refused to make eye contact with me. Had I said something wrong? In the forty minutes the older man had spent talking, and the additional forty minutes the other men took up agreeing profusely over the liberation of Black men, not one mention was made of how Black people as a whole find freedom.

 It was as if when they talked about Black men, one should automatically assume that meant all Black people. I looked at him, at first with shyness and then, increasingly, with defiance. He started to talk about how important “the sisters” were to the project of Black liberation, but by then, for me, it was too late. The point had already been made. And there my impostor syndrome kicked in again. Who did this Black girl think she was, questioning the vision and the leadership of this Black man?”

On Revolutionary Theory and Practice

“Political education is a tool for understanding the political contexts we live in. It helps individuals and groups analyze the social and economic trends, the policies and the ideologies influencing our lives—and use this information to develop strategies to change the rules and transform power.

It comes in different forms. Popular education, developed by Brazilian educator Paulo Freire, is a form of political education where the “educator” and the “participants” engage in learning together to reflect on critical issues facing their communities and then take action to address those issues. I once participated in a workshop that used popular-education methods to explain exploitation in capitalism, and—despite two bachelor’s degrees, in anthropology and sociology—my world completely opened up. I’d taken classes that explored Marxist theory but had never learned how it came to life through Third World liberation struggles, how poor people in Brazil and South Africa and Vietnam used those theories to change their governments, change the rules, and change their conditions. Had I learned about those theories in ways that actually applied to my life, my context, my experience, I probably would have analyzed and applied them differently. Because the information had little context that interested me, I could easily dismiss it (mostly because I didn’t totally understand it) and miss an opportunity to see my world a little more clearly.”

On Education

“In this country, education has often been denied to parts of the population—for instance, Black students in the post–Reconstruction and Jim Crow eras, or students today in underfinanced and abandoned public schools. Given our complicated history with education, some people involved in movements for change don’t like the idea of education or political education as a way to build a base. 

This form of anti-intellectualism—the tendency to avoid theory and study when building movements—is a response to the fact that not everyone has had an equal chance to learn. But education is still necessary.

For those of us who want to build a movement that can change our lives and the lives of the people we care about, we must ask ourselves: How can we use political education to help build the critical thinking skills and analysis of those with whom we are building a base? We cannot build a base or a movement without education.”

On Gramsci, Hegemony and Cultural Marxism

“Antonio Gramsci was an Italian Marxist philosopher and politician whose work offers some important ideas about the essential role of political education. Gramsci was born in 1891 in Sardinia, Italy. He co-founded the Italian Communist Party and was imprisoned by Benito Mussolini’s fascist regime. While he was in prison, Gramsci wrote Prison Notebooks, a collection of more than thirty notebooks and 3,000 pages of theory, analysis, and history.

Gramsci is best known for his theories of cultural hegemony, a fancy term for how the state and ruling class instill values that are gradually accepted as “common sense”—in other words, what we consider to be normal or the status quo. Gramsci studied how people come to consent to the status quo. According to Gramsci, there are two ways that the state can persuade its subjects to do what it wants: through force and violence, or through consent. While the state does not hesitate to use force in pursuit of its agenda, it also knows that force is not a sustainable option for getting its subjects to do its will. Instead, the state relies on consent to move its agenda, and the state manufactures consent through hegemony, or through making its values, rules, and logic the “common sense” of the masses. In that way, individuals willingly go along with the state’s program rather than having to be coerced through violence and force.

This doesn’t mean that individuals are not also coerced through violence and force, particularly when daring to transgress the hegemony of the state. American hegemony is white, male, Christian, and heterosexual. That which does not support that common sense is aggressively surveilled and policed, sometimes through the direct violence of the state but most often through cultural hegemony.”

“Hegemony, in Gramsci’s sense, is mostly developed and reinforced in the cultural realm, in ways that are largely invisible but carry great power and influence. For example, the notion that pink is for girls and blue is for boys is a pervasive idea reinforced throughout society. If you ever look for a toy or clothing for a newborn assigned either a male sex or male gender, you find a preponderance of blue items. If boys wear pink, they are sometimes ostracized. This binary of pink for girls and blue for boys helps maintain rigid gender roles, which in turn reinforce the power relationships between the sexes. Transgressions are not looked upon favorably, because to disrupt these rules would be to disrupt the distribution of power between the sexes. To dress a girl-identified child in blue or to dress a boy in pink causes consternation or even violence. These are powerful examples of hegemony at work—implicit rules that individuals in a society follow because they become common sense, “just the way things are” or “the way they’re supposed to be.”

Hegemony is important to understand because it informs how ideas are adopted, carried, and maintained. We can apply an understanding of hegemony to almost any social dynamic—racism, homophobia, heterosexism, sexism, ableism. We have to interrupt these toxic dynamics or they will eat away at our ability to build the kinds of movements that we need. But to interrupt these toxic dynamics requires that we figure out where the ideas come from in the first place.”

“We have to dig into the underlying ideas and make the hegemonic common sense visible to understand how we can create real unity and allyship in the women’s movement.”

“There are examples unique to this political moment. Since the rise of the Black Lives Matter movement, hegemonic ideas have slowed our progress. One piece of hegemonic common sense is the idea that Black men are the central focus of Black Lives Matter and should be elevated at all times. The media rushed to anoint a young gay Black man as the founder of the movement, even though that was not the case. This same sort of prioritizing of Black men happened all over the country: young Black men elevated to the role of Black Lives Matter leaders, regardless of the work they’d actually put in. Why were they assigned these roles without justification? I believe it’s because hegemony in the United States assigns leadership roles to men. In Black communities in particular, leadership is assigned to Black men even when Black women are carrying the work, designing the work, developing the strategy, and executing the strategy. Symbolism can often present as substance, yet they are not the same. This is a case where an unexamined hegemonic idea caused damage and distortion.”

“They felt left out not just because of the undue influence of the corporate class and the elite but also because they perceived that the wealth, access, and power promised to them were being distributed to women, people of color, and queer people. Trump’s campaign relied on the hegemonic idea of who constituted the “real” America, who were the protagonists of this country’s story and who were the villains. The protagonists were disaffected white people, both men and women, and the villains were people of color, with certain communities afforded their own unique piece of the story.”

“Stripping away political correctness can also be seen in the campaign’s promised return to the way things were—a time when things were more simple and certain groups of people knew their place. These ideas are called hegemonic because they are embedded and reproduced in our culture. ”

“Culture and policy affect and influence each other, so successful social movements must engage with both. This isn’t a new idea—the right has been clear about the relationship between culture and policy for a very long time. It is one of the reasons they have invested so heavily in the realm of ideas and behavior. Right-wing campaigns have studied how to culturally frame their ideas and values as common sense.

Culture has long been lauded as an arena for social change—and yet organizers often dismiss culture as the soft work, while policy is the real work. But policy change can’t happen without changing the complex web of ideas, values, and beliefs that undergird the status quo. When I was being trained as an organizer, culture work was believed to be for people who could not handle real organizing. Nobody would say it out loud, but there was a hierarchy—with community organizing on top and cultural organizing an afterthought.”

“To be fair, some cultural work did fall into this category. After all, posters and propaganda distributed among the coalition of the already willing weren’t going to produce change as much as reinforce true believers.

When culture change happens, it is because movements have infiltrated the cultural arena and penetrated the veil beyond which every person encounters explicit and implicit messages about what is right and what is wrong, what is normal and what is abnormal, who belongs and who does not. When social movements engage in this arena, they subvert common ideas and compete with or replace them with new ideas that challenge so-called common sense.

Culture also offers an opportunity for the values and hegemony of the opposition to be exposed and interrogated. The veteran organizer and communications strategist Karlos Gauna Schmieder wrote that “we must lay claim to civil society, and fight for space in all the places where knowledge is produced and cultured.” By laying claim to civil society, we assert that there is an alternative to the white, male, Christian, heterosexual “common sense” that is the status quo—and we work to produce new knowledge that not only reflects our vision for a new society but also includes a new vision for our relationships to one another and to the planet.

It is this challenge, to lay claim to civil society and to fight for space in all of the places where knowledge is produced and cultured, that movements must take on with vigor, just as right-wing movements have tried to lay claim to those places to build their movement. Culture, in this sense, is what makes right-wing movements strong and compelling. It is what lays the groundwork for effective, sustained policy change.”

On Political Education

“Political education helps us make visible that which had been made invisible. We cannot expect to unravel common sense about how the world functions if we don’t do that work. Political education helps us unearth our commonly held assumptions about the world that keep the same power dynamics functioning the way they always have. It supports our ability to dream of other worlds and to build them. And it gives us a clearer picture of all that we are up against.”

On Political Strategy

“Building a movement means building alliances. Who we align with at any given time says a lot about what we are trying to build together and who we think is necessary to build it.

The question of alliances can be confusing. We might confuse short-term alliances with long-term ones. Or confuse whether the people we ally with on a single campaign need to be aligned with us on everything. But here’s the truth of the matter: The people we need to build alliances with are not necessarily people we will agree with on everything or even most things. And yet having a strategy, a plan to win, asks us to do things differently than we’ve done them before.”

[…]

“Popular fronts are alliances that come together across a range of political beliefs for the purpose of achieving a short-to-intermediate-term goal, while united fronts are long-term alliances based on the highest level of political alignment. The phrases are often used interchangeably but shouldn’t be.”

“A lot of activist coalitions these days take the form of popular fronts and come together around achieving a short-to-intermediate-term objective. ”

San Francisco Rising Alliance 

“We spent time together doing organizing exchanges, studying political theory and social movements, learning from one another’s organizing models, and taking action together. After about five years, this alliance grew into an even stronger one, known as San Francisco Rising—an electoral organizing vehicle designed to build and win real power for working-class San Francisco.”

On Political Strategy

“United fronts are helpful in a lot of ways, including being really clear about who is on the team. In some ways, united fronts are what we are working toward, why we organize: to build bigger and bigger teams of people aligned in strategy, vision, and values. But if I had to guess, I’d say that the next period will be characterized by a greater number of popular fronts, and I think this is a good thing.

Popular fronts help you engage with the world as it is, while united fronts offer the possibility of what could be. United fronts allow us to build new alternatives, to test new ideas together, because there is already a high level of trust, political clarity, and political unity. Popular fronts, however, teach us to be nimble, to build relationships across difference for the sake of our survival.

Popular fronts are important tools for organizers today. They match today’s reality: that those of us who want to see a country and a world predicated on justice and equality and the ability to live well and with dignity are not well represented among those who are making decisions over our lives. We are a small proportion of people who currently serve in the U.S. Congress, a small percentage of people who are mayors and governors, and a small percentage of people moving resources on your city council or board of education.

We are not the majority of the decision makers, even though we likely represent the majority in terms of what we all want for our futures. It is tempting in these times to double down on those closest to you, who already share your vision, share your values, share your politics. But to get things done, we are tasked to find places of common ground, because that is how we can attain the political power we lack.

Many people are uncomfortable with popular fronts because they are afraid that working with their opponents will dilute their own politics. I agree that popular fronts without united fronts are dangerous for this exact reason—without an anchor, without clarity about what you stand for and who you are accountable to, it can be difficult to maintain integrity and clarity when working with people who do not share your values and vision.”

On Creating Black Lives Matter 

“When Patrisse, Opal, and I created Black Lives Matter, which would later become the Black Lives Matter Global Network, each of us also brought our own understanding of platforms, pedestals, and profiles. At that point, we’d all spent ten years as organizers and advocates for social justice. Our platforms and profiles, and perhaps even pedestals, come from the relationships we have in our communities, the networks we are a part of, and the work we’ve done for migrant rights, transit justice, racial justice, economic justice, and gender justice. For nearly a year, we operated silently, using our networks and our experiences as organizers to move people to action, to connect them to resources and analysis, and to engage those who were looking for a political home. Our work was to tell a new story of who Black people are and what we care about, in order to encourage and empower our communities to fight back against state-sanctioned violence—and that meant our primary role, initially, was to create the right spaces for that work and connect people who wanted to do the work of organizing for change.

But when a well-known mainstream civil rights organization began to claim our work as their own, while distorting the politics and the values behind it, we decided to take control of our own narrative and place ourselves more prominently in our own story.”

On Political Strategy

“When I was being trained as an organizer, social media forums were not yet as popular and as widely used as they are today. Debates over strategy, outcomes, or even grievances took place in the form of “open letters,” often circulated through email. At the time, that world seemed vast and important, but in retrospect—compared to the global reach of social media—it was very, very small.

Yet even in my small corner of the world, there were those who went from being relatively unknown grassroots organizers to people with more power and influence. And I saw how the movement could be ambivalent toward its most visible members when those individuals were seen as having gone too far beyond the movement’s own small imprint.”

About the National Domestic Workers Alliance

“When Ai-jen Poo, currently the director of the National Domestic Workers Alliance and co-director of Caring Across Generations, built a profile and a platform based on her success leading domestic workers to win the first ever Domestic Workers Bill of Rights in New York State, it caused quiet rumblings within the movement that grew her. People were unsure if it was a good thing that her fame had outgrown our small corner of the world. When Van Jones remade himself from an ultra-left revolutionary into a bipartisan reformer who landed in the Obama administration as the “green jobs czar,” the movement that grew him quickly disavowed him. Even when Patrisse Cullors began to grow a platform and a profile beyond the work I’d known her for at the Bus Riders Union, a project of the Labor/Community Strategy Center in Los Angeles, I received a call from one of her mentors questioning her ability to “lead the Black liberation movement.” In one breath, movements in development and movements in full swing can become antagonistic to those who break through barriers to enter the mainstream, where they can expose the movement’s ideas to new audiences.”

Throwing Shade at DeRay Mckesson

“DeRay Mckesson is often credited with launching the Black Lives Matter movement along with the work that Patrisse, Opal, and I initiated. However, Mckesson offers a sharp lesson on pedestals, platforms, and profiles—and why we need to be careful about assigning roles that are inaccurate and untrue.

Mckesson is someone I first met in Ferguson, Missouri, a full year after Patrisse, Opal, and I launched Black Lives Matter. How we met matters. Patrisse and Darnell Moore had organized a freedom ride whereby Black organizers, healers, lawyers, teachers, and journalists gathered from all over the country to make their way to Ferguson. I flew to St. Louis to help support another organization on the ground there. The freedom ride coincided with the time I spent in St. Louis, and as I was being given the rundown on the landscape during my first few days there, I was told about a young man named DeRay Mckesson.

Mckesson played the role of a community journalist on the ground in Ferguson. He and Johnetta Elzie had started a newsletter called This Is the Movement, and I remember Mckesson approaching me at a meeting convened by what has since become the Movement for Black Lives and asking if they could interview the three of us about Black Lives Matter. ”

“He was criticizing Black Lives Matter, which was, at that time, fending off attacks from right-wing operatives who were trying to pin on us the actions of activists who had begun to call themselves Black Lives Matter but had not been a part of the organizing efforts we were building through a network structure that had chapters. These activists had led a march where people in the crowd were chanting “Pigs in a blanket, fry ’em like bacon.” The news media had been stirred up like a beehive over the comments, and our team was working furiously to clarify that not everyone who identifies as Black Lives Matter is a part of the formal organization. ”

“I cannot tell you how many times I have been at events where someone will approach me to say that they know the other co-founder of Black Lives Matter, DeRay Mckesson. ”

“One could argue that it’s difficult to distinguish, particularly when there are so many people who identify with the principles and values of Black Lives Matter. But those of us who are involved in the movement know the difference—we know the difference because we work with one another. We share the same ecosystem. We know the difference between the Movement for Black Lives, and the wide range of organizations that comprise that alliance, and the larger movement for Black liberation.”

“I explained to her that while Mckesson was an activist, he was not a co-founder of Black Lives Matter.

I wish that these were innocent mistakes, but they’re not. Characterizing these misstatements as misunderstandings is gaslighting of the highest degree. Mckesson was a speaker at a Forbes magazine event, “Forbes 30 under 30,” and was listed in the program as the co-founder of Black Lives Matter, yet he wasn’t in a rush to correct the mistake—and certainly didn’t address the mistake in any comments he made that day. There was an outcry on social media, which forced Mckesson to contact the planners and have them change the description. But had there not been an outcry by people sick of watching the misleading dynamic, there wouldn’t have been any change.”

“Tarana Burke wrote an article about this misrepresentation in 2016 in The Root, a year before the #MeToo movement swept the country, criticizing Mckesson for allowing his role to be overstated. She cites a Vanity Fair “new establishment” leaders list on which Mckesson is No. 86 and accompanied by the following text:

Crowning achievement: Transforming a Twitter hashtag, #BlackLivesMatter, into a sustained, multi-year, national movement calling for the end of police killings of African-Americans. He may have lost a bid to become Baltimore’s next mayor, but he is the leader of a movement.”

“Some will be tempted to dismiss this recounting as petty, or selfish, or perhaps more a function of ego than the unity that is needed to accomplish the goals of a movement. The problem with that view is that conflicts and contradictions are also a part of movements, and ignoring them or just pleading for everyone to get along doesn’t deal with the issues—it buries them for the sake of comfort, at the expense of the clarity that is needed to really understand our ecosystem and the wide range of practices, politics, values, and degrees of accountability inside it.

Movements must grapple with the narration of our stories—particularly when we are not the ones telling them. Movements must grapple with their own boundaries, clarifying who falls within them and who falls outside them. Movements must be able to hold conflict with clarity.”

“When in his book Mckesson credits a relatively unknown UCLA professor with the creation of the #BlackLivesMatter hashtag, he doesn’t do so for the purpose of clarity—he does it to unseat and deliberately discredit the roles that Patrisse, Opal, and I, along with many, many others, have played in bringing people together to take action and engaging our communities around a new theory of who Black life encompasses and why that matters for our liberation. And in many ways he does it for the purpose of attempting to justify the ways in which he inflates his own role in Black Lives Matter.”

On the Movement for Black Lives

“I met Charlene Carruthers, the first national director of the Black Youth Project 100, when I was still the executive director at POWER in San Francisco. I had no idea that the Black Youth Project would establish itself as a leading organization in the Movement for Black Lives until nearly two years after they were founded. As we were launching Black Lives Matter as a series of online platforms, the Dream Defenders, with which I was unfamiliar, and Power U, with which I was very familiar, were taking over the Florida State Capitol, demanding an end to the Stand Your Ground law. I met the director of the Dream Defenders, at that time Phillip Agnew, at a Black Alliance for Just Immigration gathering in Miami in 2014, just a few months before Ferguson erupted. I remember being in Ferguson when a young activist asked me with distrust if I’d ever heard of the Organization for Black Struggle. I had, of course, not only heard of them but sat at the feet of a well-known leader of that organization, “Mama” Jamala Rogers. Our reality is shaped by where and when we enter at any given moment.”

“We have allowed Mckesson to overstate his role, influence, and impact on the Black Lives Matter movement because he is, in many ways, more palatable than the many people who helped to kick-start this iteration of the movement. He is well branded, with his trademark blue Patagonia vest that helps you identify him in a sea of people all claiming to represent Black Lives Matter. He is not controversial in the least, rarely pushing the public to move beyond deeply and widely held beliefs about power, leadership, and impact. He is edgy enough in his willingness to document protests and through that documentation claim that he played a larger role in them than he did, and yet complaisant enough to go along to get along. He does not make power uncomfortable.”

“We have to start crediting the work of Black women and stop handing that credit to Black men. We can wax poetic about how the movement belongs to no one and still interrogate why we credit Black men like DeRay Mckesson as its founder, or the founder of the organization that Patrisse, Opal, and I created.”

“It’s ahistorical and it serves to only perpetuate the erasure of Black women’s labor, strategy, and existence.”

“I used to be a cynic. As I was developing my worldview, developing my ideas, working in communities, I used to believe that there was no saving America, and I had no desire to lead America.

Over the last decade, that cynicism has transformed into a profound hope. It’s not the kind of hope that merely believes that there is something better out there somewhere, like the great land of Oz. It is a hope that is clear-eyed, a hope that propels me. It is the hope that organizers carry, a hope that understands that what we are up against is mighty and what we are up against will not go away quietly into the night just because we will it so.

No, it is a hope that knows that we have no other choice but to fight, to try to unlock the potential of real change.”

Black Futures Lab

“These days, I spend my time building new political projects, like the Black Futures Lab, an innovation and experimentation lab that tests new ways to build, drive, and transform Black power in the United States. At the BFL, we believe that Black people can be powerful in every aspect of our lives, and politics is no exception.

I was called to launch this organization after the 2016 presidential election. After three years of building the Black Lives Matter Global Network and fifteen years of grassroots organizing in Black communities, I felt strongly that our movement to ensure popular participation, justice, and equity needed relevant institutions that could respond to a legacy of racism and disenfranchisement while also proactively engaging politics as it is in order to create the conditions to win politics as we want it to be. ”

“For the majority of 2018, the Black Futures Lab worked to mobilize the largest data project to date focused on the lives of Black people. We called it the Black Census Project and set out to talk to as many Black people as possible about what we experience in the economy, in society, and in democracy. We also asked a fundamental question that is rarely asked of Black communities: What do you want in your future?

We talked to more than 30,000 Black people across the United States: Black people from different geographies, political ideologies, sexualities, and countries of origin, and Black people who were currently incarcerated and who were once incarcerated. A comprehensive survey such as this had not been conducted in more than 154 years. We partnered with more than forty Black-led organizations across the nation and trained more than one hundred Black organizers in the art and science of community organizing. We collected responses online and offline.”

On Morning Rituals

“Every morning when I wake up, I pray. I place my head against the floor and I thank my God for allowing me to see another day. I give thanks for the blessings that I have received in life, I ask for forgiveness for all of the ways in which I am not yet the person I want to be, and I ask for the continued blessings of life so that I can work to get closer to where I want to be. And in my prayers, I ask my God to remind me that the goal is not to get ahead of anyone else but instead to live my life in such a way that I remember we must make it to the other side together.”

 

Notes on Activity-Based Intelligence: Principles and Applications

ABI represents a fundamentally different way of doing intelligence analysis, one that is important in its own terms but that also offers the promise of creatively disrupting what is by now a pretty tired paradigm for thinking about the intelligence process.

ABI enables discovery as a core principle. Discovery—how to do it and what it means—is an exciting challenge, one that the intelligence community is only beginning to confront, and so this book is especially timely.
The prevailing intelligence paradigm is still very linear when the world is not: Set requirements, collect against those requirements, then analyze. Or as one wag put it: “Record, write, print, repeat.”

ABI disrupts that linear collection, exploitation, dissemination cycle of intelligence. It is focused on combining data—any data—where it is found. It does not prize data from secret sources but combines unstructured text, geospatial data, and sensor-collected intelligence. It marked an important passage in intelligence fusion and was the first manual evolution of “big data” analysis by real practitioners. ABI’s initial focus on counterterrorism impelled it to develop patterns of life on individuals by correlating their activities, or events and transactions in time and space.

ABI is based on four fundamental pillars that are distinctly different from other intelligence methods. The first is georeference to discover. Sometimes the only thing data has in common is time and location, but that can be enough to enable discovery of important correlations, not just reporting what happened. The second is sequence neutrality: We may find a critical puzzle piece before we know there is a puzzle. Think how often that occurs in daily life, when you don’t really realize you were puzzled by something until you see the answer.

The third principle is data neutrality. Data is data, and there is no bias toward classified secrets. ABI does not prize exquisite data from intelligence sources over other sources the way that the traditional paradigm does. The fourth principle comes full circle to the first: integrate before exploitation. The data is integrated in time and location so it can be discovered, but that integration happens before any analyst turns to the data.

ABI necessarily has pushed advances in dealing with “big data,” enabling technologies that automate manual workflows, thus letting analysts do what they do best. In particular, to be discoverable, the metadata, like time and location, have to be normalized. That requires techniques for filtering metadata and drawing correlations. It also requires new techniques for visualization, especially geospatial visualization, as well as tools for geotemporal pattern analysis. Automated activity extraction increases the volume of georeferenced data available for analysis.
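
To make the normalization idea concrete, here is a minimal sketch, not drawn from the textbook: it assumes two hypothetical source formats (a sensor feed carrying epoch milliseconds and decimal degrees, and a text-derived report carrying an ISO-8601 timestamp with degrees and decimal minutes) and maps both into one common record with UTC time and decimal-degree coordinates so they can be discovered and correlated together.

```python
# Illustrative sketch only: normalizing time and location metadata from two
# hypothetical sources into a common, comparable record. Field names are assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Observation:
    source: str         # e.g., "sensor_feed" or "text_report" (illustrative labels)
    utc_time: datetime  # normalized to UTC
    lat: float          # decimal degrees (WGS84 assumed)
    lon: float

def normalize_sensor_record(rec: dict) -> Observation:
    # Assumes the feed reports epoch milliseconds and decimal degrees.
    ts = datetime.fromtimestamp(rec["epoch_ms"] / 1000.0, tz=timezone.utc)
    return Observation("sensor_feed", ts, rec["lat"], rec["lon"])

def normalize_report_record(rec: dict) -> Observation:
    # Assumes an ISO-8601 timestamp with an offset, plus degrees and decimal minutes
    # (positive hemispheres only, for brevity).
    ts = datetime.fromisoformat(rec["timestamp"]).astimezone(timezone.utc)
    lat = rec["lat_deg"] + rec["lat_min"] / 60.0
    lon = rec["lon_deg"] + rec["lon_min"] / 60.0
    return Observation("text_report", ts, lat, lon)
```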

ABI is also enabled by new algorithms for correlation and fusion, including rapidly evolving advanced modeling and machine learning techniques.

ABI came of age in the fight against terror, but it is an intelligence method that can be extended to other problems—especially those that require identifying the bad guys among the good in areas like counternarcotics or maritime domain awareness. Beyond that, ABI’s emphasis on correlation instead of causation can disrupt all-too-comfortable assumptions. Sure, analysts will find lots of spurious correlations, but they will also find intriguing connections in interesting places, not full-blown warnings but, rather, hints about where to look and new connections to explore.

This textbook describes a revolutionary intelligence analysis methodology using approved, open-source, or commercial examples to introduce the student to the basic principles and applications of activity-based intelligence (ABI).

Preface

Writing about a new field, under the best of circumstances, is a difficult endeavor. This is doubly true when writing about the field of intelligence, which by its nature must operate in the shadows, hidden from the public view. Developments in intelligence, particularly in analytic tradecraft, are veiled in secrecy in order to protect sources and methods;

Activity-Based Intelligence: Principles and Applications is aimed at students of intelligence studies, entry-level analysts, technologists, and senior-level policy makers and executives who need a basic primer on this emergent series of methods. This text is authoritative in the sense that it documents, for the first time, an entire series of difficult concepts and processes used by analysts during the wars in Iraq and Afghanistan to great effect. It also summarizes basic enabling techniques, technologies, and methodologies that have become associated with ABI.

1 Introduction and Motivation

By mid 2014, the community was once again at a crossroads: the dawn of the fourth age of intelligence. This era is dominated by diverse threats, increasing change, and increasing rates of change. This change also includes an explosion of information technology and a convergence of telecommunications, location-aware services, and the Internet with the rise of global mobile computing. Tradecraft for intelligence integration and multi-INT dominates the intelligence profession. New analytic methods for “big data” analysis have been implemented to address the tremendous increase in the volume, velocity, and variety of data sources that must be rapidly and confidently integrated to understand increasingly dynamic and complex situations. Decision makers in an era of streaming real-time information are placing increasing demands on intelligence professionals to anticipate what may happen…against an increasing range of threats amidst an era of declining resources. This textbook is an introduction to the methods and techniques for this new age of intelligence. It leverages what we learned in the previous ages and introduces integrative approaches to information exploitation to improve decision advantage against emergent and evolving threats.

Dynamic Change and Diverse Threats

Transnational criminal organizations, terrorist groups, cyberactors, counterfeiters, and drug lords increasingly blend together; multipolar statecraft is being rapidly replaced by groupcraft.
The impact of this dynamism is dramatic. In the Cold War, intelligence focused on a single nation-state threat coming from a known location. During the Global War on Terror, the community aligned against a general class of threat coming from several known locations, albeit with ambiguous tactics and methods. The fourth age is characterized by increasingly asymmetric, unconventional, unpredictable, proliferating threats menacing and penetrating from multiple vectors, even from within. Gaining a strategic advantage against these diverse threats requires a new approach to collecting and analyzing information.

1.1.2 The Convergence of Technology and the Dawn of Big Data

Information processing and intelligence capabilities are becoming democratized.

In addition to rapidly proliferating intelligence collection capabilities, the fourth age of intelligence coincided with the introduction of the term “big data.” Big data refers to high-volume, high-velocity data that is difficult to process, store, and analyze with traditional information architectures. It is thought that the term was first used in an August 1999 article in Communications of the ACM [16]. The McKinsey Global Institute calls big data “the next frontier for innovation, competition, and productivity” [17]. New technologies like crowdsourcing, data fusion, machine learning, and natural language processing are being used in commercial, civil, and military applications to improve the value of existing data sets and to derive a competitive advantage. A major shift is under way from technologies that simply store and archive data to those that process it—including real-time processing of multiple “streams” of information.

1.1.3 Multi-INT Tradecraft: Visualization, Statistics, and Spatiotemporal Analysis

Today, the most powerful computational techniques are being developed for business intelligence, high-speed stock trading, and commercial retailing. These are analytic techniques—which intelligence professionals call their “tradecraft”—developed in tandem with the “big data” information explosion. They differ from legacy analysis techniques because they are visual, statistical, and spatial.

The emerging field of visual analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [20, p. 4]. It recognizes that humans are predisposed to recognize trends and patterns when they are presented using consistent and creative cognitive and perceptual techniques. Technological advances like high-resolution digital displays, powerful graphics cards and graphics processing units, and interactive visualization and human-machine interfaces have changed the way scientists and engineers analyze data. These methods include three-dimensional visualizations, clustering algorithms, data filtering techniques, and the use of color, shape, and motion to rapidly convey large volumes of information.
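
As a rough, hypothetical illustration of this kind of visual analysis (not an example from the book), the sketch below plots georeferenced events on a longitude/latitude scatter and uses color to encode hour of day; matplotlib stands in for the interactive tools described above, and the data fields are assumed.

```python
# Hypothetical visual-analytics sketch: a geospatial scatter of events,
# colored by hour of day. Data fields (.lon, .lat, .utc_time) are assumptions.
import matplotlib.pyplot as plt

def plot_activity(observations):
    lons = [o.lon for o in observations]
    lats = [o.lat for o in observations]
    hours = [o.utc_time.hour for o in observations]
    sc = plt.scatter(lons, lats, c=hours, cmap="viridis", alpha=0.6)
    plt.colorbar(sc, label="hour of day (UTC)")
    plt.xlabel("longitude")
    plt.ylabel("latitude")
    plt.title("Georeferenced activity colored by time of day")
    plt.show()
```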

Next came the fusion of visualization techniques with statistical methods.

Analysts introduced methods for statistical storytelling, where mathematical functions are applied through a series of steps to describe interesting trends, eliminate infeasible alternatives, and discover anomalies so that decision makers can visualize and understand a complex decision space quickly and easily.

Geographic information systems (GISs) and the science of geoinformatics had been used since the late 1960s to display spatial information as maps and charts.

Increasingly, software tools like JMP, Tableau, GeoIQ, MapLarge, and ESRI ArcGIS have included advanced spatial and temporal analysis tools that advance the science of data analysis. The ability to analyze trends and patterns over space and time is called spatiotemporal analysis.

1.1.4 The Need for a New Methodology
The fourth age of intelligence is characterized by the changing nature of threats, the convergence in information technology, and the availability of multi-INT analytic tools—three drivers that create the conditions necessary for a revolution in intelligence tradecraft. This class of methods must address nonstate actors, leverage technological advances, and shift the focus of intelligence from reporting the past to anticipating the future. We refer to this revolution as ABI, a method that Greg Treverton, former RAND analyst and National Intelligence Council chairman, has called the most important intelligence analytic method coming out of the wars in Iraq and Afghanistan.

1.2 Introducing ABI
Intelligence analysts deployed to Iraq and Afghanistan to hunt down terrorists found that traditional intelligence methods were ill-suited for the mission. The traditional intelligence cycle begins with the target in mind (Figure 1.3), but terrorists were usually indistinguishable from other people around them. The analysts—digital natives savvy in visual analytic tools—began by integrating already collected data in a geographic area. Often, the only common metadata between two data sets was time and location so they applied spatiotemporal analytic methods to develop trends and patterns from large, diverse data sets. These data sets described activities: events and transactions conducted by entities (people or vehicles) in an area.

Sometimes, the analysts would discover a series of unusual events that correlated across data sets. When integrated, it represented the pattern of life of an entity. The entity sometimes became a target. The subsequent collection and analysis on this entity, the resolution of identity, and the anticipation of future activities based on the pattern of life produced a new range of intelligence products that improved the effectiveness of the counterterrorism mission. This is how ABI got its name.
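
As a toy illustration (mine, not the authors') of what developing a pattern of life from correlated activities might look like computationally, the sketch below buckets an entity proxy's georeferenced events by hour of day and a coarse location cell; the event fields are hypothetical.

```python
# Illustrative only: a crude "pattern of life" for an entity proxy (e.g., a vehicle ID),
# built by counting recurring (hour of day, location cell) combinations.
from collections import Counter, defaultdict

def pattern_of_life(events):
    # events: iterable of dicts with 'proxy_id', 'utc_time' (datetime), 'lat', 'lon'.
    patterns = defaultdict(Counter)
    for ev in events:
        # Round coordinates so recurring locations fall into the same rough cell.
        cell = (round(ev["lat"], 2), round(ev["lon"], 2))
        patterns[ev["proxy_id"]][(ev["utc_time"].hour, cell)] += 1
    return patterns  # proxy_id -> Counter of (hour, cell) -> frequency
```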

ABI is a new methodology—a series of analytic methods and enabling technologies—based on the following four empirically derived principles, which are distinct from traditional intelligence methods.
• Georeference to discover: Focusing on spatially and temporally correlating multi-INT data to discover key events, trends, and patterns.
• Data neutrality: Prizing all data, regardless of the source, for analysis.
• Sequence neutrality: Realizing that sometimes the answer arrives before you ask the question.
• Integration before exploitation: Correlating data as early as possible, rather than relying on vetted, finished intelligence products, because seemingly insignificant events in a single INT may be important when integrated across multi-INT.

While various intelligence agencies, working groups, and government bodies have offered numerous definitions for ABI, we define it as “a set of spatiotemporal analytic methods to discover correlations, resolve unknowns, understand networks, develop knowledge, and drive collection using diverse multi-INT data sets.”

ABI’s most significant contribution to the fourth age of intelligence is a shift in focus of the intelligence process from reporting the known to discovery of the unknown.

1.2.1 The Primacy of Location
When you think about it, everything and everybody has to be somewhere.
—The Honorable James R. Clapper, 2004

The primacy of location is the central principle behind the new intelligence methodology ABI. Since everything happens somewhere, all activities, events, entities, and relationships have an inherent spatial and temporal component whether it is known a priori or not.

Hard problems cannot usually be solved with a single data set. The ability to reference multiple data sets across multiple intelligence domains—multi-INT—is a key enabler to resolve entities that lack a signature in any single domain of collection. In some cases, the only common metadata between two data sets is location and time—allowing for location-based correlation of the observations in each data set where the strengths of one compensate for the weaknesses in another.
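
One way to picture location-based correlation is a brute-force space-time join: pair observations from two data sets whenever they fall within a chosen radius and time window. The sketch below is illustrative rather than prescriptive; the thresholds are arbitrary and the record fields follow the hypothetical Observation layout sketched earlier.

```python
# Hedged sketch of location-based correlation across two data sets whose only shared
# metadata is time and position. Thresholds are arbitrary illustrations.
from datetime import timedelta
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance in kilometers between two lat/lon points.
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

def correlate(set_a, set_b, radius_km=0.5, window=timedelta(minutes=15)):
    # Brute-force pairing; a real system would use a spatial/temporal index instead.
    matches = []
    for a in set_a:
        for b in set_b:
            if (abs(a.utc_time - b.utc_time) <= window
                    and haversine_km(a.lat, a.lon, b.lat, b.lon) <= radius_km):
                matches.append((a, b))
    return matches
```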

…the tipping point for the fourth age was the ability and impetus to integrate the concept of location into visual and statistical analysis of large, complex data sets. This was the key breakthrough for the revolution that we call ABI.

1.2.2 From Target-Based to Activity-Based
The paradigm of intelligence and intelligence analysis has changed, driven primarily by the shift in targets from the primacy of nation-states to transnational groups or irregular forces.
—Greg Treverton, RAND

A target can be a physical location like an airfield or a missile silo. Alternatively, it can be an electronic target, like a specific radio-frequency emission or a telephone number. Targets can be individuals, such as spies who you want to recruit. Targets might be objects like specific ships, trucks, or satellites. In the cyberdomain, a target might be an e-mail address, an Internet protocol (IP) address, or even a specific device. The target is the subject of the intelligence question. The linear cycle of planning and direction, collection, processing and exploitation, analysis and production, and dissemination begins and ends with the target in mind.
The term “activity-based” is the antithesis of the “target-based” intelligence model. This book describes methods and techniques for intelligence analysis when the target or the target’s characteristics are not known a priori. In ABI, the target is the output of a deductive analytic process that begins with unresolved, ambiguous entities and a data landscape dominated by events and transactions.

Targets in traditional intelligence are well-defined, predictable adversaries with a known doctrine. If the enemy has a known doctrine, all you have to do is steal the manual and decode it, and you know what they will do.

In the ABI approach, instead of scheduled collection, incidental collection must be used to gather many (possibly irrelevant) events, transactions, and observations across multiple domains. In contrast to the predictable, linear, inductive approach, analysts apply deductive reasoning to eliminate what the answer is not and narrow the problem space to feasible alternatives. When the target blends in with the surroundings, a durable, “sense-able” signature may not be discernable. Proxies for the entity, such as a communications device, a vehicle, a credit card, or a pattern of actions, are used to infer patterns of life from observations of activities and transactions.

Informal collaboration and information sharing evolved as geospatial analysis tools became more democratized and distributed. Analysts share their observations—layered as dots on a map—and tell spatial stories about entities, their activities, their transactions, and their networks.

While traditional intelligence has long implemented techniques for researching, monitoring, and searching, the primary focus of ABI methods is on discovery of the unknown, which represents the hardest class of intelligence problems.

1.2.3 Shifting the Focus to Discovery
All truths are easy to understand once they are discovered; the point is to discover them.
—Galileo Galilei

The lower left corner of Figure 1.4 represents known-knowns: monitoring. Here the targets, locations, behaviors, and signatures are all known, and the focus of the analytic operation is to monitor these locations for change and alert when there is activity.

The next quadrant of interest is in the upper left of Figure 1.4. Here, the behaviors and signatures are unknown, but the targets or locations are known.

The research task builds deep contextual analytic knowledge to enhance understanding of known locations and targets, which can then be used to identify more targets for monitoring and enhance the ability to provide warning.

The lower right quadrant of Figure 1.4, search, requires looking for a known signature or behavior in an unknown location. Searching previously unexamined areas for a known type of equipment is search. For obvious reasons, this laborious task is universally loathed by analysts.

The “new” function and the focus of ABI methods is the upper right. You don’t know what you’re looking for, and you don’t know where to find it. This has always been the hardest problem for intelligence analysts, and we characterize it as “new” only because the methods, tools, policies, and tradecraft have only recently evolved to the point where discovery is possible outside of simple serendipity.

Discovery is a data-driven process. Analysts, ideally without bias, explore data sets to detect anomalies, characterize patterns, investigate interesting threads, evaluate trends, eliminate the impossible, and formulate hypotheses.
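One simple way such data-driven discovery might look in practice is sketched below: flagging locations whose event counts deviate sharply from the norm. The counts and the two-sigma threshold are hypothetical.

```python
# Illustrative only: flag named areas whose notional event counts are
# statistical outliers and therefore candidates for further investigation.
from statistics import mean, stdev

event_counts = {            # notional events observed per named area
    "area_1": 12, "area_2": 9, "area_3": 11, "area_4": 47, "area_5": 10,
}

mu = mean(event_counts.values())
sigma = stdev(event_counts.values())

for area, count in event_counts.items():
    z = (count - mu) / sigma
    if abs(z) > 2:          # flag counts more than two standard deviations out
        print(f"{area}: {count} events (z = {z:.1f}) -- investigate further")
```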

Typically, analysts who excel at discovery are, at heart, detectives. They exhibit unusual curiosity, creativity, and critical thinking skills, and they tend to be rule breakers. They get bored easily when tasked in the other three quadrants. New tools are easy for them to use. Spatial thinking, statistical analysis, hypothesis generation, and simulation make sense to them. This new generation of analysts—largely composed of millennials hired after 9/11—catalyzed the evolution of ABI methods because they were placed in an environment that required a different approach. Frankly, their lack of experience with the traditional intelligence process created an environment where something new and different was possible.

1.2.4 Discovery Versus Search

Are we saying that hunting terrorists is the same as house shopping? Of course not, but the processes have their similarities. Location (and spatial analysis) is central to the search, discovery, research, and monitoring process. Browsing metadata helps triage information and focus the results. The problem constantly changes as new entities appear or disappear. Resources are limited and it’s impossible to action every lead…

1.2.6 Summary: The Key Attributes of ABI
ABI is a new tradecraft, focused on discovering the unknown, that is well-suited for advanced multi-INT analysis of nontraditional threats in a “big data” environment.

1.3 Organization of this Textbook
This textbook is directed at entry-level intelligence professionals, practicing engineers, and research scientists familiar with general principles of intelligence and analysis. It takes a unique perspective on the emerging methods and techniques of ABI with a specific focus on spatiotemporal analytics and the associated technology enablers.

The seminal concept of “pattern of life” is introduced in Chapter 8, which exposes the nuances of “pattern of life” versus pattern analysis and describes how both concepts can be used to understand complex data and draw conclusions using the activities and transactions of entities. The final key concept, incidental collection, is the subject of Chapter 9. Incidental collection is a core mindset shift from target-based point collection to wide-area, activity-based surveillance.

A unique feature of this textbook is its focus on applications from the public domain.

1.4 Disclaimer About Sources and Methods
Protecting sources and methods is the paramount and most sacred duty of intelligence professionals. This central tenet will be carried throughout this book. The development of ABI was catalyzed by advances in commercial data management and analytics technology applied to unique sources of data. Practitioners deployed to the field have the benefit of on-the-job training and experience working with diverse and difficult data sets. A primary function of this textbook is to normalize understanding across the community and inform emerging intelligence professionals of the latest advances in data analysis and visual analytics.

All of the application examples in this textbook are derived from information entirely in the public domain. Some of these examples have corollaries to intelligence operations and intelligence functions. Some are merely interesting applications of the basic principles of ABI to other fields where multisource correlation, patterns of life, and anticipatory analytics are commonplace. Increasingly, commercial companies are using similar “big data analytics” to understand patterns, resolve unknowns, and anticipate what may happen.

1.6 Suggested Readings

Readers unfamiliar with intelligence analysis, the disciplines of intelligence, and the U.S. intelligence community are encouraged to review the following texts before delving deep into the world of ABI:
• Lowenthal, Mark M., Intelligence: From Secrets to Policy. Lowenthal’s legendary text is the premier introduction to the U.S. intelligence community, the primary principles of intelligence, and the intelligence relationship to policy. The frequently updated text has been expanded to include Lowenthal’s running commentary on various policy issues including the Obama administration, intelligence reform, and Wikileaks. Lowenthal, once the assistant director of analysis at the CIA and vice chairman of Evaluation for the National Intelligence Council, is the ideal intellectual mentor for an early intelligence professional.
• George, Roger Z., and James B. Bruce, Analyzing Intelligence: Origins, Obstacles, and Innovations. This excellent introductory text by two Georgetown University professors is the most comprehensive text on analysis currently in print. It provides an overview of analysis tradecraft and how analysis is used to produce intelligence, with a focus on all-source intelligence.
• Heuer, Richards J., The Psychology of Intelligence Analysis. This book is required reading for intelligence analysts and documents how analysts think. It introduces the method of analysis of competing hypotheses (ACH) and deductive reasoning, a core principle of ABI.
• Heuer, Richards J., and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis. An extension of Heuer’s previous work, this is an excellent handbook of techniques for all-source analysts. Their techniques pair well with the spatiotemporal analytic methods discussed in this text.
• Waltz, Edward, Quantitative Intelligence Analysis: Applied Analytic Models, Simulations, and Games. Waltz’s highly detailed book describes modern modeling techniques for intelligence analysis. It is an essential companion text to many of the analytic methods described in Chapters 12–16.

2
ABI History and Origins

Over the past 15 years, ABI has entered the intelligence vernacular. Former NGA Director Letitia Long said it is “a new foundation for intelligence analysis, as basic and as important as photographic interpretation and imagery analysis became during World War II.”

2.1 Wartime Beginnings
ABI methods have been compared to many other disciplines including submarine hunting and policing, but the modern concepts of ABI trace their roots to the Global War on Terror. According to Long, “Special operations led the development of GEOINT-based multi-INT fusion techniques on which ABI is founded.”

2.2 OUSD(I) Studies and the Origin of the Term ABI
During the summer of 2008 the technical collection and analysis (TCA) branch within the OUSD(I) determined the need for a document defining “persistent surveillance” in support of irregular warfare. The initial concept was a “pamphlet” that would briefly define persistence and expose the reader to the various surveillance concepts that supported this persistence. U.S. Central Command, the combatant command with assigned responsibility throughout the Middle East, expressed interest in using the pamphlet as a training aid and as a means to get its components to use the same vocabulary.

ABI was formally defined by the now widely circulated “USD(I) definition”:
A discipline of intelligence, where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, a population, or an area of interest

There are several key elements of this definition. First, OUSD(I) sought to define ABI as a separate discipline of intelligence like HUMINT or SIGINT: SIGINT is to the communications domain as activity-INT is to the human domain. Recognizing that the INTs are defined by an act of Congress, this definition was later softened into a “method” or “methodology.”
The definition recognizes that ABI is focused on activity (composed of events and transactions, further explored in Chapter 4) rather than a specific target. It introduces the term entity, but also recognizes that analysis of the human domain could include populations or areas, as recognized by the related study called “human geography.”

Finally, the definition makes note of analysis and subsequent collection, also sometimes referred to as analysis driving collection. This emphasizes the importance of analysis over collection—a dramatic shift from the traditional collection-focused mindset of the intelligence community. To underscore the shift in focus from targets to entities, the paper introduced the topic of “human domain analytics.”

2.3 Human Domain Analytics
Human domain analytics is the global understanding of anything associated with people. The human domain provides the context and understanding of the activities and transactions necessary to resolve entities in the ABI method.

• The first is biographical information, or “who they are.” This includes information directly associated with an individual.
• The second data type is activities, or “what they do.” This data category associates specific actions to an entity.
• The third data category is relational, or “who they know,” the entities’ family, friends, and associates.
• The final data category is contextual (metaknowledge), which is information about the context or the environment in which the entity is found.

Examples include most of the information found within sociocultural and human terrain studies. Taken in total, these data categories support ABI analysts in analyzing entities, resolving the identities of unknown entities, and placing entities’ actions in a social context.

2.5 ABI-Enabling Technology Accelerates

In December 2012, BAE Systems was awarded a multiyear $60-million contract to provide “ABI systems, tools, and support for mission priorities” under the agency’s total application services for enterprise requirements (TASER) contract [13]. While these technology developments would bring new data sources to analysts, they also created confusion as the tools became conflated with the analytical methodology they were designed around. The phrase “ABI tool” would be attached to M111 and its successor program awarded under TASER.

2.6 Evolution of the Terminology
The term ABI and the four pillars were first mentioned to the unclassified community during an educational session hosted by the U.S. Geospatial Intelligence Foundation (USGIF) at the GEOINT Symposium in 2010, but the term was introduced broadly in comments by Director of National Intelligence (DNI) Clapper and NGA Director Long in their remarks at the 2012 symposium [14, 15].

As wider intelligence community efforts to adapt ABI to multiple missions took shape, the definition of ABI became generalized and evolved to a broader perspective as shown in Table 2.1. NGA’s Gauthier described it as “a set of methods for discovering patterns of behavior by correlating activity data at network speed and enormous scale” [16, p. 1]. It was also colloquially described by Gauthier and Long as, “finding things that don’t want to be found.”

2.7 Summary
Long described ABI as “the most important intelligence methodology of the first quarter of the 21st century,” noting the convergence of cloud computing technology, advanced tracking algorithms, inexpensive data storage, and revolutionary tradecraft that drove adoption of the methods [1].

3
Discovering the Pillars of ABI
The basic principles of ABI have been categorized as four fundamental “pillars.” These simple but powerful principles were developed by practitioners by cross-fertilizing best practices from other disciplines and applying them to intelligence problems in the field. They have evolved and solidified over the past five years as a community of interest developed around the topic. This chapter describes the origin and practice of the four pillars: georeference to discover, data neutrality, sequence neutrality, and integration before exploitation.

3.1 The First Day of a Different War
The U.S. intelligence community, and most of the broader U.S. and Western national security apparatus, was created to fight—and is expertly tuned for—the bipolar, state-centric conflict of the Cold War. Large states with vast bureaucracies and militaries molded in their image dominated the geopolitical landscape.

3.2 Georeference to Discover: “Everything Happens Somewhere”
Georeference to discover is the foundational pillar of ABI. It was derived from the simplest of notions but proves that simple concepts have tremendous power in their application.

Where activity happens—the spatial component—is the one aspect of these diverse data that is (potentially) common. The advent of the global positioning system (GPS)—and, perhaps most importantly for the commercial realm, the deactivation of a mode called “selective availability”—has moved the precise capture of “where things happen” from the realm of science fiction to the reality of day-to-day living. With technological advances, location has become knowable.

3.2.1 First-Degree Direct Georeference
The most straightforward of these is direct georeferencing, in which machine-readable geospatial content in the form of a coordinate system or known cadastral system is present in the metadata of a piece of information. An example is the metadata (simply, “data about data”) of a photo: a GPS-enabled handheld camera or cell phone, for example, might record a series of GPS coordinates in degrees-minutes-seconds format.
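A minimal sketch of this kind of direct georeference is shown below: converting degrees-minutes-seconds GPS metadata, as a camera might embed it, into decimal degrees ready for a GIS. The metadata dictionary stands in for real EXIF fields and is an assumption for illustration.

```python
# A sketch of first-degree direct georeferencing: converting GPS metadata in
# degrees-minutes-seconds into decimal degrees. The dict below is a stand-in
# for real EXIF metadata, not a real record.
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

photo_metadata = {                      # hypothetical photo metadata record
    "GPSLatitude": (38, 53, 23.0), "GPSLatitudeRef": "N",
    "GPSLongitude": (77, 0, 27.0), "GPSLongitudeRef": "W",
    "DateTimeOriginal": "2014:06:01 14:32:05",
}

lat = dms_to_decimal(*photo_metadata["GPSLatitude"], photo_metadata["GPSLatitudeRef"])
lon = dms_to_decimal(*photo_metadata["GPSLongitude"], photo_metadata["GPSLongitudeRef"])
print(f"Direct georeference: ({lat:.5f}, {lon:.5f}) at {photo_metadata['DateTimeOriginal']}")
```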

3.2.2 First-Degree Indirect Georeference
By contrast, indirect georeferencing contains spatial information in non-machine-readable content, not ready for ingestion into a GIS.

An example of a metadata-based georeference in the same context would be a biographical profile of John Smith with the metadata tag “RESIDENCE: NOME, ALASKA.”
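A sketch of how such an indirect georeference might be resolved is shown below, matching the free-text residence tag against a small gazetteer. The gazetteer and record are illustrative; a real system would use a far larger place-name database and fuzzier matching.

```python
# A sketch of first-degree indirect georeferencing: resolving a free-text
# residence tag against a gazetteer to produce coordinates.
GAZETTEER = {
    "NOME, ALASKA": (64.5011, -165.4064),
    "JUNEAU, ALASKA": (58.3019, -134.4197),
}

def georeference_tag(record: dict):
    """Extract a place name from a 'RESIDENCE' tag and look it up."""
    place = record.get("RESIDENCE", "").strip().upper()
    return GAZETTEER.get(place)

profile = {"NAME": "John Smith", "RESIDENCE": "Nome, Alaska"}
coords = georeference_tag(profile)
print(f"Indirect georeference for {profile['NAME']}: {coords}")
```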

3.2.3 Second-Degree Georeference
Further down the georeferencing rabbit hole is the concept of a second-degree georeference. This is a special case of georeferencing where the content and metadata contain no first-degree georeferences, but analysis of the data in its context can provide a georeference.

For example, a poem about a beautiful summer day might not contain any first-degree georeferences, as it describes only a generic location. By reconsidering the poem as the “event” of “poem composition,” a georeference can be derived. Because the poet lived at a known location, and the date of the poem’s composition is also known, the “poem composition event” occurred at “the poet’s house” on “the date of composition,” creating a second-degree georeference for a poem [5].
The concept of second-degree georeferencing is how we solve the vexing problem of data that does not appear, at first glance, to be “georeferenceable.” The above example shows how, by deriving events from data, we can identify activity that is more easily georeferenceable. This is one of the strongest responses to critics of the ABI methodology who argue that much, if not most, data does not lend itself to the georeferencing and overall data-conditioning process.
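The derivation can be made mechanical, as in the hypothetical sketch below, which joins a document that carries no coordinates with contextual knowledge about its author to produce a spatiotemporal "composition event."

```python
# A sketch of a second-degree georeference: the content itself contains no
# coordinates, but treating it as a "composition event" and joining it with
# contextual knowledge (where the author lived, when it was written) yields a
# spatiotemporal record. All names and values here are hypothetical.
known_residences = {"poet_a": (55.9533, -3.1883)}   # author -> (lat, lon)

document = {"author": "poet_a", "date_composed": "1820-06-14", "text": "..."}

composition_event = {
    "event_type": "document_composition",
    "entity": document["author"],
    "location": known_residences[document["author"]],
    "time": document["date_composed"],
}
print(composition_event)
```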

3.3 Discover to Georeference Versus Georeference to Discover
It is also important to contrast the philosophy of georeference to discover with the more traditional mindset of discover to georeference. Discover to georeference is a concept often not given a name but aptly describes the more traditional approach to geographically referencing information. This traditional process, based on keyword, relational, or Boolean-type queries, is illustrated in Figure 3.2. Often, the georeferencing that occurs in this process is manual, done via copy-paste from free text documents accessible to analysts.

With discover to georeference, the analyst’s first thought, often unconscious, is, “This is an interesting piece of information; I should find out where it happened.” It can also be described as “pin-mapping,” based on the process of placing pins in a map to describe events of interest. The key difference is the a priori decision that a given event is relevant or irrelevant before the process of georeferencing begins.

Using the pillar of georeference to discover, the act of georeferencing is an integral part of the act of processing data, through either first- or second-degree attributes. It is the first step of the ABI analytic process and begins before the analyst ever looks at the data.

The act of georeferencing creates an inherently spatial and temporal data environment in which ABI analysts spend the bulk of their time, identifying spatial and temporal co-occurrences and examining said co-occurrences to identify correlations. This environment naturally leads the analyst to seek more sources of data to improve correlations and subsequent discovery.

3.4 Data Neutrality: Seeding the Multi-INT Spatial Data Environment

Data neutrality is the premise that all data may be relevant regardless of the source from which it was obtained. This is perhaps the most overlooked of the pillars of ABI because it is so simple as to be obvious. Some may dismiss this pillar as not important to the overall process of ABI, but it is central to the need to break down the cultural and institutional barriers between INT-specific “stovepipes” and consider all possible sources for understanding entities and their activities.

As the pillars were being developed, the practitioners who helped to write much of ABI’s lexicon spoke of data neutrality as a goal instead of a consequence. The importance of this distinction will be explored below, as it relates to the first pillar of georeference to discover.

Imagine again you are the analyst described in the prior section. In front of you is a spatial data environment in your GIS consisting of data obtained from many different sources of information, everything from reports from the lowest level of foot patrols to data collected from exquisite national assets. This data is represented as vectors: dots and lines (events and transactions) on your map. As you begin to correlate data via spatial and temporal attributes, you realize that data is data, and no one data source is necessarily favored over the others. The second pillar of ABI serves to then reinforce the importance of the first and naturally follows as a logical consequence.

Given that the act of data correlation is a core function of ABI, the conclusion that there can never be “too much” data is inevitable. “Too much,” in the inexact terms of an analyst, often means “more than I have the time, inclination, or capacity to understand,” but more often it means “data that is not in a format conducive to examination in a single environment.” This becomes an important feature in understanding the data discovery mindset.

As the density of data increases, the necessity for smart technology for attribute correlation becomes a key component of the technical aspects of ABI. This challenge is exacerbated by the fact that some data sources include inherent uncertainty and must be represented by fuzzy boundaries, confidence bands, spatial polygons, ellipses, or circles representing circular error probability (CEP).
The spatial and temporal environment provides two of the three primary data filters for the ABI methodology: correlation on location and correlation in time. The third filter, attribute-based correlation, becomes important to rule out false-positive correlations that occur solely on the basis of space and time.
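A simple sketch of correlation under positional uncertainty is shown below: two observations, each carrying a CEP-style radius, are treated as spatially consistent only if their uncertainty circles overlap. The coordinates and radii are illustrative assumptions.

```python
# A sketch of correlating observations that carry positional uncertainty,
# expressed here as a circular error probable (CEP) radius in meters. Two
# observations are treated as spatially consistent if their circles overlap.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6_371_000 * asin(sqrt(h))

def consistent(obs_a, obs_b):
    """True if the two uncertainty circles overlap."""
    d = distance_m(obs_a["lat"], obs_a["lon"], obs_b["lat"], obs_b["lon"])
    return d <= obs_a["cep_m"] + obs_b["cep_m"]

a = {"lat": 34.0522, "lon": -118.2437, "cep_m": 150}
b = {"lat": 34.0530, "lon": -118.2450, "cep_m": 100}
print("Spatially consistent" if consistent(a, b) else "Not consistent")
```

A spatially consistent pair would still need attribute-based checks before being treated as a true correlation.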

The nature of many data sources almost always requires human judgment regarding correlation across multiple domains or sources of information. Machine learning continues to struggle with these judgments, especially because it is difficult to describe the intangible context in which potential correlations occur.

Part of the importance of the data neutrality mindset is realizing the unique perspective that analysts bring to data analysis; moreover, this perspective cannot be easily realized in one type of analyst but is at its core the product of different perspectives collaborating on a similar problem set. This syncretic approach to analysis was central to the revolution of ABI, with technical analysts from two distinct intelligence disciplines collaborating and bringing their unique perspectives to their counterparts’ data sets.

3.5 Integration Before Exploitation: From Correlation to Discovery
The traditional intelligence cycle is a process often referred to as tasking, collection, processing, exploitation, and dissemination (TCPED).

TCPED is a familiar concept to intelligence professionals working in various technical disciplines who are responsible for making sense out of data in domains such as SIGINT and IMINT. Although often depicted as a cycle as shown in Figure 3.4, the process is also described as linear.

From a philosophical standpoint, TCPED makes several key assumptions:

• The ability to collect data is the scarcest resource, which implies that tasking is the most critical part of the data exploitation process. The first step of the process begins with tasking against a target, which assumes the target is known a priori.
• The most efficient way to derive knowledge in a single domain is through focused analysis of data, generally to the exclusion of specific contextual data.
• All data that is collected should be exploited and disseminated.

The limiting factor for CORONA missions was the number of images that could be taken by the satellite. In this model, tasking becomes supremely important: There are many more potential targets than can be imaged on a single roll of film. Because satellite imaging in the CORONA era was a constrained exercise, processes were put in place to vet, validate, and rank-order tasking through an elaborate bureaucracy.

The other reality of phased exploitation is that it was a product of an adversary with signature and doctrine that, while not necessarily known, could be deduced or inferred over repeated observations. Large, conventional, doctrine-driven adversaries like the Soviet Union not only had large signatures, but their observable activities played out over a time scale that was easily captured by infrequent, scheduled revisit with satellites like CORONA. Although they developed advanced denial and deception techniques employed against imaging systems, both airborne and national, their large, observable activities were hard to hide.

But where is integration in this process? There is no “I,” big or small, in TCPED. Rather, integration was a subsequent step conducted very often by completely different analysts.

In today’s era of reduced observable signatures, fleeting enemies, and rapidly changing threat environments, integration after exploitation is seldom timely enough to provide decision advantage. The traditional concept of integration after exploitation, where finished reporting is only released when it exceeds the single-INT reporting threshold is shown in Figure 3.6. This approach not only suffers from a lack of timeliness but also is limited by the fact that only information deemed significant within a single-INT domain (without the contextual information provided by other INTs) is available for integration. For this reason, the single-INT workflows are often derisively referred to by intelligence professionals as “stovepipes” or as “stovepiped exploitation”.

While “raw” is a loaded term with specific meanings in certain disciplines and collection modalities, the theory is the same: The data you find yourself georeferencing, from any source you can get your hands on, is very often data that has not made it into the formal intelligence report preparation and dissemination process. It is a very different kind of data, one for which the existing processes of TCPED and the intelligence cycle are inexactly tuned. Much of this information is well below the single-INT reporting threshold in Figure 3.6, but data neutrality tells us that while the individual pieces of information may not exceed the domain thresholds, the combined value of several pieces in an integrated review may not only exceed reporting thresholds but could also reveal unique insight into a problem that would otherwise be undiscoverable to the analyst.

TCPED is a dated concept because of its inherent emphasis on the tasking and collection functions. The mindset that collection is a limited commodity influences and biases the gathering of information by requiring analysts to decide a priori what is important. This is inconsistent with the goals of the ABI methodology. Instead, ABI offers a paradigm more suited to a world in which data has become not a scarcity but a commodity: the relative de-emphasis of tasking collection versus a new emphasis on the tasking of analysis and exploitation.

The result of being awash in data is that no longer can manual exploitation processes scale. New advances in collection systems like the constellation of small satellites proposed by Google’s Skybox will offer far more data than even a legion of trained imagery analysts could possibly exploit. There are several solutions to this problem of “drowning in data”:

• Collect less data (or perhaps, less irrelevant data and more relevant data);
• Integrate data earlier, using correlations to guide labor-intensive exploitation processes;
• Use smart technology to move techniques traditionally deemed “exploitation” into the “processing” stage.

These three solutions are not mutually exclusive, though note that the first two represent philosophically divergent viewpoints on the problem of data. ABI naturally chooses both the second and third solution. In fact, ABI is one of a small handful of approaches that actually becomes far more powerful as the represented data volume of activity increases because of the increased probability of correlations.

The analytic process emphasis in ABI also bears resemblance to the structured geospatial analytic method (SGAM), first posited by researchers at Penn State University.

Foraging, then, is not only a process that analysts use but also an attitude embedded in the analytical mindset: Foraging is continual, spanning specific lines of inquiry but also evolving beyond the boundaries of specific questions, turning the “foraging process” into a constant quest for more data.

Another implication is precisely where in the data acquisition chain an ABI analyst should ideally be placed. Rather than putting integrated analysis at the end of the TCPED process, this concept argues for placing the analyst as close to the data collection point (or point of operational integration) as possible. While this differs greatly for tactical missions versus strategic missions, the result of placing the analyst close to the data acquisition and processing components is clear: The analyst has additional opportunities not only to acquire new data but also to affect the acquisition and processing of data from the ground up, making more data available to the entire enterprise through his or her individual efforts.

3.6 Sequence Neutrality: Temporal Implications for Data Correlation
Sequence neutrality is perhaps the least understood and most complex of the pillars of ABI. The first three pillars are generally easily understood after a sentence or two of explanation (though they have deeper implications for the analytical process as we continually explore their meaning). Sequence neutrality, on the other hand, forces us to consider—and in many ways, reconsider—the implications of temporality with regard to causality and causal reasoning. As ABI moves data analysis to a world governed by correlation rather than causation, the specter of causation must be addressed.

In epistemology, this concept is described as narrative fallacy. Nassim Taleb, in his 2007 work The Black Swan, explains it as “[addressing] our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense” [12]. What is important in Taleb’s statement is the concept of sequence: Events occur in order, and we weave a logical relationship around them.

As events happen in sequence, we chain them together even given our limited perspective on how accurately those events represent reality. When assessing patterns—true patterns, not correlations—in single-source data sets, time proves to be a useful filter, presuming that the percentage of the “full” data set represented remains relatively consistent. As we introduce additional data sets, the potential gaps multiply, causing uncertainty to increase exponentially. In intelligence, because many data sets are acquired in an adversarial rather than cooperative fashion (as opposed to traditional civil GIS approaches, or even crime mapping approaches), this concept becomes so important that it is given a name: sparse data.

You are integrating the data well before stovepiped exploitation and have created a data-neutral environment in which you can ask complex questions of the data. This enables and illuminates a key concept of sequence neutrality: The data itself drives the kinds of questions that you ask. In this way, we express a key component of sequence neutrality as “understanding that we have the answers to many questions we do not yet know to ask.”

The corollary to this realization is the importance of forensic correlation versus linear-forward correlation. If we have the answers to many questions in our spatial-temporal data environment, it then follows logically that the first place to search for answers—to search for correlations—is in the data environment we have already created. Since the data environment is based on what has already been collected, the resultant approach is necessarily forensic. Look backward, before looking forward.

From card catalogs and libraries we have moved to search algorithms and metadata, allowing us as analysts to quickly and efficiently employ a forensic, research-based approach to seeking correlations.

As software platforms evolved, more intuitive time-based filtering was employed, allowing analysts to easily “scroll through time.” As with many technological developments, however, there was also a less obvious downside related to narrative fallacy and event sequencing: The time slider allowed analysts to see temporally referenced data occur in sequence, reinforcing the belief that because certain events happened after other events, they may have been caused by them. It also made it easy to temporally search for patterns in data sets: useful again in single data sets, but potentially highly misleading in multisourced data sets due to the previously discussed sparse data problem. Sequence neutrality, then, is not only an expression of the forensic mindset but a statement of warning to the analyst to consider the value of sequenced versus nonsequenced approaches to analysis. Humans have an intuitive bias to see causality when there is only correlation. We caution against use of advanced analytic tools without the proper training and mindset adjustment.
3.6.1 Sequence Neutrality’s Focus on Metadata: Section 215 and the Bulk Telephony Metadata Program Under the USA Patriot Act

By positing that all data represents answers to certain questions, sequence neutrality impels us to collect and preserve as much data as possible, limited only by storage space and cost. It also calls for the creation of indexes within supermassive data sets, allowing us to zero in on key attributes that may represent only a fraction of the total data size.
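The sketch below illustrates one such indexing idea: bucketing georeferenced records by a coarse space-time key so that later forensic queries touch only a small fraction of the holdings. The records and grid resolution are assumptions for illustration.

```python
# A sketch of a coarse space-time index: records are bucketed by an
# approximately 1-km grid cell and the hour of observation, so that a later
# "what was observed near here, then?" query becomes a key lookup rather than
# a scan of the full data set.
from collections import defaultdict
from datetime import datetime

def bucket_key(lat, lon, time, grid=0.01):
    """Coarse space-time key: grid cell plus the hour of observation."""
    return (round(lat / grid) * grid, round(lon / grid) * grid, time.strftime("%Y-%m-%d %H"))

index = defaultdict(list)

records = [
    {"id": 1, "lat": 48.8566, "lon": 2.3522, "time": datetime(2015, 5, 2, 14, 10)},
    {"id": 2, "lat": 48.8570, "lon": 2.3530, "time": datetime(2015, 5, 2, 14, 40)},
    {"id": 3, "lat": 48.9000, "lon": 2.4000, "time": datetime(2015, 5, 2, 14, 15)},
]
for r in records:
    index[bucket_key(r["lat"], r["lon"], r["time"])].append(r["id"])

# Forensic query: which records sit in this cell during this hour?
print(index[bucket_key(48.8566, 2.3522, datetime(2015, 5, 2, 14, 0))])
```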

A controversial provision of the USA PATRIOT Act, Section 215, allows the director of the Federal Bureau of Investigation (or a designee) to seek access to “certain business records,” which may include “any tangible things (including books, records, papers, documents, and other items) for an investigation to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution.”

3.7 After Next: From Pillars, to Concepts, to Practical Applications
The pillars of ABI represent the core concepts, as derived by the first practitioners of ABI. Rather than a framework invented in a classroom, the pillars were based on the actual experiences of analysts in the field, working with real data against real mission sets. It was in this environment, forged by the demands of asymmetric warfare and enabled by accelerating technology, in which ABI emerged as one of the first examples of data-driven intelligence analysis approaches, focused primarily on spatial and temporal correlation as a means to discover.

The foraging-sensemaking, data-centric, sequence-neutral analysis paradigm of ABI conflicts with the linear-forward, TCPED-centric approaches used in “tipping-cueing” constructs. The tip/cue concept slews (moves) sensors to observed or predicted activity based on forecasted sensor collection, accelerating warning on known knowns. This concept ignores the wealth of known data about the world in favor of simple additive constructs that, if not carefully understood, risk biasing analysts with predetermined conclusions from arrayed collection systems.

While some traditional practitioners are uncomfortable with the prospect of “releasing unfinished intelligence,” the ABI paradigm—awash in data—leverages the power of “everything happens somewhere” to discover the unknown. As a corollary, when many things happen in the same place and time, this is generally an indication of activity of interest. Correlation across multiple data sources improves our confidence in true positives and eliminates false positives.

4
The Lexicon of ABI

The development of ABI also included the development of a unique lexicon, terminology, and ontology to accompany it.

Activity data “comprises physical actions, behaviors, and information received about entities. The focus of analysis in ABI, activity is the overarching term used for ‘what entities do.’ Activity can be subdivided into two types based on its accompanying metadata and analytical use: events and transactions”.

4.1 Ontology for ABI
One of the challenges of intelligence approaches for the data-rich world that we now live in is integration of data.

As the diversity of data increased, analysts were confronted with the problem that most human analysts deal with today: How does one represent diverse data in a common way?

An ontology is the formal naming and documentation of interrelationships between concepts and terms in a discipline. Established fields like biology and telecommunications have well-established standards and ontologies. As the diversity of data and the scope of a discipline increases, so does the complexity of the ontology. If the ontology becomes too rigid and requires too many committee approvals to adapt to change, it cannot easily account for new data types that emerge as technology advances.
Moreover, with complex ontologies for data, complex environments are required for analysis, and it becomes extraordinarily difficult to correlate and connect data (to say nothing of conclusions derived from data correlations) in any environment other than a pen-to-paper notebook or a person’s mind.

4.2 Activity Data: “Things People Do”
The first core concept that “activity” data reinforces is the idea that ABI is ultimately about people, which, in ABI, we primarily refer to as “entities.”

Activity in ABI is information relating to “things people do.” While this is perhaps a simplistic explanation, it is important to the role of ABI. In ABI parlance, activities are not about places or equipment or objects.

4.2.1 “Activity” Versus “Activities”
The vernacular and book title use the term “activity-based intelligence,” but in early discussions, the phrase was “activities-based intelligence.” Activities are the differentiated, atomic, individual activities of entities (people). Activity is a broad construct to describe aggregated activities over space and time.

4.2.2 Events and Transactions

The definition in the introduction to this chapter defined activity data as “physical actions, behaviors, and information received about entities” but also divided activity data into two categories: events and transactions. These types are distinguished based on their metadata and utility for analysis. To limit the scope of the ABI ontology (translation: to avoid making an ontology that describes every possible action that could be performed by every possible type of entity), we specifically categorize all activity data into either an event or transaction based on the metadata that accompanies the data of interest.
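The sketch below illustrates this binning logic: a record with a single location and time is treated as an event, while a record linking two entities over an interval is treated as a transaction. The field names are illustrative, not a formal schema.

```python
# A sketch of the event/transaction distinction: the same activity record is
# binned by the metadata that accompanies it.
def classify_activity(record: dict) -> str:
    has_two_parties = "entity_a" in record and "entity_b" in record
    has_interval = "start_time" in record and "end_time" in record
    if has_two_parties and has_interval:
        return "transaction"
    if "location" in record and "time" in record:
        return "event"
    return "unclassified"

vehicle_departure = {"entity_a": "vehicle_123", "location": (33.51, 36.29),
                     "time": "2015-04-03T08:15"}
phone_call = {"entity_a": "handset_A", "entity_b": "handset_B",
              "start_time": "2015-04-03T08:20", "end_time": "2015-04-03T08:24"}

for rec in (vehicle_departure, phone_call):
    print(classify_activity(rec), rec)
```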

A person living in a residence provides a very different kind of event, one that is far less specific. While a residential address or location can also be considered biographical data, the fact of a person living in a specific place is treated as an event because of its spatial metadata component.
In all three examples, spatial metadata is the most important component.

The concept of analyzing georeferenced events is not specific to military or intelligence analysis. The GDELT project maintains a 100% free and open database of 300 kinds of events using data in over 100 languages, with daily updates from January 1, 1979, to the present. The database contains over 400 million georeferenced data points.
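A typical first step with such a large, open event table is to filter it spatially before deeper analysis, as in the sketch below. The column names and sample rows are assumptions for illustration; the actual GDELT column layout should be checked against its documentation.

```python
# A sketch of filtering a georeferenced event table to a geographic bounding
# box. The sample data and column names are illustrative stand-ins.
import csv
import io

SAMPLE_TSV = (
    "date\tevent_type\tlat\tlon\n"
    "2015-03-01\tprotest\t33.89\t35.50\n"
    "2015-03-01\tstatement\t52.52\t13.40\n"
)

BBOX = {"min_lat": 29.0, "max_lat": 38.0, "min_lon": 34.0, "max_lon": 43.0}

def in_bbox(lat, lon, bbox=BBOX):
    return bbox["min_lat"] <= lat <= bbox["max_lat"] and bbox["min_lon"] <= lon <= bbox["max_lon"]

def filter_events(tsv_text):
    """Yield rows whose georeference falls inside the bounding box."""
    for row in csv.DictReader(io.StringIO(tsv_text), delimiter="\t"):
        try:
            lat, lon = float(row["lat"]), float(row["lon"])
        except (KeyError, ValueError):
            continue                    # skip rows without a usable georeference
        if in_bbox(lat, lon):
            yield row

for event in filter_events(SAMPLE_TSV):
    print(event["date"], event["event_type"], event["lat"], event["lon"])
```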

Characterization is an important concept because it can sometimes appear as if we are using events as a type of context. In this way, activities can characterize other activities. This is important because most activity conducted by entities does not occur in a vacuum; it occurs simultaneously with activities conducted by different entities that occur in either the same place or time—and sometimes both.

Events that occur in close proximity provide us an indirect way to relate entities together based on individual data points. There is, however, a more direct way to relate entities together through the second type of activity data: transactions.

4.2.3 Transactions: Temporal Registration

Transactions in ABI provide us with our first form of data that directly relates entities. A transaction is defined as “an exchange of information between entities (through the observation of proxies) and has a finite beginning and end”. This exchange of information is essentially the instantaneous expression of a relationship between two entities. This relationship can take many forms, but it exists for at least the duration of the transaction.

Transactions are of supreme importance in ABI because they represent relationships between entities. Transactions are typically observed between proxies, or representations of entities, and are therefore indirect representations of the entities themselves.
For example, police performing a stakeout of a suspect’s home may not observe the entity of interest, but they may follow his or her vehicle. The vehicle is a proxy. The departure from the home is an event. The origin-destination motion of the vehicle is a transaction. Analysts use transactions to connect entities and locations together, depending on the type of transaction.

Transactions come in two major subtypes: physical transactions and logical transactions. Physical transactions are exchanges that occur primarily in physical space, or, in other words, the real world.

Logical transactions represent the other major subtype of transaction. These types of transactions are easier to join directly to proxies for entities (and therefore, the entities themselves) because the actual transaction occurs in cyberspace as opposed to physical space.

4.2.4 Event or Transaction? The Answer is (Sometimes) Yes

Defining data as either an event or a transaction is as much a function of understanding its role in the analytical process as it is about recognizing the metadata fields present and “binning” it into one of two large groups. Consequently, there are certain data types that can be treated as both events and transactions depending on the circumstances and analytical use.

4.3 Contextual Data: Providing the Backdrop to Understand Activity

One of the important points to understand with regard to activity data is that its full meaning is often unintelligible without understanding the context in which observed activity is occurring.

“Contextualization is crucial in transforming senseless data into real information”

Activity data in ABI is the same: To understand it fully, we must understand the context in which it occurs, and context is a kind of data all unto itself.
There are many different kinds of contextual data.

Activity data and contextual data help analysts understand the nature of events and transactions—and sometimes even anticipate what might happen.

4.4 Biographical Data: Attributes of Entities

Biographical data provides information about an entity: name, age, date of birth, and other similar attributes. Because ABI is as much about entities as it is about activity, considering the types of data that apply specifically to entities is extremely important. Biographical data provides analysts with context to understand the meaning of activity conducted between entities.

The process of entity resolution (fundamentally, disambiguation) enables us to understand additional biographical information about entities.

Police departments, intelligence agencies, and even private organizations have long desired to understand specific details about individuals; therefore, what is it that makes ABI a fundamentally different analytic methodology? The answer is in the relationship of this biographical data to events and transactions described in Sections 4.2.2–4.2.4 and the fusion of different data types across the ABI ontology at speed and scale.

Unlike in more traditional approaches, wherein analysts might start with an individual of interest and attempt to “fill out” the baseball card, ABI starts with the events and transactions (activity) of many entities, ultimately attempting to narrow down to specific individuals of interest. This is one of the techniques that ABI uses to conquer the problem of unknown individuals in a network, which guards against the possibility that the most important entities might be ones that are completely unknown to individual analysts.

The final piece of the “puzzle” of ABI’s data ontology is relating entities to each other—but unlike transactions, we begin to understand generalized links and macronetworks. Fundamentally, this is relational data.

4.5 Relational Data: Networks of Entities
Entities do not exist in vacuums.

Considering the context of relationships between entities is also of extreme importance in ABI. Relational data tells us about the entity’s relationships to other entities, through formal and informal institutions, social networks, and other means.

Initially, it is difficult to differentiate relational data from transaction data. Both data types are fundamentally about relating entities together; what, therefore, is the difference between the two?

The answer is that one type—transactions—represents specific expressions of a relationship, while the other type—relational data—is the generalized data based on both biographical data and activity data relevant to specific entities.

The importance of understanding general relationships between entities cannot be overstated; it is one of several effective ways to contextualize specific expressions of relationships in the form of transactions. Traditionally, this process would be to simply use specific data to form general conclusions (an inductive process, explored in Chapter 5). In ABI, however, deductive and abductive processes are preferred (whereby the general informs our evaluation of the specific). In the context of events and transactions, our understanding of the relational networks pertinent to two or more entities can help us determine whether connections between events and transactions are the product of mere coincidence (or density of activity in a given environment) or the product of a relationship between individuals or networks.

Social network analysis (SNA) can be an important complementary approach to ABI, but each focuses on different aspects of data and seeks a fundamentally different outcome, indicating that the two are not duplicative approaches.

What ABI and SNA share, however, is an appreciation for the importance of understanding entities and relationships as a method for answering particular types of questions.

4.6 Analytical and Technological Implications

Relational and biographical information regarding entities is supremely important for contextualizing events and transactions, but unlike earlier approaches to analysis and traditional manhunting, focusing on specific entities from the outset is not the hallmark innovation of ABI.

5
Analytical Methods and ABI

Over the past five years, the intelligence community and the analytic corps have adopted the term ABI and ABI-like principles into their analytic workflows. While the methods have easily been adopted by those new to the field—especially those “digital natives” with strong analytic credentials from their everyday lives—traditionalists have been confused about the nature of this intelligence revolution.

5.1 Revisiting the Modern Intelligence Framework

John Hollister Hedley, a long-serving CIA officer and editor of the President’s Daily Brief (PDB), outlines three broad categories of intelligence: 1) strategic or “estimative” intelligence; 2) current intelligence; and 3) basic intelligence.

“Finished” intelligence continues to be the frame around which much of today’s intelligence literature is constructed.

Our existing intelligence framework needs expansion to account for ABI and other methodologies sharing similar intellectual approaches.

5.2 The Case for Discovery
In an increasingly data-driven world, the possibility of analytical methods that do not square with our existing categories of intelligence seems inevitable. The authors argue that ABI is the first of potentially many methods that belong in this category, which can be broadly labeled as “discovery,” sitting equally alongside current intelligence, strategic intelligence, and basic intelligence.

What characterizes discovery? Most intelligence analysts, many of whom are naturally inquisitive, already conduct aspects of discovery instinctively as they go about their day-to-day jobs. But there has been a growing chorus of concerns from both the analytical community and IC leadership that intelligence production has become increasingly driven by specific tasks and questions posed by policymakers and warfighters. In part, this is understandable: If policymakers and warfighters are the two major customer sets served by intelligence agencies, then it is natural for these agencies to be responsive to the perceived or articulated needs of those customers. However, need responsiveness does not encompass the potential to identify correlations and issues previously unknown or poorly understood by consumers of intelligence production. This is where discovery comes in: the category of intelligence primarily focused on identifying relevant and previously unknown potential information to provide decision advantage in the absence of specific requirements to do so.

Institutional innovation often assumes (implicitly) a desire to innovate that is equally distributed across a given employee population. This egalitarian model of innovation, however, is belied by research showing that creativity is concentrated in certain segments of the population.

If “discovery” in intelligence is similar to “innovation” in technology, one consequence is that the desire to perform—and success at performing—“discovery” is unequally distributed across the population of intelligence analysts, and that different analysts will want to (and be able to) spend different amounts of time on “discovery.” Innovation is about finding new things based on a broad understanding of needs but lacking specific subtasks or requirements.

ABI is one set of methods under the broad heading of discovery, but other methods—some familiar to the world of big data—also fit under the heading. ABI’s focus on spatial and temporal correlation for entity resolution through disambiguation is a set of methods designed for the specific problem of human networks.

Data neutrality puts information gathered from open sources and social media on an equal footing with information collected through clandestine and technical means. Rather than biasing analysis in favor of traditional sources of intelligence data, social media data is brought into the fold without establishing a separate exploitation workflow.

One of the criticisms of the Director of National Intelligence (DNI) Open Source Center, and of the creation of OSINT as another domain of intelligence, was that it effectively served to create another stovepipe within the intelligence world.

ABI’s successes came from partnering, not replacing, single-INT analysts in battlefield tactical operations centers (TOCs).

The all-source analysis field is more typically (though not always) focused on higher-order judgments and adversary intentions; it effectively operates at a level of abstraction above both ABI and single-INT exploitation.

This is most evident in approaches to strategic issues dealing with state actors; all-source analysis seeks to provide a comprehensive understanding of current issues enabling intelligent forecasting of future events, while ABI focuses on entity resolution through disambiguation (using identical methodological approaches found on the counterinsurgency/counterterrorism battlefield) relevant to the very same state actors.

5.4 Decomposing an Intelligence Problem for ABI

One of the critical aspects of properly applying ABI is about asking the “right” questions. In essence, the challenge is to decompose a high-level intelligence problem into a series of subproblems, often posed as questions, that can potentially be answered using ABI methods.

As ABI focuses on disambiguation of entities, the problem must be decomposed to a level where disambiguation of particular entities helps fill intelligence gaps relating to the near-peer state power. As subproblems are identified, approaches or methods to address the specific subproblems are aligned to each subproblem in turn, creating an overall approach for tackling the larger intelligence problem. In this case, ABI does not become directly applicable to the overall intelligence problem until the subproblem specifically dealing with the pattern of life of a group of entities is extracted from the larger problem.

Another example problem to which ABI would be applicable is identifying unknown entities who may be key influencers outside the formal leadership hierarchy by analyzing entities present at a location known to be associated with the high-level leadership of the near-peer state.

5.5 The W3 Approaches: Locations Connected Through People and People Connected Through Locations
Once immersed in a multi-INT spatial data environment, there are two major approaches used in ABI to establish network knowledge and connect entities. These two approaches are summarized below, both dealing with connecting entities and locations. Together they are known as “W3” approaches, combining “who” and “where” to extend analyst knowledge of social and physical networks.

5.5.1 Relating Entities Through Common Locations
This approach focuses on connecting entities based on presence at common locations. Analysis begins with a known entity and then moves to identifying other entities present at the same location.

The process for evaluating the strength of a relationship based on locational proximity and type of location relies on the concepts of durability and discreteness, which are further explored in Chapter 7. Colloquially, this process is known as “who-where-who,” and it is primarily focused on building out logical networks.

A perfect example of building out logical networks through locations begins with two entities—people, unique individuals—observed at a private residence on multiple occasions. In a spatial data environment, the presence of two entities at the same location at multiple points in time might bear investigation into the various attributes of those entities. The research process initially might show no apparent connection between them, but by continuing to understand various aspects of the entities, the locational connection may be corroborated and “confirmed” via the respective attributes of the entities. This could take many forms, including common social contacts, family members, and employers.

The easiest way to step through “who-where-who” is through a series of four questions, which offer an analyst the ability to logically step through a potential relationship revealed by the colocation of individual entities. The first question is: “What is the entity or group of entities of interest?” This is often expressed as a simple “who” in shorthand, but the focus here is on identifying a specific entity or group of entities that are of interest to the analyst. Note that while ABI’s origins are in counterterrorism and thus the search for “hostile entities,” the entities of interest could also be neutral or friendly entities, depending on what kind of organization the analyst is a part of.

In practice, this phase will consist of using known entities of interest and examining locations where the entities have been present. This process can often lead to constructing a full “pattern of life” for one or more specific entities, but it can also be as simple as identifying locations where entities were located on one or more specific occasions.

The second question is: “Where has this entity been observed?” At this point, focus is on the spatial-temporal data environment. The goal here is to establish various locations where the entity was present along with as precise a time as possible.

The third question is: “What other entities have also been observed at these locations?” This is perhaps the most important of the four questions. Here, the goal is to identify entities co-occurring with the entity or entities of interest. The focus is on spatial co-occurrence, ideally across multiple locations; intuitively, the more co-occurrences observed, the greater the likelihood of a true relationship rather than mere chance, an intuition that also underlies standard measures of linear correlation.

Even so, the characteristics of each location must be evaluated in order to separate “chance co-occurrences” from “demonstrative co-occurrences.” In addition, referring back to the pillar of sequence neutrality, it is vitally important to consider co-occurrences that are temporally separated. These often occur when networks of entities change their membership but use consistent locations for their activities, as is the case with many clubs and societies.

The fourth and final question is: “Is locational proximity indicative of some kind of relationship between the initial entity and the discovered entity?” Here, the goal is to take an existing network of entities and identify additional entities that may have been partially known or completely unknown. Entities must interact with one another, particularly to achieve common goals, and this technique helps identify related entities through common locations before explicit metadata- or attribute-based relationships are available.
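To make the workflow concrete, the following is a minimal sketch (not from the text) of the four “who-where-who” questions applied to a toy set of spatiotemporal observations; the entity names, locations, dates, and the minimum co-occurrence threshold are all invented for illustration.

```python
from collections import defaultdict

# Hypothetical observations: (entity, location, date) tuples standing in for
# georeferenced detections from multiple sources (data neutrality).
observations = [
    ("ENTITY_A", "residence_14", "2015-03-01"),
    ("ENTITY_A", "cafe_02",      "2015-03-02"),
    ("ENTITY_A", "residence_14", "2015-03-05"),
    ("ENTITY_B", "residence_14", "2015-03-01"),
    ("ENTITY_B", "cafe_02",      "2015-03-02"),
    ("ENTITY_C", "cafe_02",      "2015-03-02"),  # a single, possibly chance, co-occurrence
]

def who_where_who(entity_of_interest, obs, min_cooccurrences=2):
    """Step through the four 'who-where-who' questions for one entity."""
    # Q1: the entity of interest is supplied by the analyst.
    # Q2: where (and when) has this entity been observed?
    visits = {(loc, t) for ent, loc, t in obs if ent == entity_of_interest}

    # Q3: what other entities co-occur at those locations and times?
    cooccurrences = defaultdict(int)
    for ent, loc, t in obs:
        if ent != entity_of_interest and (loc, t) in visits:
            cooccurrences[ent] += 1

    # Q4: is proximity indicative of a relationship? Here, a crude filter:
    # require repeated co-occurrence before flagging a candidate relationship.
    return {ent: n for ent, n in cooccurrences.items() if n >= min_cooccurrences}

print(who_where_who("ENTITY_A", observations))  # {'ENTITY_B': 2}
```

In practice, exact matches on location and date would be replaced by spatial and temporal tolerances, and the co-occurrence threshold would reflect the discreteness of the locations involved (Chapter 7).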

5.5.2 Relating Locations Through Common Entities
This approach is the inverse of the previous approach and focuses on connecting locations based on the presence of common entities. By tracking entities to multiple locations, connections between locations can be revealed.

Where the previous process is focused on building out logical networks where entities are the nodes, this process focuses on building out either logical or physical networks where locations are the nodes. While at first this can seem less relevant to a methodology focused on understanding networks of entities, understanding the physical network of locations helps indirectly reveal information about entities who use physical locations for various means (nefarious and nonnefarious alike).

The first question asked in this process is, “What is the initial location or locations of interest?” This is the most deceptively difficult question to answer, because it involves bounding the initial area of interest.

The next question brings us back to entities: “What entities have been observed at this location?” Whether considering one or more locations, this is where specific entities can be identified, or partially known entities flagged for further research. This is one of the core differences between the two approaches, in that there is no explicit a priori assumption regarding the entities of interest. This question is where pure “entity discovery” occurs, as focusing on locations allows entities not discovered through traditional, relational searches to emerge as potentially relevant players in multiple networks of interest.

The third question is, “Where else have these entities been observed?” This is where a network of related locations is principally constructed. Based on the entities—or networks—discovered in the previous phase of research, the goal is now to associate additional, previously unknown locations based on common entities.

One of the principal uses of this information is to identify locations that share a common group of entities. In limited cases, this approach can be predictive, indicating locations that entities may be associated with even if they have not yet been observed at a given location.
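Continuing the hypothetical sketch above, the inverse “where-who-where” query can reuse the same invented observations list: start from a location, recover the entities seen there, and then collect the other locations where those entities appear.

```python
# The inverse query ("where-who-where"), reusing the hypothetical
# `observations` list from the previous sketch.
def where_who_where(location_of_interest, obs):
    # Q1: the initial location of interest is supplied by the analyst.
    # Q2: what entities have been observed at this location?
    entities = {ent for ent, loc, _ in obs if loc == location_of_interest}

    # Q3: where else have those entities been observed?
    related_locations = {
        loc for ent, loc, _ in obs
        if ent in entities and loc != location_of_interest
    }
    return entities, related_locations

ents, locs = where_who_where("residence_14", observations)
print(ents)  # entities seen at the location of interest
print(locs)  # other locations linked through common entities
```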

The final question is thus, “Is the presence of common entities indicative of a relationship between locations?” Discovering a correlation between entities and locations is only the first step; the contextual information must then be examined dispassionately to support or refute the hypothesis suggested by entity commonality.

At this point, the assessment aspect of both methods must be discussed. By separating what is “known” to be true versus what is “believed” to be true, analysts can attempt to provide maximum value to intelligence customers.

5.6 Assessments: What Is Known Versus What Is Believed

At the end of both methods is an assessment question: Has the process of moving from vast amounts of data to specific data about entities and locations provided correlations that demonstrate actual relationships between entities and/or locations?

Correlation versus causation can quickly become a problem in the assessment phase, as can the role of chance in spatial or temporal correlations. The assessment phase of each method is designed to help analysts separate random chance from relevant relationships in the data.

ABI adapts new terminology for a classic problem of intelligence assessment: separating fact from belief.

Particularly with assessments that rest on correlations present across several degrees of data, the potential for alternative explanations must always be considered. While the concepts themselves are common across intelligence methodologies, these considerations are of paramount importance in properly understanding and assessing the “information” created through assessment of correlated data.

Abduction, perhaps the least known in popular culture, represents the most relevant form of inferential reasoning for the ABI analyst. It is also the form of reasoning most commonly employed by Sir Arthur Conan Doyle’s Sherlock Holmes, despite references to Holmes as the master of deduction. Abduction can be thought of as “inference to the best explanation,” where rather than a conclusion guaranteed by the premises, the conclusion is expressed as a “best guess” based on background knowledge and specific observations.
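As a toy illustration of abduction (not drawn from the text), the sketch below scores invented hypotheses by how much of the observed activity they explain, weighted by a rough prior plausibility; the result is a “best guess,” not a guaranteed conclusion.

```python
# Toy "inference to the best explanation": each hypothesis lists the
# observations it would account for, plus a rough prior plausibility.
observed = {"lights_on_overnight", "vehicle_present", "no_foot_traffic"}

hypotheses = {
    "facility_is_a_residence": {
        "explains": {"lights_on_overnight", "vehicle_present"},
        "plausibility": 0.6,
    },
    "facility_is_a_storage_site": {
        "explains": {"vehicle_present", "no_foot_traffic"},
        "plausibility": 0.3,
    },
    "facility_is_abandoned": {
        "explains": {"no_foot_traffic"},
        "plausibility": 0.1,
    },
}

def best_explanation(obs, hyps):
    """Score = fraction of observations explained, weighted by plausibility."""
    def score(h):
        coverage = len(h["explains"] & obs) / len(obs)
        return coverage * h["plausibility"]
    return max(hyps, key=lambda name: score(hyps[name]))

print(best_explanation(observed, hypotheses))  # a best guess, not proof
```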

5.7 Facts: What Is Known

Allowance must be made for uncertainty even in the identification of facts; even narrowly scoped, facts can turn out to be untrue for a variety of reasons. Despite this tension, distinguishing between facts and assessments is a useful mental exercise. It also serves to introduce the concept of a key assumptions check (KAC) into ABI, as what ABI terms “facts” overlaps in part with what other intelligence methodologies term “assumptions.”
Another useful way to conceptualize facts is “information as reported from the primary source.”

5.8 Assessments: What Is Believed or “Thought”

Assessment is where the specific becomes general. Assessment is one of the key functions performed by intelligence analysts, and it is one of very few common attributes across functions, echelons, and types of analysts. It is also not, strictly speaking, the sole province of ABI.

ABI identifies correlated data based on spatial and temporal co-occurrence, but it does not explicitly seek to assign meaning to the correlation or place it in a larger context.

5.9 Gaps: What Is Unknown

The last piece of the assessment puzzle is “gaps.” Gaps are in many ways the inverse of “facts” and can inform assessments just as much as facts can. Like facts, gaps must be stated as narrowly and explicitly as possible in order to identify areas for further research or areas where the collection of additional data is required.

Gap identification is a crucial skill for most analytic methods because of natural human tendencies to either ignore contradictory information or construct narratives that explain away incomplete or missing information.

There are times, however, when the method cannot even reach the assessment level because research into spatial and temporal correlations “gets stuck.” This is where the concept of “unfinished threads” becomes vitally important.

5.10 Unfinished Threads

Every time a prior (an initial cue or piece of information) initiates one or both of the principal methods discussed earlier in this chapter, an investigation begins.

True to its place in “discovery intelligence,” ABI not only makes allowances for the existence of these unfinished threads, it explicitly generates techniques to address these threads and uses them for the maximum benefit of the analytical process.

Unfinished threads are important for several reasons. First, they represent the institutionalization of the discovery process within ABI. Rather than force a process by which a finished product must be generated, ABI allows for the analyst to pause and even walk away from a particular line of inquiry for any number of reasons. Second, unfinished threads can at times lead an analyst into parallel or even completely unrelated threads that are as important, or even more important, than the initial thread. This process, called “thread hopping,” is one expression of a nonlinear workflow inside of ABI.

One of the most challenging problems presented by unfinished threads is preserving threads for later investigation. Methods for doing so are both technical (computer software designed to preserve these threads, discussed further in Chapter 15) and nontechnical, such as scrap paper, whiteboards, and pen-and-paper notebooks. This is particularly important when new information arrives, especially when the investigating analyst did not specifically request the new information.

By maintaining a discovery mindset and continuing to explore threads from various different sources of information, the full power of ABI—combined with the art and intuition present in the best analysts— can be realized.

5.11 Leaving Room for Art and Intuition
One of the hardest challenges for structured approaches to intelligence analysis is carving out a place for human intuition and, indeed, a bit of artistry. The difficulty of describing intuition and the near impossibility of teaching it make it tempting to omit it from any discussion of analytic methods in order to focus on what is teachable. To do so, however, would be both unrealistic and a disservice to the critical role that intuition—properly understood and subject to appropriate scrutiny—can play in the analytic process.

Interpretation is an innate, universal, and quintessentially intuitive human faculty. It is field-specific, in the sense that one’s being good at interpreting, say, faces or pictures or modern poetry does not guarantee success at interpreting contracts or statutes. It is not a rule-bound activity, and the reason a judge is likely to be a better interpreter of a statute than of a poem, and a literary critic a better interpreter of a poem than a statute, is that experience creates a repository of buried knowledge on which intuition can draw when one is faced with a new interpretandum. – Judge Richard Posner

At all times, however, these intuitions must be subject to rigorous scrutiny and cross-checking, to ensure their validity is supported by evidence and that alternative or “chance” explanations cannot also account for the spatial or temporal connections in data.
Fundamentally, there is a role for structured thinking about problems, for the application of documented techniques, and for artistry and intuition when examining correlations in spatial and temporal data. Practice in these techniques and the practical application that builds experience are equally valuable in developing the skills of an ABI practitioner.

6

Disambiguation and Entity Resolution

Entity resolution or disambiguation through multi-INT correlation is a primary function of ABI. Entities and their activities, however, are rarely directly observable across multiple phenomenologies. Thus, we need an approach that considers proxies —indirect representations of entities—which are often directly observable through various means.

6.1 A World of Proxies

As entities are a central focus of ABI, all of an entity’s attributes are potentially relevant to the analytical process. That said, a subset of attributes called proxies is the focus of analysis as described in Chapter 5. A proxy “is an observable identifier used as a substitute for an entity, limited by its durability (i.e., influenced by the entity’s ability to change/alter proxies).”

Focusing on any particular, “average” entity results in a manageable number of proxies [2]. However, beginning with a given entity is fundamentally a problem of “knowns.” How can an analyst identify an “unknown entity”?

Now the problem becomes more acute. Without using a given entity to filter potential proxies, all proxies must be considered; this number is likely very large and for the purposes of this chapter is n. The challenge that ABI’s spatio-temporal methodology confronts is going from n, or all proxies, to a subset of n that relates to an individual or group of individuals. In some cases, n can be as limited as a single proxy. The process of moving from n to the subset of n is called disambiguation.

6.2 Disambiguation

Disambiguation is not a concept unique to ABI. Indeed, it is something most people do every day in a variety of settings, for a variety of different reasons. A simple example is using facial features to distinguish between two different people. This basic staple of human interaction is so important that an inability to do so is a named disorder—prosopagnosia.

Disambiguation is conceptually simple; in practice, however, it is severely complicated by incomplete, erroneous, misleading, or insufficiently specific data.

Without discounting the utility of more “general” proxies like appearance and clothing and vehicle types, it is the “unique” identifiers that offer the most probative value in the process of disambiguation and that, ultimately, are most useful in achieving the ultimate goal: entity resolution.

6.3 Unique Identifiers—“Better” Proxies

To understand fully why unique identifiers are of such importance to the analytical process in ABI, a concept extending the data ontology of “events” and “transactions” from Chapter 4 must be introduced. This concept is called certainty of identity.

This concept has a direct analog in the computing world—the universal unique identifier (UUID) or globally unique identifier (GUID) [3, 4]. In distributed computing environments—linking together disparate databases— UUIDs or GUIDs are the mechanism to disambiguate objects in the computing environments [4]. This is done against the backdrop of massive data stores from various different sources in the computing and database world.
In ABI, the same concept is applied to the “world’s” spatiotemporal data store: Space and time provide the functions to associate unique identifiers (proxies) with each other and with entities. The proxies can then be used to identify the same entity across multiple data sources, allowing for a highly accurate understanding of an entity’s movement and therefore behavior.
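A minimal illustration of the UUID analogy, using Python’s standard uuid module; the record fields and store names are hypothetical.

```python
import uuid

# Each proxy detection receives a universally unique identifier so records in
# different stores can be referenced and correlated without ambiguity.
detection_store_a = {}
detection_store_b = {}

def record_detection(store, proxy, lat, lon, timestamp):
    record_id = str(uuid.uuid4())  # effectively collision-free identifier
    store[record_id] = {"proxy": proxy, "lat": lat, "lon": lon, "time": timestamp}
    return record_id

rid = record_detection(detection_store_a, "vehicle_plate_XYZ",
                       33.51, 36.29, "2015-03-01T09:30Z")

# The same UUID can be referenced from another store without ambiguity,
# mirroring how space and time let ABI associate proxies across sources.
detection_store_b[rid] = {"correlated_with": "rf_emitter_117"}
print(rid, detection_store_a[rid], detection_store_b[rid])
```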

6.4 Resolving the Entity

The core of ABI’s analytic methodology revolves around discovering entities through spatial and temporal correlations in large data sets across multiple INTs. Entity resolution principally through spatial and temporal attributes is therefore the defining attribute of ABI’s analytical methodology and represents its enduring contribution to the overall discipline of intelligence analysis.

Entity resolution is “the iterative and additive process of uniquely identifying and characterizing an [entity], known or unknown, through the process of correlating event/transaction data generated by proxies to the [entity]”.

Entity resolution itself is not unique to ABI. Data mining and database efforts in computer science focus intense amounts of effort on entity resolution. These efforts are known by a number of different terms (e.g., record linkage, de-duplication, and co-reference resolution), but all focus on “the problem of extracting, matching, and resolving entity mentions in structured and unstructured data”.

In ABI, “entity mentions” are contained within activity data. This encompasses both events and transactions, as both can involve a specific detection of a proxy. As shown in Figure 6.4, transactions always involve proxies at the endpoints, or “vertices,” of the transaction. Events also provide proxies, but these can range from general (for example, a georeferenced report stating that a particular house is an entity’s residence) to highly specific (a time-stamped detection of a radio-frequency identification tag at a given location).
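A possible way to represent this distinction in code is sketched below; the class and field names (including the specificity flag) are illustrative assumptions, not definitions from the text.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Event:
    """A single-proxy observation fixed in space and time."""
    proxy: str                    # e.g., an RFID tag ID or a georeferenced mention
    lat: float
    lon: float
    time: str
    specificity: str = "general"  # from "general" (a report) to "specific" (a tagged detection)

@dataclass
class Transaction:
    """An exchange between two proxies; proxies sit at the transaction's 'vertices'."""
    proxy_from: str
    proxy_to: str
    time: str
    lat: Optional[float] = None   # some transactions carry no location of their own
    lon: Optional[float] = None

event = Event(proxy="rfid_tag_0042", lat=33.51, lon=36.29,
              time="2015-03-01T09:30Z", specificity="specific")
txn = Transaction(proxy_from="phone_555-0100", proxy_to="phone_555-0199",
                  time="2015-03-01T09:31Z")
print(event, txn, sep="\n")
```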

6.5 Two Basic Types of Entity Resolution

Ultimately, the process of entity resolution can be broken into two categories: proxy-to-entity resolution and proxy-to-proxy resolution. Both types have specific use cases in ABI and can provide valuable information pertinent to an entity of interest, ultimately helping answer intelligence questions.

6.5.1 Proxy-to-Proxy Resolution

Proxy-to-proxy resolution through spatiotemporal correlation is not just an important aspect of ABI; it is one of the defining concepts of ABI. But why is this? At face value, entity resolution is ultimately the goal of ABI. Therefore, how does resolving one proxy to another proxy help advance understanding of an entity and relate it to its relevant proxies?

The answer is found at the beginning of this chapter: entities cannot be directly observed. Therefore, any kind of resolution must by definition be relating one proxy to another proxy, through space and time and across multiple domains of information.

What the various sizes of circles introduce is the concept of circular error probable (CEP) (Figure 6.5). CEP was originally introduced as a measure of accuracy in ballistics, representing the radius of the circle within which 50% of “rounds” or “warheads” were expected to fall. A smaller CEP indicated a more accurate weapon system. This concept has been expanded to represent the accuracy of geolocation of any item (not just a shell or round from a weapons system), particularly with the proliferation of GPS-based locations [9]. Even systems such as GPS, which are designed to provide accurate geolocations, have some degree of error.

This simple example illustrates the power of multiple observations over space and time for proper disambiguation and for resolving proxies from one data source to proxies from another. It was, however, a simplistic thought experiment: the bounds were clearly defined, and there was a 1:1 ratio of vehicles to unique identifiers, both of a known quantity (four each). Real-world conditions and problems will rarely present such clean results for an analyst or a computer algorithm. The methods and techniques for entity disambiguation over space and time have been extensively researched over the past 30 years by the multisensor data fusion community.
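One hedged sketch of how positional uncertainty might enter the disambiguation step: treat each detection’s CEP as the radius of an error circle and flag detections whose circles overlap as candidates for the same entity. The haversine distance is standard; the overlap test and the example coordinates are assumptions for illustration.

```python
import math

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine)."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def could_be_same_entity(det_a, det_b):
    """Candidate match if the two CEP circles overlap."""
    d = distance_m(det_a["lat"], det_a["lon"], det_b["lat"], det_b["lon"])
    return d <= det_a["cep_m"] + det_b["cep_m"]

gps_fix   = {"lat": 33.5100, "lon": 36.2900, "cep_m": 10}    # precise source
rf_geoloc = {"lat": 33.5105, "lon": 36.2908, "cep_m": 150}   # coarse source
print(could_be_same_entity(gps_fix, rf_geoloc))  # True: worth investigating, not proof
```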

6.5.2 Proxy-to-Entity Resolution: Indexing

While proxy-to-proxy resolution is at the heart of ABI, the importance of proxy-to-entity resolution, or indexing, cannot be overstated. Indexing is a broad term for various processes, most outside the strict bounds of ABI, that help link proxies to entities through a variety of technical and nontechnical means. Indexing takes place based on values within single information sources (rather than across them) and is often done in the course of focused exploitation of a single source or type of data.

Indexing is essentially an automated way of helping analysts build up an understanding of an entity’s attributes. In intelligence, this often centers on a particular collection mechanism or phenomenology; the same is true in law enforcement and public records, where vehicle registration databases, RFID toll-road passes, and other useful information are binned according to data class and searchable using relational keys. While not directly part of the ABI analytic process, access to these databases provides analysts with an important advantage in determining potential entities to resolve to proxies in the environment.

6.6 Iterative Resolution and Limitations on Entity Resolution

Even the best proxies, however, have limitations. This is why ABI refers to the relevant items as proxies rather than signatures. A signature is something characteristic, indicative of identity; most importantly, signatures have inherent meaning, typically detectable in a single phenomenology or domain of information. Proxies lack that inherent meaning, though in everyday use the two terms are often conflated.

These challenges necessitate the key concept of iterative resolution in ABI: practitioners must continually re-examine proxies to determine whether they remain valid for the entities of interest. Revisiting Figure 6.2, it is intuitively clear that certain proxies are easier to change than others. When deliberate operations security (OPSEC) practices are introduced by terrorists, insurgents, intelligence officers, and other entities trained in countersurveillance and counterintelligence, evaluating the validity of a given proxy for an individual at a given point in time becomes even more challenging.

These limits on connecting proxies to entities describe perhaps the most prominent challenges to disambiguation and entity resolution among very similar proxies: the concept of discreteness, relative to physical location, and durability, relative to proxies. Together they capture limitations of the modern world that are passed through to the analytical process underpinning ABI.

7

Discreteness and Durability in the Analytical Process

The two most important factors in ABI analysis are the discreteness of locations and the durability of proxies; for shorthand, these are often referred to simply as discreteness and durability. Discreteness of locations deals with the differing properties of physical locations, focusing on which entities and groups of entities can be expected to interact with a given location, taking into account factors like climate, time of day, and cultural norms. Durability of proxies addresses an entity’s ability to change or alter given proxies and, therefore, the analyst’s need to periodically revalidate or reconfirm the validity of a given proxy for an entity of interest.

7.1 Real World Limits of Disambiguation and Entity Resolution

Discreteness and durability are designed as umbrella terms: They help express the real-world limits of an analyst’s ability to disambiguate unique identifiers through space and time and ultimately, match proxies to entities and thereby perform entity resolution. They also present the two greatest challenges to attempts to automate the core precepts of ABI: Because the concepts are “fuzzy,” and there are no agreed-upon standards or scales used to express discreteness and durability, automating the resolution process remains a monumental challenge. This section illustrates general concepts with an eye toward use by human analysts.

7.2 Applying Discreteness to Space-Time

Understanding the application of discreteness (of location) to space-time begins with revisiting the concept of disambiguation.

Disambiguation is one of the most important processes for both algorithms and human beings, and one of the major challenges involves assigning confidence values (either qualitative or quantitative) to the results of disambiguation, particularly with respect to the character of given locations, geographic regions, or even particular structures.

But why does the character of a location matter? The answer is simple, even intuitive: Not all people, or entities, can access all locations, regions, or buildings. Thus, when discussing the discreteness value of a given location, whether it is measured qualitatively or quantitatively, the measure is always relative to an entity or group/network of entities.

Considering that the process of disambiguation begins with the full set of “all entities,” the ability to narrow the pool of entities that could have generated the observable proxies at a given location, based on who would naturally have access to that location, is an extraordinarily powerful tool in the analysis process.

ABI’s analytic process uses a simple spectrum to describe the general nature of given locations. This spectrum provides a starting point for more complex analyses, but a significant gap remains: there is no detailed quantitative framework for describing the complexity of locations. This is an open area for research and one of ABI’s true hard problems.

7.3 A Spectrum for Describing Locational Discreteness

In keeping with ABI’s development as a grassroots effort among intelligence analysts confronted with novel problems, a basic spectrum is used to divide locations into three categories of discreteness:

• Non-discrete

• Discrete

• Semi-discrete

The categories of discreteness are temporally sensitive, representing the dynamic and changing use of locations, facilities, and buildings on a daily, sometimes even hourly, basis. Culture, norms, and local customs all factor into the analytical “discreteness value” that aids ABI practitioners in evaluating the diagnosticity of a potential proxy-entity pair.

Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of the various hypotheses. If an item of evidence seems consistent with all hypotheses, it may have no diagnostic value at all. It is a common experience to discover that the most available evidence really is not very helpful, as it can be reconciled with all the hypotheses.

This concept can be directly applied to disambiguation among proxies and resolving proxies to entities. Two critical questions are used to evaluate locational discreteness—the diagnosticity—of a given proxy observation. The first question is, “How many other proxies are present in this location and therefore may be resolved to entities through spatial co-occurrence?” This addresses the disambiguation function of ABI’s methodology. The second question is, “What is the likelihood that the presence of a given proxy at this location represents a portion of unique entity behavior?”

Despite these difficulties, multiple proxy observations over space and time (even at nondiscrete locations) can be chained together to produce the same kind of entity resolution [1]. An analyst would likely need additional observations at nondiscrete locations to provide increased confidence in an entity’s relationship to a location or to resolve an unresolved proxy to a given entity.
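The two questions above could be operationalized in many ways; the sketch below is one invented scoring scheme (the weights and penalty are not from the text) that rewards discrete locations and penalizes crowded ones.

```python
# Illustrative (invented) weights: how much a single co-occurrence at a
# location of each discreteness category contributes toward resolution.
DISCRETENESS_WEIGHT = {"discrete": 1.0, "semi-discrete": 0.4, "non-discrete": 0.1}

def diagnosticity(category, other_proxies_present):
    """Crude score reflecting the two questions in the text:
    (1) how many other proxies could also be resolved here by co-occurrence,
    (2) how likely this observation is to reflect unique entity behavior."""
    crowding_penalty = 1.0 / (1 + other_proxies_present)
    return DISCRETENESS_WEIGHT[category] * crowding_penalty

# A detection at a private residence with two other proxies present is far
# more diagnostic than one at a crowded market with two hundred.
print(diagnosticity("discrete", other_proxies_present=2))        # ~0.33
print(diagnosticity("non-discrete", other_proxies_present=200))  # ~0.0005
```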

A discrete location is a location that is unique to an entity or network of entities at a given time. Observations of proxies at discrete locations, therefore, are far more diagnostic in nature because they are restricted to a highly limited entity network. The paramount example of a discrete location is a private residence.

Revisiting the two principal questions from above, the following characteristics emerge regarding a discrete location:

• Proxies present at a private residence can be associated with a small network of entities, the majority of whom are connected through direct relationships to the entity or entities residing at the location;

• Entities present at this location can presumptively be associated with the group of entities for whom the location is a principal residence.

As discussed earlier, discrete locations can be far from perfect.

7.4 Discreteness and Temporal Sensitivity

Temporal sensitivity with respect to discreteness describes how the use of locations by entities (and therefore the associated discreteness values) changes over time; a change in a location’s function effects a change in its discreteness. While this may seem quite abstract, it is actually a concept many are comfortable with from an early age.

When viewed at the macro level, the daily and subdaily variance in activity levels across multiple entities is referred to as a pattern of activity.

7.5 Durability of Proxy-Entity Associations

The durability of proxies remains the other major factor contributing to the difficulty of disambiguation and entity resolution.

Though many proxies can be (and often are) associated with a single entity, these associations range from nearly permanent to extraordinarily fleeting. The concept of durability represents the range of potential durations of the proxy-entity association.

Answering “who-where-who” and “where-who-where” workflow questions becomes exponentially more difficult when varying degrees of uncertainty in spatial-temporal correlation are introduced by the two major factors discussed in this chapter. Accordingly, structured approaches for analysts to consider the effects of discreteness and durability are highly recommended, particularly as supporting material to overall analytical judgments.

One consistent recommendation across all types of intelligence analysis is that assumptions made in the analytical process should be made explicit, so that intelligence consumers can understand what is being assumed, what is being assessed, and how assessments might change if the underlying assumptions change [2, pp. 9, 16]. One recommended technique is using a matrix during the analytic process to make discreteness and durability factors explicit and incorporate them into the overall judgments and conclusions. A matrix can also provide key values that can later be used to develop quantitative expressions of uncertainty, though such expressions are meaningless unless the underlying quantifications are clearly expressed (in essence, creating a “values index” so that the overall quantified value can be properly contextualized).
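A minimal sketch of such a matrix, using plain Python structures; the qualitative scales and example rows are hypothetical.

```python
# A hypothetical assumptions matrix capturing, for each proxy observation used
# in a judgment, the assumed durability of the proxy and the assumed
# discreteness of the location, so consumers can see what the assessment rests on.
assessment_matrix = [
    {"proxy": "vehicle_plate_XYZ", "location": "residence_14",
     "discreteness": "discrete", "durability": "weeks-to-months",
     "assumption": "plate has not been swapped since last confirmed sighting"},
    {"proxy": "prepaid_phone_117", "location": "market_district",
     "discreteness": "non-discrete", "durability": "days",
     "assumption": "handset/SIM may be discarded at any time"},
]

def print_matrix(rows):
    header = ["proxy", "location", "discreteness", "durability", "assumption"]
    print(" | ".join(header))
    for row in rows:
        print(" | ".join(str(row[k]) for k in header))

print_matrix(assessment_matrix)
```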

Above all, analysts must be continually encouraged by their leadership and by intelligence consumers to clearly express uncertainty and “show their work.” Revealing flaws and weaknesses in a logical assessment is unfortunately often perceived as weakness, a tendency reinforced by consumers who attack probabilistic assessments and demand stronger, “less ambiguous” results. The limitations of all analytic methodologies must be expressed, but in ABI this becomes a particularly important point.

8

Patterns of Life and Activity Patterns

8.1 Entities and Patterns of Life

“Pattern(s) of life,” like many concepts in and around ABI, suffers from an inadequate written literature and varying applications depending on the speaker or writer’s context.

These concepts are familiar to law enforcement officers, who through direct surveillance techniques have constructed patterns of life on suspects for many years. One of the challenges in ABI explored in Section 8.2 is the use of indirect, sparse observations to construct entity patterns of life.

With discreteness, the varying uses of geographic locations over days, weeks, months, and even years are examined as part of ABI’s analytical process. Patterns of life are a related concept: A pattern of life is defined as the specific set of behaviors and movements associated with a particular entity over a given period of time. In simple terms, it is what a person does every day.

At times, the term “pattern of life” has been used to describe behaviors associated with a specific object (for instance, a ship) as well as the behaviors and activity observed in a particular location or region. An example would be criminal investigators staking out a suspect’s residence: they would learn the comings and goings of many different entities and see various activities taking place at the residence. In essence, they are observing small portions of the individual patterns of life of many different entities, but the totality of this activity is sometimes also described in the same way.

One truth about patterns of life is that they cannot be observed or fully understood through periodic observations.

In sum, four important principles emerge regarding the formerly nebulous concept of “pattern of life”:

1. A pattern of life is specific to an individual entity;
2. Longer observations provide better insight into an entity’s overall pattern of life;
3. Even the lengthiest surveillance cannot observe the totality of an individual’s pattern of life;
4. Traditional means of information gathering and intelligence collection reveal only a snapshot of an entity’s pattern of life.

While it can be tempting to generalize or assume on the basis of what is observed, it is important to account for what may occur during the times an entity goes unobserved by technical or human collection. In the law enforcement context, the manpower cost of around-the-clock surveillance quickly emerges, and the need to reassign officers to other tasks and investigations can quickly take precedence over maintaining surveillance on a given entity. Naturally, the advantage of technical collection over human collection in terms of temporal persistence is evident.

Small pieces of a puzzle, however, are still useful. So too are different ways of measuring the day-to-day activities conducted by specific entities of interest (e.g., Internet usage, driving habits, and phone calls). Commercial marketers have long taken advantage of this kind of data to target advertisements and directed sales pitches more precisely. These sub-aspects of an entity’s pattern of life are important in their own right and are the building blocks from which an overall pattern of life can be constructed.

8.2 Pattern-of-Life Elements

Pattern-of-life elements are the “building blocks” of a pattern of life. These elements can be measured in one or many different dimensions, each providing unique insight about entity behavior and ultimately contributing to a more complex overall picture of an entity. These elements can be broken down into two major categories:

• Partial observations, where the entity is observed for a fixed duration of time;

• Single-dimension measurements, where a particular aspect of behavior or activity is measured over time in order to provide insight into that specific dimension of entity behavior or activity.

The limitations of the sensor platform (resolution, spectrum, field of view) all play a role in the operator’s ability to assess whether the individual who emerged later was the same individual entity who entered the room, but even a high-confidence assessment is still an assessment, and there remains a nonzero chance that the entity of interest did not emerge from the room at all.

8.3 The Importance of Activity Patterns

Understanding the concept and implications of data aggregation is important in assessing both the utility and limitations of activity patterns. The first and most important rule of data aggregation is that aggregated data represents a summary of the original data set. Regardless of aggregation technique, no summary of data can (by definition) be as precise or accurate as the original set of data. Therefore, activity patterns constructed from data sets containing multiple entities will not be effective tools for disambiguation.

Effective disambiguation requires precise data, and summarized activity patterns cannot provide this.
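A tiny worked example of why aggregation defeats disambiguation, using invented detections:

```python
from collections import Counter

# Per-entity, per-hour detections (the original data)...
detections = [("ENTITY_A", 9), ("ENTITY_B", 9), ("ENTITY_A", 14), ("ENTITY_C", 9)]

# ...aggregated into an activity pattern: counts per hour across all entities.
activity_pattern = Counter(hour for _, hour in detections)
print(activity_pattern)  # Counter({9: 3, 14: 1})

# The summary shows when activity peaks, but "3 detections at 09:00" can no
# longer tell us which entities were present, so it cannot disambiguate.
```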

If activity patterns—large sets of data containing summarized activity or movement from multiple entities—are not useful for disambiguation, why mention them at all in the context of ABI? There are two primary reasons.

One is that activity patterns are, on a fairly frequent basis, mistakenly characterized as patterns of life, without properly distinguishing the boundary between the specific behavior of an individual and the general behavior of a group of individuals [4, 5].

The second reason is that despite this confusion, activity patterns can play an important role in the analytical process: They provide an understanding of the larger context in which a specific activity occurs.

8.4 Normalcy and Intelligence

“Normal” and “abnormal” are descriptors that appear often in discussions of ABI. Examined more closely, however, these descriptors are usually applied to activity pattern analysis, an approach to analysis distinct from ABI. The underlying logic works as follows (a minimal sketch follows the list):

• Understand and “baseline” what is normal;

• Alert when a change occurs (that is, when “abnormal” activity appears).
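A minimal baseline-and-alert sketch, with invented counts and an arbitrary three-sigma threshold:

```python
import statistics

# Hypothetical daily activity counts observed at a location over three weeks.
baseline_counts = [12, 15, 11, 14, 13, 12, 16, 15, 13, 12, 14, 11, 15, 13,
                   12, 14, 16, 13, 12, 15, 14]

mean = statistics.mean(baseline_counts)
stdev = statistics.stdev(baseline_counts)

def is_abnormal(todays_count, threshold_sigma=3.0):
    """Alert when today's activity departs sharply from the baseline."""
    return abs(todays_count - mean) > threshold_sigma * stdev

print(is_abnormal(13))  # False: consistent with the baseline
print(is_abnormal(42))  # True: flag for an analyst to investigate
```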

Cynthia Grabo, a former senior analyst at the Defense Intelligence Agency, defines warning intelligence as dealing with:
“(a) direct action by hostile states against the U.S. or its allies, involving the commitment of their regular or irregular armed forces;
(b) other developments, particularly conflicts, affecting U.S. security interests in which such hostile states are or might become involved;
(c) significant military action between other nations not allied with the U.S.; and
(d) the threat of terrorist action” [6].

Thus, warning is primarily concerned with what may happen in the future.

8.5 Representing Patterns of Life While Resolving Entities

Until this point, disambiguation/entity resolution and patterns of life have been discussed as separate concepts. In reality, however, the two processes often occur simultaneously. As analysts disambiguate proxies and ultimately resolve them to entities, pieces of an entity’s pattern of life are assembled. Once a proxy of interest is identified— even before entity resolution fully occurs—the process of monitoring a proxy creates observations: pattern-of-life elements.

8.5.1 Graph Representation

One of the most useful ways to preserve nonhierarchical information is in graph form. Rather than focus on specific technology, this section will describe briefly the concept of a graph representation and discuss benefits and drawbacks to the approach. Graphs have a number of advantages, but the single most relevant advantage is the ability to combine and represent relationships between data points from different sources.
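A bare-bones sketch of the idea using a plain adjacency structure (a graph library such as networkx provides the same capability with far more functionality); the node names and edge attributes are invented.

```python
from collections import defaultdict

# A simple multigraph: nodes are entities and locations, and edges carry the
# source and nature of each relationship so data points from different
# information sources can coexist on the same structure.
edges = defaultdict(list)

def add_relationship(a, b, **attrs):
    edges[a].append((b, attrs))
    edges[b].append((a, attrs))

add_relationship("ENTITY_A", "residence_14", source="imagery", kind="observed_at")
add_relationship("ENTITY_A", "ENTITY_B", source="sigint", kind="called")
add_relationship("ENTITY_B", "residence_14", source="field_report", kind="associated_with")

for neighbor, attrs in edges["ENTITY_A"]:
    print("ENTITY_A --", attrs["kind"], "-->", neighbor, f"({attrs['source']})")
```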

8.5.2 Quantitative and Temporal Representation

With quantitative and temporal data, alternate views may be more appropriate. Here, traditional views of representing periodicity and aggregated activity patterns are ideal; this allows appropriate generalization across various time scales. Example uses of quantitative representation for single-dimensional measurements (a pattern-of-life element) include the representation of periodic activity.

Figure 8.5 is an example of how analysts can discern any potential correlations between activity levels and day of the week and make recommendations accordingly. This view of data would be considered a single-dimensional measurement, and thus a pattern-of-life element.

8.6 Enabling Action Through Patterns of Life

One important question missing from most discussions of patterns of life is “Why construct them at all?” An entity’s pattern of life, whether the entity is friendly, neutral, or hostile, is simply a means to an end, like all intelligence methodologies. The goal is to provide not only decision advantage at the strategic level but also operational advantage at the tactical level.

Understanding events, transactions, and activity patterns also allows analysis to drive collection and identifies areas of significance where further collection operations can help reveal more information about previously hidden networks of entities. Patterns of life and pattern-of-life elements are just one representation of knowledge gained through the analytic process, ultimately contributing to overall decision advantage.

9

Incidental Collection

This chapter explores the concept of incidental collection by contrasting the change in the problem space: from Cold War–era orders of battle to dynamic targets and human networks on the 21st-century physical and virtual battlefields.

9.1 A Legacy of Targets

The modern intelligence system—in particular, technical intelligence collection capabilities—was constructed around a single adversary, the Soviet Union.

9.2 Bonus Collection from Known Targets

Incidental collection is a relatively new term, but it is not the first expression of the underlying concept. In imagery parlance, “bonus” collection has always been present, from the very first days of “standing target decks.” A simple example of this starts with a military garrison. The garrison might have several buildings for various purposes, including repair depots, vehicle storage, and barracks. In many cases, it might be located in the vicinity of a major population center, but with some separation depending on doctrine, geography, and other factors.

A satellite might periodically image this garrison, looking for vehicle movement, exercise starts, and other potentially significant activity. The garrison, however, only has an area of 5 km2, whereas the imaging satellite produces images that span almost 50 km by 10 km. The result, as shown in Figure 9.1, is that other locations outside of the garrison—the “primary target”—are captured on the image. This additional image area could include other structures, military targets, or locations of potential interest, all of which constitute “bonus” collection.

Incidental collection, rather than identifying a specific intelligence question as the requirement, focuses on the acquisition of large amounts of data over a relevant spatial region or technical data type and sets the volume of data obtained as a key metric of success. This helps address the looming problem of unknowns buried deep in activity data by maximizing the potential for spatiotemporal data correlations. Ultimately, this philosophy maximizes opportunities for proxy-entity pairing and entity resolution.

The Congressional Research Service concluded in 2013, “While the intelligence community is not entirely without its legacy ‘stovepipes,’ the challenge more than a decade after 9/11 is largely one of information overload, not information sharing. Analysts now face the task of connecting disparate, minute data points buried within large volumes of intelligence traffic shared between different intelligence agencies.”

9.4 Dumpster Diving and Spatial Archive and Retrieval

In intelligence, collection is focused almost exclusively on the process of prioritizing and obtaining through technical means the data that should be available “next.” In other words, the focus is on what the satellite will collect tomorrow, as opposed to what it has already collected, from 10 years ago to 10 minutes ago. But vast troves of data are already collected, many of which are quickly triaged and then discarded as lacking intelligence value. ABI’s pillar of sequence neutrality emphasizes the importance of spatial correlations across breaks in time, so maintaining and maximizing utility from data already being collected for very different purposes is in effect a form of incidental collection.

Repurposing of existing data through application of ABI’s georeference to discover pillar is colloquially called “dumpster diving” by some analysts.

Repurposing data through the process of data conditioning (extracting spatial, temporal, and other key metadata features and indexing based on those features) is a form of incidental collection and is critical to ABI. This is because the information in many cases was collected to service specific collection requirements and/or information needs but is then used to fill different information needs and generate new knowledge. Thus, the use of this repurposed data is incidental to the original collection intent. This process can be applied across all types of targeted, exquisite forms of intelligence. Individual data points, when aggregated into complete data sets, become incidentally collected data.
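A hedged sketch of data conditioning as described here: extract the common spatial and temporal metadata from heterogeneous records and index them into coarse space-time bins so that co-located records surface as correlation candidates. The record formats, grid size, and time bucketing are assumptions for illustration.

```python
from collections import defaultdict

# Heterogeneous records from collections tasked for other purposes; the only
# fields they (roughly) share are location and time.
records = [
    {"type": "image_chip", "lat": 33.512, "lon": 36.291, "time": "2015-03-01T09:30Z"},
    {"type": "rf_detect",  "lat": 33.514, "lon": 36.289, "time": "2015-03-01T09:42Z"},
    {"type": "field_note", "lat": 34.800, "lon": 38.990, "time": "2015-03-02T17:05Z"},
]

def space_time_key(rec, grid_deg=0.01):
    """Condition a record into a coarse spatiotemporal bin (grid cell + hour)."""
    cell = (round(rec["lat"] / grid_deg), round(rec["lon"] / grid_deg))
    hour = rec["time"][:13]  # e.g., '2015-03-01T09'
    return cell, hour

index = defaultdict(list)
for rec in records:
    index[space_time_key(rec)].append(rec["type"])

# Records that land in the same bin become candidates for correlation even
# though none of them was collected with this question in mind.
for key, types in index.items():
    print(key, types)
```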

Trajectory Magazine wrote in its Winter 2012 issue, “A group of GEOINT analysts deployed to Iraq and Afghanistan began pulling intelligence disciplines together around the 2004–2006 timeframe…these analysts hit upon a concept called ‘geospatial multi-INT fusion.’” Analysts recognized that the one field that all data had in common was location.

9.5 Rethinking the Balance Between Tasking and Exploitation

Incidental collection has direct and disruptive implications for several pieces of the traditional TCPED cycle. The first and perhaps most significant is drastically re-examining the nature of the requirements and tasking process traditionally employed in most intelligence disciplines.

The current formal process for developing intelligence requirements was established after the Second World War, and remains largely in use today. It replaced an ad hoc, informal process of gathering intelligence and professionalized the business of developing requirements [7].

Like most formal intelligence processes, the requirements process bears the legacy of the Cold War: it was tuned to the unique situation between 1945 and 1991, a bipolar contest between two major state adversaries, the United States and the Soviet Union. The process was thus built on assumptions that, while true at the time, have become increasingly questionable in a world of near-peer state competitors and increasingly powerful nonstate actors and organizations.

“Satisficing”—collecting just enough to fulfill the requirement—required a clear understanding of the goals from requirement development through management of the collection process. This, of course, meant that the information needs driving requirement generation had, by definition, to be clearly known, so that technical collection systems could be precisely tasked.

The modern era’s shift from clandestine and technical sensors to new, high-volume approaches to technical collection; to wide-area and persistent sensors with long dwell times; and to increasing use of massive volumes of information derived from open and commercial sources demands a parallel shift in the emphasis of the tasking process. Because of the massive volumes of information gained from incidentally collected—or constructed—data sets, tasking is no longer the most important function. Rather, focusing increasingly taxed exploitation resources becomes critical, as does the careful application of automation to prepare data in an integrated fashion (performing functions like feature extraction, georeferencing, and semantic understanding). “We must transition from a target-based, inductive approach to ISR that is centered on processing, exploitation, and dissemination to a problem-based, deductive, active, and anticipatory approach that focuses on end-to-end ISR operations,” according to Maj. Gen. John Shanahan, commander of the 25th Air Force, who adds that automation is “a must have.”

Focusing on exploiting specific pieces of data is only one part of the puzzle. A new paradigm for collection must be coupled to the shift from tasking collection to tasking exploitation. Rather than seeking answers to predefined intelligence needs, collection attuned to ABI’s methodology demands seeking data, in order to enable correlations and entity resolution.

9.6 Collecting to Maximize Incidental Gain

The concept of broad collection requirements is not new. ABI, however, is fed by broad requirements for specific data, a new dichotomy not yet confronted by the intelligence and law enforcement communities. This necessitates a change in the tasking and collection paradigms employed in support of ABI, dubbed coarse tasking for discovery.

Decomposing this concept identifies two important parts: the first is the concept of coarse tasking, and the second is the concept of discovery. Coarse tasking first moves collection away from the historical use of collection decks consisting of point targets: specific locations on the Earth. These decks have been used for both airborne and space assets, providing a checklist of targets to service. Coverage of the target in a deck-based system constitutes task fulfillment, and the field of view for a sensor can in many cases cover multiple targets at once.

The tasking model used in collection decks is specific, not coarse, providing the most relevant point of contrast with collection specifically designed for supporting ABI analysis.

Rather than measuring fulfillment via a checklist model, coarse tasking’s goal is to maximize the amount of data (and as a corollary, the amount of potential correlations) in a given collection window. This is made possible because the analytic process of spatiotemporal correlation is what provides information and ultimately meaning to analysts, and the pillar of data neutrality does not force analysts to draw conclusions from any one source, instead relying on the correlations between sources to provide value. Thus, collection for ABI can be best measured through volume, identification and conditioning of relevant metadata features, and spatiotemporal referencing.

9.7 Incidental Collection and Privacy

This approach can raise serious concerns regarding privacy. “Collect it all, sort it out later” is an approach that, when applied to signals intelligence, raised grave concern about the potential for collection against U.S. citizens.

Incidental collection has been portrayed in a negative light with respect to the Section 215 metadata collection program [9]. Part of this, however, is a direct result of the failure of intelligence policy and social norms to keep up with the rapid pace of technological development.

U.S. intelligence agencies, by law, cannot operate domestically, with narrow exceptions carved out for disaster relief functions in supporting roles to lead federal agencies.

While this book will not delve into textual analysis of existing law and policy, one issue that agencies will be forced to confront is the ability of commercial “big data” companies like Google and Amazon to conduct the kind of precision analysis formerly possible only in a government security context.

10

Data, Big Data, and Datafication

The principle of data neutrality espouses the use of new types of data in new ways. ABI represented a revolution in how intelligence analysts worked with a volume, velocity, and variety of data never before experienced.

10.1 Data

Deriving value from large volumes of disparate data is the primary objective of an intelligence analyst.

Data comprises the atomic facts, statistics, observations, measurements, and pieces of information that are the core commodity for knowledge workers like intelligence analysts. Data represents the things we know.

The discipline of intelligence used to be data-poor: the things we did not know and the data we could not obtain far outnumbered the things we knew and the data we had. Today, the digital explosion complicates the work environment because there is so much data that it is simply not possible to gather, process, visualize, and understand it all. Historical intelligence textbooks describe techniques for reasoning through limited data sets and making informed judgments, but analysts today can obtain exceedingly large quantities of data. The key skill now is the ability to triage, prioritize, and correlate information from a giant volume of data.

10.1.1 Classifying Data: Structured, Unstructured, and Semistructured

The first distinction in data management relies on classification of data into one of three categories: structured data, unstructured data, or semistructured data.

Structured Data

SQL works well with relational databases, but critics highlight the lack of portability of SQL queries across RDBMSs from different vendors due to implementation nuances of relational principles and query languages.

As data tables grow in size (number of rows), performance is limited, because many calculations must search the entire table.
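As a small structured-data illustration (using Python’s built-in sqlite3 rather than any particular enterprise RDBMS), the relational model keeps rows in typed tables and answers questions through SQL queries; the table and values are invented.

```python
import sqlite3

# A minimal relational example: structured rows in a typed table, queried with SQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE detections (entity TEXT, location TEXT, seen_on TEXT)")
conn.executemany(
    "INSERT INTO detections VALUES (?, ?, ?)",
    [("ENTITY_A", "residence_14", "2015-03-01"),
     ("ENTITY_B", "residence_14", "2015-03-01"),
     ("ENTITY_A", "cafe_02", "2015-03-02")],
)

# Which entities share a location and date with ENTITY_A?
rows = conn.execute(
    """SELECT DISTINCT d2.entity
       FROM detections d1 JOIN detections d2
         ON d1.location = d2.location AND d1.seen_on = d2.seen_on
       WHERE d1.entity = 'ENTITY_A' AND d2.entity != 'ENTITY_A'"""
).fetchall()
print(rows)  # [('ENTITY_B',)]
```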

Unstructured Data

“Not only SQL” (NoSQL) is a database concept for modeling data that does not fit well into the tabular model of relational databases. There are two classifications of NoSQL databases: key-value and graph.

One of the advantages of NoSQL databases is the property of horizontal scalability, which is also called sharding. Sharding partitions the database into smaller elements based on the value of a field and distributes this to multiple nodes for storage and processing. This improves the performance of calculations and queries that can be processed as subelements of a larger problem using a model called “scatter-gather” where individual processing tasks are farmed out to distributed data storage locations and the resulting calculations are reaggregated and sent to a central location.
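A toy, single-process illustration of sharding with scatter-gather aggregation; real NoSQL stores distribute shards across machines, and the field names here are invented.

```python
# Toy horizontal partitioning ("sharding") with scatter-gather aggregation.
# Real NoSQL stores spread shards across machines; here the shards are just
# in-memory lists and the "scatter" step is a loop.
NUM_SHARDS = 4
shards = [[] for _ in range(NUM_SHARDS)]

def shard_for(key):
    return hash(key) % NUM_SHARDS          # partition on the value of a field

def insert(record):
    shards[shard_for(record["entity"])].append(record)

for i in range(1000):
    insert({"entity": f"ENTITY_{i % 7}", "detections": 1})

# Scatter: each shard computes a partial count; gather: partials are combined.
def scatter_gather_count(entity):
    partials = [sum(r["detections"] for r in shard if r["entity"] == entity)
                for shard in shards]
    return sum(partials)

print(scatter_gather_count("ENTITY_3"))
```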

Semistructured Data

The term semistructured data is technically a subset of unstructured data and refers to tagged or taggable data that does not strictly follow a tabular or database record format. Examples include markup languages like XML and HTML where the data inside a file may be queried and analyzed with automated processes, but there is no simple query language that is universally applicable.

Semistructured databases do not require formal governance, but operating a large data enterprise without a governance model makes it difficult to find data and maintain interoperability across data sets.

10.1.2 Metadata

Metadata is usually defined glibly as “data about data.” The purpose of metadata is to organize, describe, and identify data. The schema of a database is one type of metadata. The categories of tags used for unstructured or semistructured data sets are also a type of metadata.

Metadata may include extracted or processed information from the actual content of the data.

Clip marks—analyst-annotated explanations of the content of the video—are considered metadata attached to the raw video stream.

Sometimes, the only metadata common between data sets is time and location; we consider these the central metadata values for ABI. The third primary metadata field is a unique identifier. This may be the ID of an individual piece of data or may be associated with a specific object or entity. Because one of the primary purposes of ABI is to disambiguate entities, and because analytic judgments must be traceable to the data used to create them, identifying data with unique identifiers (even across multiple databases) is key to enabling analysis techniques.

10.1.3 Taxonomies, Ontologies, and Folksonomies

A taxonomy is the systematic classification of information, usually into a hierarchical structure of entities of interest.

Because many military organizations and nation-state governments are hierarchical, they are easily modeled in a taxonomy. Also, because the types and classifications of military forces (e.g., aircraft, armored infantry, and battleships) are generally universal across countries, the relative strength of two different countries is easily compared. Large businesses can be described using the same type of information model. Taxonomies consist of classes but only one type of relationship: “is child/subordinate of.”

An ontology “provides a shared vocabulary that can be used to model a domain, that is, the type of objects and or concepts that exist and their properties and relations” (emphasis added) [6, p. 5]. Ontologies are formal and explicit, but unlike taxonomies, they need not be hierarchical.

Most modern problems have evolved from taxonomic classification to ontological classification to include the shared vocabulary for both objects and relationships. Ontologies pair well with the graph-based NoSQL database method. It is important to note that ontologies are formalized, which requires an existing body of knowledge about the problem and data elements.

With the proliferation of unstructured data, user-generated content, and democratized access to information management resources, the term folksonomy evolved to describe the method for collaboratively creating and translating tags to categorize information [7]. Unlike taxonomies and ontologies that are formalized, folksonomies evolve as user-generated tags are added to published content. Also, there is no hierarchical (parent-child) relationship between tags. This technique is useful for highly emergent or little-understood problems where an analyst describes attributes of a problem, observations, detections, issues, or objects but the data does not fit an existing model. Over time, as standard practices and common terms are developed, a folksonomy may be evolved into an ontology that is formally governed.
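The contrast among the three models can be sketched in a few lines of Python; the class names, relations, and tags below are illustrative assumptions rather than any standard vocabulary.

```python
# Taxonomy: strictly hierarchical, one relationship type (parent -> children).
taxonomy = {
    "military forces": ["air forces", "ground forces", "naval forces"],
    "ground forces": ["armored infantry", "artillery"],
}

# Ontology (simplified): a shared vocabulary of objects *and* typed relations.
ontology_facts = [
    ("armored infantry", "is_a", "ground forces"),
    ("ground forces", "subordinate_to", "joint command"),
    ("artillery", "supports", "armored infantry"),
]

# Folksonomy: free-form tags accumulate on content with no parent-child structure.
folksonomy = {}
def tag(item: str, label: str) -> None:
    folksonomy.setdefault(item, set()).add(label)

tag("report-042", "vehicle")
tag("report-042", "night-activity")
tag("report-043", "vehicle")

print(taxonomy["ground forces"])                       # child classes only
print([f for f in ontology_facts if f[1] != "is_a"])   # typed, non-hierarchical relations
print(folksonomy["report-042"])                        # flat, user-applied tags
```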

10.2 Big Data

Big data is an overarching term that refers to data sets so large and complex they cannot be stored, processed, or used with traditional information management techniques. Altamira’s John Eberhardt defines it as “any data collection that cannot be managed as a single instance.”

10.2.1 Volume, Velocity, and Variety…

In 2001, Gartner analyst Doug Laney introduced the now ubiquitous three-dimensional characterization of “big data” as increasing in volume, velocity, and variety [13]:

Volume: The increase in the sheer number and size of records that must be indexed, managed, archived, and transmitted across information systems.

Velocity: The dramatic speed at which new data is being created and the speed at which processing and exploitation algorithms must execute to keep up with and extract value from data in real time. In the big data paradigm, “batch” processing of large data files is insufficient.

Variety: While traditional data was highly structured, organized, and seldom disseminated outside an organization, today’s data sets are mostly unstructured, schema-less, and evolutionary. The number and type of data sets considered for any analytic task is growing rapidly.

Since Laney’s original description of “the three V’s,” a number of additional “V’s” have been proposed to characterize big data problems. Some of these are described as follows:

Veracity: The truth and validity of the data in question. This includes confidence, pedigree, and the ability to validate the results of processing algorithms applied across multiple data sets. Data is meaningless if it is wrong. Incorrect data leads to incorrect conclusions with serious consequences.

Vulnerability: The need to secure data from theft at rest and corruption in motion. Data analysis is meaningless if the integrity and security of the data cannot be guaranteed.

Visualization: Techniques for making sense of “big data” (see Chapter 13).

Variability: The variations in meaning across multiple data sets. Different sources may use the same term to mean different things, or different terms may have the same semantic meaning.

Value: The end result of data analysis. The ability to extract meaningful and actionable conclusions with sufficient confidence to drive strategic actions. Ultimately, value drives the consequence of data and its usefulness to support decision making.

Because intelligence professionals are called on to make judgments, and because these judgments rely on the underlying data, any failure to discover, correlate, trust, understand, or interpret data or processed and derived data and metadata diminishes the value of the entire intelligence process.

10.2.2 Big Data Architecture

Definitions of “big data” say that a fundamentally different approach to the storage, management, and processing of data is required under this new paradigm, but what technology advances and system architecture distinctions actually enable “big data”?

Most “big data” storage architectures use a key-value store based on Google’s BigTable. Accumulo is a variant of BigTable that was developed by the National Security Agency (NSA) beginning in 2008. Accumulo augments the BigTable data model to add cell-level security, which means that a user or algorithm seeking data from any cell in the database must satisfy a “column visibility” attribute of the primary key.
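The sketch below is a toy model of the cell-level security idea, not the Accumulo API: each cell carries a visibility attribute, and a read returns only the cells whose labels are satisfied by the caller's authorizations (real visibility expressions support richer boolean logic).

```python
# Toy model of cell-level security in a key-value store (simplified, not Accumulo).
from dataclasses import dataclass

@dataclass
class Cell:
    row: str
    column: str
    visibility: frozenset   # labels that must ALL be held to read the cell (simplified)
    value: str

def can_read(cell: Cell, authorizations: set) -> bool:
    """Simplified rule: the reader must hold every label on the cell."""
    return cell.visibility <= authorizations

table = [
    Cell("entity-17", "location", frozenset({"SECRET"}), "33.31,44.36"),
    Cell("entity-17", "remarks", frozenset({"SECRET", "REL-ALLIED"}), "..."),
]

user_auths = {"SECRET"}
visible = [c for c in table if can_read(c, user_auths)]
print([c.column for c in visible])   # only the 'location' cell is returned
```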

Hadoop relies on a distributed, scalable Java file system, the Hadoop distributed file system (HDFS), which stores large files (gigabytes to terabytes) across multiple nodes with replication to prevent data loss. Typically, the data is replicated three times, with two local copies and one copy at a geographically remote location.

Recognizing that information is increasingly produced by a number of high-volume, real-time devices and must be integrated and processed rapidly to derive value, IBM began the System S research project as “a programming model and an execution platform for user-developed applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams”.

10.2.3 Big Data in the Intelligence Community

Recognizing that information technology spending across the 17 intelligence agencies accounts for nearly 20% of National Intelligence Program funding, the intelligence community embarked on an ambitious consolidation program called the intelligence community information technology environment (IC-ITE), pronounced “eye-sight.”

10.3 The Datafication of Intelligence

In 2013, Kenneth Neil Cukier and Viktor Mayer-Schönberger introduced the term “datafication” to describe the emergent transformation of everything to data. “Once we datafy things, we can transform their purpose and turn the information into new forms of value,” they said.

Over the last 10 years, direct application of commercial “big data” analytic techniques to the intelligence community has missed the mark. There are a number of reasons for this, but first and foremost among them is the fact that a majority of commercial “big data” is exquisitely structured and represents near-complete data sets. For example, the record of credit card transactions at a major department store includes only credit card transactions at that department store, not a random string of numbers that might be UPC codes for fruits and vegetables at a cross-town grocery store. In contrast, intelligence data is typically either unstructured text captured in narrative form or a mixture of widely differing structures.

Furthermore, the nature of intelligence collection—the quest to obtain information on an adversary’s plans and intentions through a number of collection disciplines—all but ensures that the resulting data sets are “sparse,” representing only a small portion or sample of the larger picture from which they are drawn. The difficulty is that unlike the algorithm-based methods applied to commercial big data, it is impossible to know the bounds of the larger data set. Reliable and consistent inference of larger trends and patterns from a limited sample of an unbounded data set is impossible.

This does not mean intelligence professionals cannot learn from and benefit from the commercial sector’s experiences with big data. Indeed, industry has a great deal to offer with respect to data conditioning and system architecture. These aspects of commercial systems designed to enable “big data” analysis will be critical to designing the specialized systems needed to deal with the more complex and sparse types of data used by intelligence analysts.

10.3.1 Collecting It “All”

While commercial entities with consistent data sets may have success using algorithmic prediction of patterns based on dense data sets, the key common methodology between “big data” and ABI is the shift away from sampling information at periodic intervals toward examining massive amounts of information abductively and deductively to identify correlations.

Cukier and Mayer-Schönberger, in their assessment of the advantages of “n = all,” effectively argue for a move to a more deductive workflow based on data correlations, rather than causation based on sparse data. “n = all” and georeference to discover share a common intellectual heritage predicated on collecting all data in order to focus on correlations in a small portion of the data set: Collect broadly, condition data, and enable the analyst to both explore and ask intelligence questions of the data.

The approach of “n = all” is the centerpiece of former NSA director general Keith Alexander’s philosophy of “collect it all.” According to a former senior U.S. intelligence official, “rather than look for a single needle in the haystack, his approach was, ‘Let’s collect the whole haystack. Collect it all, tag it, store it… and whatever it is you want, you go searching for it.’”

10.3.2 Object-Based Production (OBP)

In 2013, Catherine Johnston, director of analysis at the Defense Intelligence Agency (DIA), introduced object-based production (OBP), a new way of organizing information in the datafication paradigm. Recognizing the need to adapt to growing complexity with diminishing resources, OBP implements data tagging, knowledge capture, and reporting by “organizing intelligence around objects of interest”.

OBP addresses several shortfalls. Studies have found that known information was poorly organized, partially because information was organized and compartmented by its owner. Reporting was confined to INT-specific stovepipes. Further compounding the problem, target-based intelligence was aligned around known facilities.

An object- and activity-based paradigm is more dynamic. It includes objects that move, such as vehicles and people, for which known information must be updated in real time. This complicates timely reporting on the status and location of these objects and creates a confusing situational awareness picture when conflicting information is reported from multiple information owners.

QUELLFIRE is the intelligence community’s program to deliver OBP as an enterprise service where “all producers publish to a unifying object model” (UOM) [27, p. 6]. Under QUELLFIRE, OBP objects are incorporated into the overall common intelligence picture (CIP)/common operating picture (COP) to provide situational awareness.

This focus means that the pedigree of the information is time-dominant and must be continually updated. Additional work on standards and tradecraft must be developed to establish a persistent, long-term repository of worldwide intelligence objects and their behaviors.

10.3.3 Relationship Between OBP and ABI

There has been general confusion about the differences between OBP and ABI, stemming from the fact that both methods focus on similar data types and are recognized as evolutions in tradecraft. OBP, which is primarily espoused by DIA, the nation’s all-source military intelligence organization, is focused on order-of-battle analysis, technical intelligence on military equipment, the status of military forces, and battle plans and intentions (essentially organizing the known entities). ABI, led by NGA, began with a focus on integrating multiple sources of geospatial information in a geographic region of interest—evolving with the tradecraft of georeference to discover—to the discovery and resolution of previously unknown entities based on their patterns of life. This tradecraft produces new objects for OBP to organize, monitor, warn against, and report… OBP, in turn, identifies knowledge gaps, the things that are unknown that become the focus of the ABI deductive, discovery-based process. Efforts to meld the two techniques are aided by the IC-ITE cloud initiative, which colocates data and improves discoverability of information through common metadata standards.

10.4 The Future of Data and Big Data

Former CIA director David Petraeus highlighted the challenges and opportunities of the Internet of Things in a 2012 speech at In-Q-Tel, the agency’s venture capital research group: “As you know, whereas machines in the 19th century learned to do, and those in the 20th century learned to think at a rudimentary level, in the 21st century, they are learning to perceive—to actually sense and respond” [33]. He further highlighted some of the enabling technologies developed by In-Q-Tel investment companies, listed as follows:

• Item identification, or devices engaged in tagging;
• Sensors and wireless sensor networks—devices that indeed sense and respond;
• Embedded systems—those that think and evaluate;
• Nanotechnology, allowing these devices to be small enough to function virtually anywhere.

In his remarks at the GigaOM Structure:Data conference in New York in 2013, CIA chief technology officer (CTO) Gus Hunt said, “It is nearly within our grasp to compute on all human generated information” [35]. This presents new challenges but also new opportunities for intelligence analysis.

11

Collection

Collection is about gathering data to answer questions. This chapter summarizes the key domains of intelligence collection and introduces new concepts and technologies that have codeveloped with ABI methods. It provides a high-level overview of several key concepts, describes several types of collection important to ABI, and summarizes the value of persistent surveillance in ABI analysis.

11.1 Introduction to Collection

Collection is the process of defining information needs and gathering data to address those needs.

The overarching term for remotely collected information is ISR (intelligence, surveillance, and reconnaissance).

The traditional INT distinctions are described as follows:

• Human intelligence (HUMINT): The most traditional “spy” discipline, HUMINT is “a category of intelligence derived from information collected and provided by human sources” [1]. This information is gathered through interpersonal contact: conversations, interrogations, or other similar means.

• Signals intelligence (SIGINT): Intelligence gathered by intercepting signals. In modern times, this refers primarily to electronic signals.

• Communications intelligence (COMINT): A subdiscipline of SIGINT, COMINT refers to the collection of signals that involve the communication between people, defined by the Department of Defense (DoD) as “technical information and intelligence derived from foreign communications by other than the intended recipients” [2]. COMINT exploitation includes language translation.

• Electronic intelligence (ELINT): A subdiscipline of SIGINT, ELINT refers to SIGINT that is not directly involved in communications. An example is the detection of an early-warning radar installation by sensing its emitted radio frequency (RF) energy. (This is not COMINT, because the radar isn’t carrying a communications channel.)

• Imagery intelligence (IMINT): Information derived from imagery, including aerial and satellite-based photography. The term “IMINT” has generally been superseded by “GEOINT.”

• Geospatial intelligence (GEOINT): A term coined in 2004 to include “imagery, IMINT, and geospatial information” [3], the term GEOINT reflects the concepts of fusion, integration, and layering of information about the Earth.

• Measurement and signature intelligence (MASINT): Technical intelligence gathering based on unique collection phenomena that focus on specialized signatures of targets or classes of targets.

• Open-source intelligence (OSINT): Intelligence derived from public, open information sources. This includes but is not limited to newspapers, magazines, speeches, radio stations, blogs, video-sharing sites, social-networking sites, and government reports.

Each agency was to produce INT-specific expert assessments of collected information that were then forwarded to the CIA for integrative analysis called all-source intelligence. The ABI principle of data neutrality posits that all sources of information should be considered equally as sources of intelligence.

There are a number of additional subdisciplines under these headings, including technical intelligence (TECHINT), acoustic intelligence (ACINT), financial intelligence (FININT), cyber intelligence (CYBINT), and foreign instrumentation signals intelligence (FISINT) [4].

Despite thousands of airborne surveillance sorties during 1991’s Operation Desert Storm, efforts to reliably locate Iraq’s mobile SCUD missiles were unsuccessful [5]. The problem was further compounded during counterterrorism and counterinsurgency operations in Iraq and Afghanistan, where the targets of intelligence collection are characterized by weak signals, ambiguous signatures, and dynamic movement. Movement intelligence (MOVINT) is one collection modality that contributes to ABI because it allows direct observation of events and collection of complete transactions.

11.5 Collection to Enable ABI

Traditional collection is targeted, whether the target is a human, a signal, or a geographic location. Since ABI is about gathering all the data and analyzing it with a deductive approach, an incidental collection approach as described in Chapter 9 is more appropriate.

11.6 Persistence: The All-Seeing Eye (?)

For over 2,000 years, military tactics have encouraged the use of the “high ground” for surveillance and reconnaissance of the enemy. From the use of hills and treetops to the advent of military ballooning in the U.S. Civil War to aerial and space-based reconnaissance, nations jockey for the ultimate surveillance high ground. The Department of Defense defines “persistent surveillance” as “a collection strategy that emphasizes the ability of some collection systems to linger on demand in an area to detect, locate, characterize, identify, track, target, and possibly provide battle damage assessment and retargeting in near or real time”.

Popular culture often depicts persistent collection like the all-seeing “Eye of Sauron” in Peter Jackson’s Lord of the Rings trilogy, the omnipresent computer in “Eagle Eye,” or the camera-filled casinos of Ocean’s Eleven, but persistence for intelligence is less about stare and more about sufficiency to answer questions.

In this textbook, persistence is the ability to maintain sufficient frequency, duration, temporal resolution, and spectral resolution to detect change, characterize activity, and observe behaviors. This chapter summarizes several types of persistent collection and introduces the concept of virtual persistence—the ability to maintain persistence of knowledge on a target or set of targets through integration of multiple sensing and analysis modalities.

11.7 The Persistence “Master Equation”

Persistence, P, can be defined in terms of eight fundamental factors:

P = P((x, y), z, T, f, λ, σ, θ, Π)

where

(x, y) is the area coverage, usually expressed in square kilometers;
z is the altitude, positive or negative, from the surface of the Earth;
T is the total time, duration, or dwell;
f (or t) is the frequency, exposure time, or revisit rate;
λ is the wavelength (of the electromagnetic spectrum) or the collection phenomenology; Δλ may also be used to represent the discretization of frequency for multisensor collects, spectral sensors, or other means;
σ is the accuracy or precision of the collection or analysis;
θ is the resolution or distinguishability and may also express the quality of the information;
Π is the cumulative probability, belief, or confidence in the information.

Combinations of these factors contribute to enhanced persistence.

12

Automated Activity Extraction

The New York Times reported that data scientists “spend from 50 to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data, before it can be explored for useful nuggets” [1]. Pejoratively referred to in the article as “janitor work,” these tasks, also referred to as data wrangling, data munging, data farming, and data conditioning, inhibit progress toward analysis [2]. Conventional wisdom and repeated interviews with data analytics professionals support the “80%” notion [3–5]. Many of these tasks are routine and repetitive: reformatting data into different coordinate systems or data formats; manually tagging objects in imagery and video; backtracking vehicles from destination to origin; and extracting entities and objects in text.

A 2003 study by DARPA in collaboration with several U.S. intelligence agencies found that analysts spend nearly 60% of their time performing research and preparing data for analysis [7]. The so-called bathtub curve, shown in Figure 12.1, shows how a significant percentage of an analyst’s time is spent looking for data (research), formatting it for analysis, writing reports, and working on other administrative tasks. The DARPA study examined whether advances in information technology such as collaboration and analysis tools could invert the “bathtub curve” so that analysts would spend less time wrestling with data and more time collaborating and performing analytic tasks, finding a significant benefit from new IT-enhanced methods.

As the volume, velocity, and variety of data sources available to intelligence analysts explodes, the problem of the “bathtub curve” gets worse.

12.2 Data Conditioning

Data conditioning is an overarching term for the preparation of data for analysis. It is often associated with “automation” because many of the preparation steps are performed by automated processes.

Historically, the phrase “extract, transform, load” (ETL) referred to a series of basic steps to prepare data for consumption by databases and data services. Often, nuanced ETL techniques were tied to a specific database architecture. Data conditioning includes the following:

• Extracting or obtaining the source data from various heterogeneous data sources or identifying a streaming data source (e.g., RSS feed) that provides continuous data input;
• Reformatting the data so that it is machine-readable and compliant with the target data model;
• Cleaning the data to remove erroneous records and adjusting date/time formats or geospatial coordinate systems to ensure consistency;
• Translating the language of data records as necessary;
• Correcting the data for various biases (e.g., geolocation errors);
• Enriching the data by adding derived metadata fields from the original source data (e.g., enriching spatial data to include a calculated country code);
• Tagging or labeling data with security, fitness-for-use, or other structural tags;
• Georeferencing the data to a consistent coordinate system or known physical locations;
• Loading the data into the target data store consistent with the data model and physical structure of the store;
• Validating that the conditioning steps have been done correctly and that queries produce results that meet mission criteria.

Data conditioning of source data into a spatiotemporal reference frame enables georeference to discover.
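A minimal sketch of a conditioning chain, covering only a few of the steps listed above and using invented field names and a crude country lookup, might look like the following.

```python
# Sketch of a data-conditioning chain: reformat, clean, enrich, validate.
from datetime import datetime, timezone

raw_records = [
    {"id": "a1", "time": "2015/06/01 14:30", "lat": "33.31", "lon": "44.36"},
    {"id": "a2", "time": "bad-value",        "lat": "48.85", "lon": "2.35"},
]

def reformat(rec):
    """Normalize date/time strings and coordinate types to the target data model."""
    rec = dict(rec)
    rec["time"] = datetime.strptime(rec["time"], "%Y/%m/%d %H:%M").replace(tzinfo=timezone.utc)
    rec["lat"], rec["lon"] = float(rec["lat"]), float(rec["lon"])
    return rec

def enrich(rec):
    """Add a derived metadata field (a crude country code from a bounding box)."""
    rec["country"] = "IQ" if 29 < rec["lat"] < 37 and 38 < rec["lon"] < 49 else "UNK"
    return rec

def condition(records):
    conditioned = []
    for raw in records:
        try:
            conditioned.append(enrich(reformat(raw)))
        except (ValueError, KeyError):
            # Cleaning step: erroneous records are dropped (or routed for review).
            continue
    # Validation step: every surviving record must be georeferenced and timestamped.
    assert all("country" in r and r["time"].tzinfo for r in conditioned)
    return conditioned

print(condition(raw_records))   # record a2 is removed by the cleaning step
```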

While the principle of data neutrality promotes data conditioning from multiple sources, this chapter focuses on a subset of automated activity extraction techniques including automated extraction and geolocation of entities/events from text, extraction of objects/activities from still imagery, and automated extraction of objects, features, and tracks from motion imagery.

12.3 Georeferenced Entity and Activity Extraction

While many applications perform automated text-parsing and entity extraction from unstructured text files, the simultaneous automated extraction of geospatial coordinates is central to enabling the ABI tradecraft of georeference to discover.
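As a bare-bones illustration (not LocateXT or Event Horizon), the following regular-expression sketch pulls decimal latitude/longitude pairs out of free text so the containing documents could be placed on a map; production tools handle thousands of coordinate format variations. The pattern and sample sentence are invented.

```python
# Extract decimal lat/lon pairs from unstructured text for georeferencing.
import re

text = ("Source reported a meeting near 33.3128, 44.3615 on 1 June; "
        "a second gathering was noted at 36.19, 43.99 two days later.")

COORD = re.compile(r"(-?\d{1,2}\.\d+)\s*,\s*(-?\d{1,3}\.\d+)")

for match in COORD.finditer(text):
    lat, lon = float(match.group(1)), float(match.group(2))
    # A crude sanity check keeps obviously invalid values off the map layer.
    if -90 <= lat <= 90 and -180 <= lon <= 180:
        print(f"georeferenced mention at ({lat}, {lon})")
```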

Marc Ubaldino, systems engineer and software developer at the MITRE Corporation, described a project called Event Horizon (EH) that “was borne out of an interest in trying to geospatially describe a volume of data—a lot of data, a lot of documents, a lot of things—and put them on a map for analysts to browse, and search, and understand, the details and also the trends” [9]. EH is a custom-developed tool to enable georeference to discover by creating a shapefile database of text documents that are georeferenced to a common coordinate system.

These simple tools are chained together to orchestrate data conditioning and automated processing steps. According to MITRE, these tools have “decreased the human effort involved in correlating multi-source, multi-format intelligence” [10, p. 47]. This multimillion-record corpus of data was first called the “giant load of intelligence” (GLINT). Later, this term evolved to geolocated intelligence.

One implementation of this method is the LocateXT software by ClearTerra, a “commercial technology for analyzing unstructured documents and extracting coordinate data, custom place names, and other critical information into GIS and other spatial viewing platforms” [11]. The tool scans unstructured text documents and features a flexible import utility for structured data (spreadsheets, delimited text). The tool supports all Microsoft Office documents (Word, PowerPoint, Excel), Adobe PDF, XML, HTML, Text, and more. Some of the tasks performed by LocateXT are described as follows [12]:

• Extracting geocoordinates, user-defined place names, dates, and other critical information from unstructured data;
• Identifying and extracting thousands of variations of geocoordinate formats;
• Creating geospatial layers from extracted locations;
• Configuring place name extraction using a geospatial layer or gazetteer file;
• Creating custom attributes by configuring keyword search and extraction controls.

12.4 Object and Activity Extraction from Still Imagery

Extraction of objects, features, and activities from imagery is a core element of GEOINT tradecraft and central to training as an imagery analyst. A number of tools and algorithms have been developed to aid in the manual, computer-assisted, and fully automated extraction from imagery. Feature extraction techniques for geoprocessing buildings, roads, trees, tunnels, and other features are widely applied to commercial imagery and used by civil engineers and city planners.

Most facial recognition approaches follow a four-stage model: Detect → Align → Represent → Classify. Much research is aimed at the classify step of the workflow. Facebook’s approach improves performance by applying three-dimensional modeling to the alignment step and deriving the facial representation using a deep neural network.

While Facebook’s research applies to universal face detection, classification in the context of the problem set is significantly easier. When the Facebook algorithm attempts to recognize individuals in submitted pictures, it has information about the “friends” to which the user is currently linked (in ABI parlance, a combination of contextual and relational information). It is much more likely that an individual in a user-provided photograph is related to the user through his or her social network. This property, called local partitioning, is useful for ABI. If an analyst can identify a subset of the data that is related to the target through one or more links (for example, a history of spatial locations previously visited), the dimensionality of the wide area search and targeting problem can be exponentially reduced.
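A small sketch of the local partitioning idea, with invented entities and link data, shows how restricting candidates to those with a prior spatial link shrinks the resolution problem before any classification is attempted.

```python
# Local partitioning: restrict the candidate set using prior spatial links.
candidate_entities = {
    "entity-A": {"visited": {"site-1", "site-4"}},
    "entity-B": {"visited": {"site-2"}},
    "entity-C": {"visited": {"site-1", "site-3"}},
}

def partition_by_links(detection_site: str, entities: dict) -> list:
    """Keep only entities with a prior link (a visit) to the detection location."""
    return [name for name, attrs in entities.items()
            if detection_site in attrs["visited"]]

# A new detection at site-1 only needs to be resolved against two candidates,
# not the full entity database: the search space shrinks before classification.
print(partition_by_links("site-1", candidate_entities))   # ['entity-A', 'entity-C']
```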

The authors of [45] note that “recognizing activities requires observations over time, and recognition performance is a function of the discrimination power of a set of observational evidence relative to the structure of a specific activity set.” They highlight the importance of increasingly proliferating persistent surveillance sensors and focus on activities identified by a critical geospatial, temporal, or interactive pattern in highly cluttered environments.

12.6.6 Detecting Anomalous Tracks

Another automation technique that can be applied to wide area data is the detection of anomalous behaviors—that is, “individual tracks where the track trajectory is anomalous compared to a model of typical behavior.”
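One simple way to operationalize this, sketched below with synthetic tracks and an assumed z-score threshold, is to summarize typical behavior statistically and flag tracks that depart sharply from it; fielded systems use far richer behavior models.

```python
# Flag tracks whose average speed departs from a simple model of typical behavior.
import statistics

def track_speed(track):
    """Average speed of a track given (t, x, y) points in consistent units."""
    dist = sum(((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5
               for (t1, x1, y1), (t2, x2, y2) in zip(track, track[1:]))
    return dist / (track[-1][0] - track[0][0])

normal_tracks = [
    [(0, 0, 0), (10, 5, 0), (20, 10, 0)],
    [(0, 0, 0), (10, 4, 1), (20, 9, 2)],
    [(0, 0, 0), (10, 6, 0), (20, 11, 1)],
]
speeds = [track_speed(t) for t in normal_tracks]
mu, sigma = statistics.mean(speeds), statistics.stdev(speeds)

def is_anomalous(track, z_threshold=3.0):
    """Flag a track whose speed departs sharply from the typical-behavior model."""
    return abs(track_speed(track) - mu) / sigma > z_threshold

fast_track = [(0, 0, 0), (10, 30, 0), (20, 60, 0)]
print(is_anomalous(fast_track))   # True: trajectory is anomalous vs. the model
```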

12.7 Metrics for Automated Algorithms

One of the major challenges in establishing revolutionary algorithms for automated activity extraction, identification, and correlation is the lack of standards with which to evaluate performance. DARPA’s PerSEAS program introduced several candidate metrics that are broadly applicable across this class of algorithms…

12.8 The Need for Multiple, Complementary Sources

In signal processing and sensor theory, the most prevalent descriptive plot is the receiver operating characteristic (ROC) curve, a plot of true positive rate (probability of detection) versus false alarm rate (FAR).
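The sketch below builds ROC points for a hypothetical detector by sweeping a score threshold over synthetic scores and truth labels and recording probability of detection against false alarm rate.

```python
# Build ROC points by sweeping a detection-score threshold.
def roc_points(scores, labels):
    """Return (false_alarm_rate, true_positive_rate) pairs over all thresholds."""
    positives = sum(labels)
    negatives = len(labels) - positives
    points = []
    for threshold in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y == 0)
        points.append((fp / negatives, tp / positives))
    return points

detector_scores = [0.95, 0.90, 0.70, 0.60, 0.40, 0.30, 0.20]
truth_labels    = [1,    1,    0,    1,    0,    0,    1]
for far, pd in roc_points(detector_scores, truth_labels):
    print(f"FAR={far:.2f}  Pd={pd:.2f}")
```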

12.9 Summary

Speaking at the Space Foundation’s National Space Symposium in May 2014, DNI James Clapper said, “We will have systems that are capable of persistence: staring at a place for an extended period of time to detect activity; to understand patterns of life; to warn us when a pattern is broken, when the abnormal happens; and even to use ABI methodologies to predict future actions.”

The increasing volume, velocity, and variety of “big data” introduced in Chapter 10 requires implementation of automated algorithms for data conditioning, activity/event extraction from unstructured data, object/activity extraction from imagery, and automated detection/tracking from motion imagery.

On the other hand, “Deus ex machina,” Latin for “god from the machine,” is a literary term for the resolution of a seemingly impossible and complex situation by irrational or divine means. Increasingly sophisticated “advanced analytic” algorithms create the potential to disconnect analysts from the data by simply placing trust in the “magical black box.” In practice, no analyst will trust any piece of data without documented provenance and without understanding exactly how it was collected or processed.

Automation also removes the analyst from the art of performing analysis. Early in the development of the ABI methodology, analysts were forced to do the dumpster diving and “data janitorial work” to condition their own data for analysis. In the course of doing so, analysts were close to each individual record, becoming intimately familiar with the metadata. Often, analysts stumbled upon anomalies or patterns in the course of doing this work. Automated data conditioning algorithms may reformat and “clean” data to remove outliers—but as any statistician knows—all the interesting behaviors are in the tails of the distribution.

13

Analysis and Visualization

Analysis of large data sets increasingly requires a strong foundation in statistics and visualization. This chapter introduces the key concepts behind data science and visual analytics. It demonstrates key statistical, visual, and spatial techniques for analysis of large-scale data sets. The chapter provides many examples of visual interfaces used to understand and analyze large data sets.

13.1 Introduction to Analysis and Visualization

Analysis is defined as “a careful study of something to learn about its parts, what they do, and how they are related to each other.”

The core competency of the discipline of intelligence is to perform analysis, deconstructing complex mysteries to understand what is happening and why. Figure 13.1 highlights key functional terms for analysis and the relative benefit/effort required for each.

13.1.1 The Sexiest Job of the 21st Century…

Big-budget motion pictures seldom glamorize the careers of statisticians, operations researchers, and intelligence analysts. Analysts are not used to being called “sexy,” but in a 2012 article in the Harvard Business Review, Thomas Davenport and D. J. Patil called out the data scientist as “the sexiest job of the 21st century” [2]. The term was first coined around 2008 to recognize the emerging job roles associated with large-scale data analytics at companies like Google, Facebook, and LinkedIn. Combining the skills of a statistician, a computer scientist, and a software engineer, data scientists have proliferated across the commercial and government sectors as competitive organizations recognize the significant value they derive from data analysis. Today we’re seeing an integration of data science and intelligence analysis, as intelligence professionals are being driven to discover answers in those giant haystacks of unstructured data.

According to Leek, Peng, and Caffo, the key tasks for data scientists are the following [4, p. 2].

• Defining the question;
• Defining the ideal data set;
• Obtaining and cleaning the data;
• Performing exploratory data analysis;
• Performing statistical prediction/modeling;
• Interpreting results;
• Challenging results;
• Synthesizing and writing up and distributing results.

Each of these tasks presents unique challenges. Often, the most difficult step of the analysis process is defining the question, which, in turn, drives the type of data needed to answer it. In a data-poor environment, the most time-consuming step was usually the collection of data; however, in a modern “big data” environment, a majority of analysts’ time is spent cleaning and conditioning the data for analysis. Many data sets—even publicly available ones—are seldom well-conditioned for instantaneous import and analysis. Often column headings, date formats, and even individual records may need reformatting before the data can even be viewed for the first time. Messy data is almost always an impediment to rapid analysis, and decision makers have little understanding of the chaotic data landscape experienced by the average data scientist.

13.1.2 Asking Questions and Getting Answers

The most important task for an intelligence analyst is determining what questions to ask. The traditional view of intelligence analysis places the onus of defining the question on the intelligence consumer, typically a policy maker.

Asking questions from a data-driven and intelligence problem–centric viewpoint is the central theme of this textbook and the core analytic focus for the ABI discipline. Sometimes, collected data limits the questions that may be asked. Unanswerable questions define additional data needs, met either through collection or through processing.

Analysis takes several forms, described as follows:

• Descriptive: Describe a set of data using statistical measures (e.g., census).
• Inferential: Develop trends and judgments about a larger population using a subset of data (e.g., exit polls).
• Predictive: Use a series of data observations to make predictions about the outcomes or behaviors of another situation (e.g., sporting event outcomes).
• Causal: Determine the impact on one variable when you change one or more other variables (e.g., medical experimentation).
• Exploratory: Discover relationships and connections by examining data in bulk, sometimes without an initial question in mind (e.g., intelligence data).

Because the primary focus of ABI is discovery, the main branch of analysis applied in this textbook is exploratory analysis.


13.2 Statistical Visualization

ABI analysis benefits from the combination of statistical processes and visualization. This section reviews some of the basic statistical functions that provide rapid insight into activities and behaviors.

13.2.1 Scatterplots

One of the most basic statistical tools used in data analysis and quality engineering is the scatterplot or scattergram, a two-dimensional Cartesian graph of two variables.

Correlation, discussed in detail in Chapter 14, is the statistical dependence between two variables in a data set.
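A quick sketch of this workflow using only the Python standard library pairs the plotted variables with a Pearson correlation coefficient; the two series are synthetic and roughly linear by construction.

```python
# Scatterplot-ready (x, y) pairs and their Pearson correlation coefficient.
import statistics

x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]   # roughly y = 2x

r = statistics.correlation(x, y)   # Pearson correlation (Python 3.10+)
print(f"Pearson r = {r:.3f}")      # close to 1.0: strong positive dependence

# To draw the scattergram itself, the same pairs can be handed to a plotting
# library, e.g. matplotlib's scatter(x, y), if one is available.
```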

13.2.2 Pareto Charts

Joseph Juran, a pioneer in quality engineering, developed the Pareto principle and named it after Italian economist Vilfredo Pareto. Also known as “the 80/20 rule,” the Pareto principle is a common rule of thumb that 80% of observations tend to come from 20% of the causes. In mathematics, this is manifest as a power law, also called the Pareto distribution, whose cumulative distribution function is given as:

F(x) = 1 − (x_m / x)^α for x ≥ x_m

where x_m is the minimum (scale) value of the distribution and α, the Pareto index, is a number greater than 1 that defines the slope of the Pareto distribution. For an “80/20” power law, α ≈ 1.161. The power law curve appears in many natural processes, especially in information theory. It was popularized in Chris Anderson’s 2006 book The Long Tail: Why the Future of Business Is Selling Less of More.

A variation on the Pareto chart, called the “tornado chart,” is shown in Figure 13.6. Like the Pareto chart, bars indicate the significance of each factor’s contribution to the response, but the bars are aligned about a central axis to show the direction of correlation between the independent and dependent variables.

Pareto charts are useful in formulating initial hypotheses about the possible dependence between two data sets or for identifying a collection strategy to reduce the standard error in a model. Statistical correlation using Pareto charts and the Pareto principle is one of the simplest methods for data-driven discovery of important relationships in real-world data sets.
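A short numerical check of the 80/20 claim follows from the Lorenz curve of the Pareto distribution, under which the share of the total contributed by the top fraction p of causes is p^(1 − 1/α).

```python
# Analytic check: with alpha ~= 1.161, the top 20% of causes account for ~80%
# of the effect, since the top-p share of a Pareto distribution is p**(1 - 1/alpha).
alpha = 1.161
p = 0.20
share = p ** (1 - 1 / alpha)
print(f"top {p:.0%} of causes account for {share:.1%} of the effect")  # ~80%
```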

13.2.3 Factor Profiling

Factor profiling examines the relationships between independent and dependent variables. The profiler in Figure 13.7 shows the predicted response (dependent variable) as each independent variable is changed while all others are held constant.

13.3 Visual Analytics

Visual analytics was defined by Thomas and Cook of the Pacific Northwest National Laboratory in 2005 as “the science of analytical reasoning facilitated by interactive visual interfaces” [8]. The emergent discipline combines statistical analysis techniques with increasingly colorful, dynamic, and interactive presentations of data. Intelligence analysts increasingly rely on software tools for visual analytics to understand trends, relationships and patterns in increasingly large and complex data sets. These methods are sometimes the only way to rapidly resolve entities and develop justifiable, traceable stories about what happened and what might happen next.

Large data volumes present several unique challenges. First, just transforming and loading the data is a cumbersome prospect. Most desktop tools are limited by the size of the data table that can be in memory, requiring partitioning before any analysis takes place. The a priori partitioning of a data set requires judgments about where the break points should be placed, and these may arbitrarily steer the analysis in the wrong direction. Large data sets also tend to exhibit “wash out” effects. The average data values make it very difficult to discern what is useful and what is not. In location data, many entities conduct perfectly normal transactions. Entities of interest exploit this effect to effectively hide in the noise.

As dimensionality increases, potential sources of causality and multivariable interactions also increase. This tends to wash out the relative contribution of each variable on the response. Again, another paradox arises: Arbitrarily limiting the data set means throwing out potentially interesting correlations before any analysis has taken place.

Analysts must take care to avoid visualization for the sake of visualization. Sometimes, the graphic doesn’t mean anything or reveal an interesting observation. Visualization pioneer Edward Tufte coined the term “chartjunk” to refer to these unnecessary visualizations in his 1983 book The Visual Display of Quantitative Information, saying:

The interior decoration of graphics generates a lot of ink that does not tell the viewer anything new. The purpose of decoration varies—to make the graphic appear more scientific and precise, to enliven the display, to give the designer an opportunity to exercise artistic skills. Regardless of its cause, it is all non-data-ink or redundant data-ink, and it is often chartjunk.

Michelle Borkin and Hanspeter Pfister of the Harvard School of Engineering and Applied Sciences studied over 5,000 charts and graphics from scientific papers, design blogs, newspapers, and government reports to identify characteristics of the most memorable ones. “A visualization will be instantly and overwhelmingly more memorable if it incorporates an image of a human-recognizable object—if it includes a photograph, people, cartoons, logos—any component that is not just an abstract data visualization,” says Pfister. “We learned that any time you have a graphic with one of those components, that’s the most dominant thing that affects the memorability.”

13.4 Spatial Statistics and Visualization

The concept of putting data on a map to improve situational awareness and understanding may seem trite, but the first modern geospatial computer system was not proposed until 1968. While working for the Department of Forestry and Rural Development for the Government of Canada, Roger Tomlinson introduced the term “geographic information system” (now GIS) as a “computer-based system for the storage and manipulation of map-based land data.”

13.4.1 Spatial Data Aggregation

A popular form of descriptive analysis using spatial statistics is the use of subdivided maps based on aggregated data. Typical uses include visualization of census data by tract, county, state, or other geographic boundaries.

Using a subset of data to make judgments about a larger population is called inferential analysis.

13.4.2 Tree Maps

Figure 13.10 shows a tree map of spatial data related to telephone call logs for a business traveler. A tree map is a technique for visualizing categorical, hierarchical data with nested rectangles.

In the Map of the Market, the boxes are categorized by industry, sized by market capitalization, and colored by the change in the stock price. A financial analyst can instantly see that consumer staples are down and basic materials are up. The entire map turns red during a broad sell-off. Variations on the Map of the Market segment the visualization by time so analysts can view data in daily, weekly, monthly, or 52-week increments.

The tree map is a useful visualization for patterns—in this case transactional patterns categorized by location and recipient. The eye is naturally drawn to anomalies in color, shape, and grouping. These form the starting point for further analysis of activities and transactions, postulates of relationships between data elements, and hypothesis generation about the nature of the activities and transactions as illustrated above. While tree maps are not inherently spatial, this application shows how spatial analysis can be incorporated and how the spatial component of transactional data generates new questions and leads to further analysis.

This type of analysis reveals a number of other interesting things about the entity’s (and related entities’) pattern-of-life elements. If all calls contain only two entities, then when entity A calls entity B, we know that both entities are (likely) not talking to someone else during that time.

13.4.3 Three-Dimensional Scatterplot Matrix

Three-dimensional colored dot plots are widely used in media and scientific visualization because they are complex and compelling. Although it seems reasonable to extend two-dimensional visualizations to three dimensions, these depictions are often visually overwhelming and seldom convey additional information that cannot be viewed using a combination of two-dimensional plots more easily synthesized by humans.

GeoTime is a spatial/temporal visualization tool that plays back spatially enabled data like a movie. It allows analysts to watch entities move from one location to another and interact through events and transactions. Patterns of life are also easily evident in this type of visualization.

Investigators and lawyers use GeoTime in criminal cases to tell the data-driven story of an entity’s pattern of movements and activities.

13.4.4 Spatial Storytelling

The latest technique incorporated into multi-INT tradecraft and visual analytics is spatial storytelling: using data about time and place to animate a series of events. Several statistical analysis tools have implemented storytelling or sequencing capabilities.

Online spatial storytelling communities have developed as collaborative groups of data scientists and geospatial analysts combine their tradecraft with increasingly proliferated spatially enabled data. The MapStory Foundation, a 501(c)(3) educational organization founded in 2011 by social entrepreneur Chris Tucker, developed an open, online platform for sharing stories about our world and how it develops over time.

13.5 The Way Ahead

Visualizing relationships across large, multidimensional data sets quickly requires more screen real estate than the average desktop computer provides. NGA’s 24-hour operations center, with a “knowledge wall” composed of 56 eighty-inch monitors, was inspired by the television show “24.”

There are several key technology areas that provide potential for another paradigm shift in how analysts work with data. Some of the benefits of these technological advances were highlighted by former CIA chief technology officer Gus Hunt at a 2010 forum on big data analytics:

• Elegant, powerful, and easy-to-use tools and visualizations;
• Intelligent systems that learn from the user;
• Machines to do more of the heavy lifting;
• A move to correlation, not search;
• A “curiosity layer”—signifying machines that are curious on your behalf.

14

Correlation and Fusion

Correlation of multiple sources of data is central to the integration before exploitation pillar of ABI and was the first major analytic breakthrough in combating adversaries that lack signature and doctrine.

Fusion, whether accomplished by a computer or a trained analyst, is central to this task. The suggested readings for this chapter alone fill several bookshelves.

Data fusion has evolved over 40 years into a complete discipline in its own right. This chapter provides a high-level overview of several key concepts in the context of ABI processing and analysis while directing the reader to further detailed references on this ever evolving topic.

14.1 Correlation

Correlation is the tendency of two variables to be related to each other. ABI relies heavily on correlation between multiple sources of information to understand patterns of life and resolve entities. The terms “correlation” and “association” are closely related.

A central principle of ABI is the need to correlate data from multiple sources—data neutrality—without a priori regard for the significance of data. In ABI, correlation leads to discovery of significance.

Scottish philosopher David Hume, in his 1748 book An Enquiry Concerning Human Understanding, defined association in terms of resemblance, contiguity [in time and place], and causality. Hume says, “The thinking on any object readily transports the mind to what is contiguous”—an eighteenth-century statement of georeference to discover [1].

14.1.1 Correlation Versus Causality

One of the most oft-quoted maxims in data analysis is “correlation does not imply causality.”

Many doubters of data science and mathematics use this sentence to deny any analytic result, dismissing a statistically valid fact as “pish posh.” Correlation can be a powerful indicator of possible causality and a clue for analysts and researchers to continue an investigative hypothesis.

In Thinking, Fast and Slow, Kahneman notes that we “easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once,” which is difficult for humans to do without great effort.

The only way to prove causality is through controlled experiments where all external influences are carefully controlled and their responses measured. The best example of controlled evaluation of causality is through pharmaceutical trials, where control groups, blind trials, and placebos are widely used.

In the discipline of intelligence, the ability to prove causality is effectively zero. Subjects of analysis are seldom participatory. Information is undersampled, incomplete, intermittent, erroneous, and cluttered. Knowledge lacks persistence. Sensors are biased. The most important subjects of analysis are intentionally trying to deceive you. Any medical trial conducted under these conditions would be laughably dismissed.

Remember: correlations are clues and indicators to dig deeper. Just as starts and stops are clues to begin transactional analysis at a location, the presence of a statistical correlation or a generic association between two factors is a hint to begin the process of deductive or abductive analysis there. Therefore, statistical analysis of data correlation is a powerful tool to combine information from multiple sources through valid, justifiable mathematical relationships, avoiding the human tendency to make subjective decisions based on known, unavoidable, irrational biases.

14.2 Fusion

The term “fusion” refers to “the process or result of joining two or more things together to form a single entity” [6]. Waltz and Llinas introduce the analogy of the human senses, which readily and automatically combine data from multiple perceptors (each with different measurement characteristics) to interpret the environment.

Fusion is the process of disambiguating two or more objects, variables, measurements, or entities that asserts—with a defined confidence value—that the two elements are the same. Simply put, the difference between correlation and fusion is that correlation says “these two elements are related,” while fusion says “these two objects are the same.”

Data fusion “combines data from multiple sensors and related information to achieve more specific inferences than could be achieved by using a single, independent sensor.”

The evolution of data fusion methods since the 1980s recognizes that fusion of information to improve decision making is a central process in many human endeavors, especially intelligence. Data fusion has been recognized as a mathematical discipline in its own right, and numerous conferences and textbooks have been dedicated to the subject.

The mathematical techniques for data fusion can be applied to many problems in information theory, including intelligence analysis and ABI. The literature highlights the often confusing terminology used by multiple communities (see Figure 14.1) that rely on similar mathematical techniques with related objectives. Target tracking, for example, is a critical enabler for ABI but is only a small subset of the techniques in data fusion and information fusion.

14.2.1 A Taxonomy for Fusion Techniques

Recognizing that “developing cost-effective multi-source information systems requires a standard method for specifying data fusion processing and control functions, interfaces, and associated databases,” the Joint Directors of Laboratories (JDL) proposed a general taxonomy for data fusion systems in the 1980s.
The fusion levels defined by the JDL are as follows:

• Source preprocessing, sometimes called level 0 processing, is data association and estimation below the object level. This step was added to the three-level model to reflect the need to combine elementary data (pixel level, signal level, character level) to determine an object’s characteristics. Detections are often categorized as level 0.
• Level 1 processing, object refinement, combines sensor data to estimate the attributes or characteristics of an object to determine position, velocity, trajectory, or identity. This data may also be used to estimate the future state of the object. Hall and Llinas include sensor alignment, association, correlation, correlation/tracking, and classification in level 1 processing [11].
• Level 2 processing, situation refinement, “attempts to develop a description of current relationships among entities and events in the context of their environment” [8, p. 9]. Contextual information about the object (e.g., class of object and kinematic properties), the environment (e.g., the object is present at zero altitude in a body of water), or other related objects (e.g., the object was observed coming from a destroyer) refines state estimation of the object. Behaviors, patterns, and normalcy are included in level 2 fusion.
• Level 3 processing, threat refinement or significance estimation, is a high-level fusion process that characterizes the object and draws inferences in the future based on models, scenarios, state information, and constraints. Most advanced fusion research focuses on reliable level 3 techniques. This level includes prediction of future events and states.
• Level 4 processing, process refinement, augmented the original model by recognizing that continued observations can feed back into fusion and estimation processes to improve overall system performance. This can include multiobjective optimization or new techniques to fuse data when sensors operate on vastly different timescales [12, p. 12].
• Level 5 processing, cognitive refinement or human/computer interface, recognizes the role of the user in the fusion process. Level 5 includes approaches for fusion visualization, cognitive computing, scenario analysis, information sharing, and collaborative decision making. Level 5 is where the analyst performs correlation and fusion for ABI.

The designation as “levels” may be confusing to apprentices in the field as there is no direct correlation to the “levels of knowledge” associated with knowledge management. The JDL fusion levels are more accurately termed categories; a single piece of information does not have to traverse all five “levels” to be considered fused.

According to Hall and Llinas, “the most mature area of data fusion process is level 1 processing,” and a majority of applications fall into or include this category. Level 1 processing relies on estimation techniques such as Kalman filters, multiple hypothesis tracking (MHT), or joint probabilistic data association.

Data fusion applications for detection, identification, characterization, extraction, location, and tracking of individual objects fall in level 1. Additional higher level techniques that consider the behaviors of that object in the context of its surroundings and possible courses of action are techniques associated with levels 2 and 3. These higher level processing methods are more akin to analytic “sensemaking” performed by humans, but computational architectures that perform mathematical fusion calculations may be capable of operating with greatly reduced decision timelines. A major concern, of course, is turning over what amounts to decision authority to silicon-based processors and mathematical algorithms, especially when those algorithms are difficult to calibrate.

14.2.2 Architectures for Data Fusion

The voluminous literature on data fusion includes several architectures for data fusion that follow the same pattern. Hall and Llinas propose three alternatives:

1. “Direct fusion of sensor data;

2. Representation of sensor data via feature vectors, with subsequent fusion of the feature vectors;

3. Processing of each sensor to achieve high-level inferences or decisions, which are subsequently combined [8].”

14.3 Mathematical Correlation and Fusion Techniques

Most architectures and applications for multi-INT fusion, at their cores, rely on various mathematical techniques for conditional probability assessment, hypothesis management, and uncertainty quantification/propagation. The most basic and widely used of these techniques, Bayes’s theorem, Dempster-Shafer theory, and belief networks, are discussed in this section.

14.3.1 Bayesian Probability and Application of Bayes’s Theorem

One of the most widely used techniques in information theory and data fusion is Bayes’s theorem. Named after English clergyman Thomas Bayes, who first documented it in 1763, the relation is a statement of conditional probability and its dependence on prior information. Bayes’s theorem calculates the probability of an event, A, given information about event B and information about the likelihood of one event given the other. The standard form of Bayes’s theorem is given as:

P(A|B) = P(B|A) P(A) / P(B)

where

P(A) is the prior probability, that is, the initial degree of belief in event A;

P(A|B) is the conditional probability of A given that event B occurred (also called the posterior probability in Bayes’s theorem);

P(B|A) is the conditional probability of B, given that event A occurred, also called the likelihood;

P(B) is the probability of event B.

This equation is sometimes generalized, for a set of mutually exclusive and exhaustive hypotheses A_1, …, A_n, as:

P(A_i|B) = P(B|A_i) P(A_i) / Σ_j P(B|A_j) P(A_j)

or, stated as “the posterior is proportional to the likelihood times the prior,” as:

P(A|B) ∝ P(B|A) P(A)

Sometimes, Bayes’s theorem is used to compare two competing statements or hypotheses and is given in the form:

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|¬A) P(¬A)]

where P(¬A) is the probability of the initial belief against event A, or 1–P(A), and P(B|¬A) is the conditional probability or likelihood of B given that event A is false.

Taleb explains that this type of statistical and inferential thinking is not intuitive to most humans due to evolution: “consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers.”

In the world of prehistoric man, those who treated these statements as equivalent probably increased their probability of staying alive. In the world of statistics, these are two different statements that can be represented probabilistically. Bayes’s theorem is useful in calculating quantitative probabilities of events based on observations of other events, using the property of transitivity and priors to calculate unknown knowledge from that which is known. In ABI, it is used to formulate a probability-based reasoning tree for observable intelligence events.
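A worked example of the two-hypothesis form above, with invented numbers, shows how quickly a strong detection can be tempered by a low prior.

```python
# Two-hypothesis Bayes example: a detector flags a rare vehicle type correctly
# 90% of the time, false-alarms 5% of the time, and the base rate is 1%.
p_A      = 0.01          # prior P(A): vehicle of interest is present
p_not_A  = 1 - p_A
p_B_A    = 0.90          # likelihood P(B|A): detector flags it when present
p_B_notA = 0.05          # P(B|not A): false-alarm rate

posterior = (p_B_A * p_A) / (p_B_A * p_A + p_B_notA * p_not_A)
print(f"P(A|B) = {posterior:.2f}")   # about 0.15 despite the 90% detection rate
```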

Application of Bayes’s Theorem to Object Identification

The frequency or rarity of the objects in step 1 of Figure 14.7 is called the base rate. Numerous studies of probability theory and decision-making show that humans tend to overestimate the likelihood of events with low base rates. (This tends to explain why people gamble). Psychologists Amos Tversky and Daniel Kahneman refer to the tendency to overestimate salient, descriptive, and vivid information at the expense of contradictory statistical information as the representativeness heuristic [15].

The CIA examined Bayesian statistics in the 1960s and 1970s as an estimative technique in a series of articles in Studies in Intelligence. An advantage of the method noted by CIA researcher Jack Zlotnick is that the analyst makes a “sequence of explicit judgments on discrete units” of evidence rather than “a blend of deduction, insight, and inference from the body of evidence as a whole” [16]. He notes, “The research findings of some Bayesian psychologists seem to show that people are generally better at appraising a single item of evidence than at drawing inferences from the body of evidence considered in the aggregate” [17].

The process for Bayesian combination of probability distributions from multiple sensors to produce a fused entity identification is shown in Figure 14.8. Each individual sensor produces a declaration matrix, which is that sensor’s declarative view of the object’s identity based on its attributes—sensed characteristics, behaviors, or movement properties. The individual probabilities are combined jointly using the Bayesian formula. Decision logic is applied to select the maximum a posteriori (MAP) declaration, the identity with the highest posterior probability of being correct. Decision rules can also be applied to threshold the MAP based on constraints or to apply additional deductive logic from other fusion processes. The resolved entity is declared (with an associated probability). When used with a properly designed multisensor data management system, this declaration maintains provenance back to the original sensor data.
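A simplified sketch of this combination step is shown below. The identity labels, priors, and likelihoods are hypothetical, the sensors are assumed to be independent, and the declaration matrices and decision rules of Figure 14.8 are reduced to a single MAP selection:

# Sketch: naive Bayesian combination of independent sensor declarations.
# Each sensor reports P(observation | identity) for every candidate identity.
prior = {"fishing_vessel": 0.6, "cargo_ship": 0.3, "patrol_boat": 0.1}

sensor_likelihoods = [
    {"fishing_vessel": 0.2, "cargo_ship": 0.5, "patrol_boat": 0.3},  # sensor 1
    {"fishing_vessel": 0.1, "cargo_ship": 0.7, "patrol_boat": 0.2},  # sensor 2
]

# Multiply the prior by each sensor's likelihood (independence assumption) ...
posterior = dict(prior)
for likelihood in sensor_likelihoods:
    for identity in posterior:
        posterior[identity] *= likelihood[identity]

# ... then normalize so the posterior probabilities sum to 1.
total = sum(posterior.values())
posterior = {identity: p / total for identity, p in posterior.items()}

# Decision logic: select the maximum a posteriori (MAP) identity.
map_identity = max(posterior, key=posterior.get)
print(posterior)
print("Declared identity:", map_identity)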

Bayes’s formula provides a straightforward, easily programmed mathematical formulation for probabilistic combination of multiple sources of information; however, it does not provide a straightforward representation for a lack of information. A modification of Bayesian probability called Dempster-Shafer theory introduces additional factors to address this concern.

14.3.2 Dempster-Shafer Theory

Dempster-Shafer theory is a generalization of Bayesian probability based on the integration of two principles. The first is belief functions, which allow degrees of belief for one question to be derived from subjective probabilities for a related question. The degree to which the belief is transferrable depends on how related the two questions are and on the reliability of the source [18]. The second principle is Dempster’s composition rule, which allows independent beliefs to be combined into an overall belief about each hypothesis [19]. According to Shafer, “The theory came to the attention of artificial intelligence (AI) researchers in the early 1980s, when they were trying to adapt probability theory to expert systems” [20]. Dempster-Shafer theory differs from the Bayesian approach in that the belief in a fact and the belief in its opposite need not sum to 1; that is, the method accounts for the possibility of “I don’t know.” This is a useful property for multisource fusion, especially in the intelligence domain.

Multisensor fusion approaches use Dempster-Shafer theory to discriminate objects by treating observations from multiple sensors as belief functions based on the object and properties of the sensor. Instead of combining conditional probabilities for object identification as shown in Figure 14.8, the process for fusion proposed by Waltz and Llinas is modified for the Dempster-Shafer approach in Figure 14.9. Mass functions replace conditional probabilities, and Dempster’s combination rule accounts for the additional uncertainty when the sensor cannot resolve the target. The normalization that removes mass assigned to the null (conflicting) set is also important because it resolves the incongruity associated with sensors that disagree.

Although this formulation adds more complexity, it is still easily programmed into a multisensor fusion system. The Dempster-Shafer technique can also be easily applied to quantify beliefs and uncertainty for multi-INT analysis including the beliefs of members of an integrated analytic working group.

In plain English, (14.10) says, “The joint belief in hypothesis H given evidence E1 and E2 is the sum of 1) the belief in H given confirmatory evidence from both sensors, 2) the belief in H given confirmatory evidence from sensor 1 [GEOINT] but with uncertainty about the result from sensor 2, and 3) the belief in H given confirmatory evidence from sensor 2 [SIGINT] but with uncertainty about the result from sensor 1.”
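The same combination can be sketched in a few lines of Python over the frame {H, Hc, U}, where U is the uncertain (“I don’t know”) mass. The sensor masses below are hypothetical placeholders, since the original inputs to the Zazikistan example are not listed here, so the output illustrates the structure of the calculation rather than reproducing the numbers that follow:

# Sketch of Dempster's rule for two sources over the frame {H, Hc, U}.
def dempster_combine(m1, m2):
    """Combine two mass functions with keys 'H', 'Hc', and 'U'."""
    # Conflict: one source supports H while the other supports Hc.
    d = m1["H"] * m2["Hc"] + m1["Hc"] * m2["H"]
    k = 1.0 - d  # normalization factor
    return {
        # Belief in H: both confirm H, or one confirms H and the other is uncertain.
        "H": (m1["H"] * m2["H"] + m1["H"] * m2["U"] + m1["U"] * m2["H"]) / k,
        # Belief in Hc follows the same pattern.
        "Hc": (m1["Hc"] * m2["Hc"] + m1["Hc"] * m2["U"] + m1["U"] * m2["Hc"]) / k,
        # Residual uncertainty: both sources are uncertain.
        "U": (m1["U"] * m2["U"]) / k,
    }

# Hypothetical sensor masses (belief for H, against H, and uncertain).
geoint = {"H": 0.7, "Hc": 0.1, "U": 0.2}
sigint = {"H": 0.6, "Hc": 0.1, "U": 0.3}

fused = dempster_combine(geoint, sigint)
print({key: round(value, 3) for key, value in fused.items()})
# Plausibility of H = belief in H plus the remaining uncertainty.
print("Plausibility of H:", round(fused["H"] + fused["U"], 3))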

The final answer is normalized to remove dissonant values by dividing each belief by (1-d). The final beliefs are the following:

• Zazikistan is under a coup = 87.9%;
• Zazikistan is not under a coup = 7.8%;
• Unsure = 4.2%;
• Total = 100%.

Repeating the steps above, substituting E1*E2 for the first belief and E3 as the second belief, Dempster’s rule can again be used to combine the beliefs for the three sensors:

• Zazikistan is under a coup = 95.3%;
• Zazikistan is not under a coup = 4.2%;
• Unsure = 0.5%;
• Total = 100%.

In this case, because the HUMINT source contributes only 0.2 toward the belief in H, the probability that Zazikistan is under a coup actually decreases slightly. Also, because this source has a reasonably low value of u, the uncertainty was further reduced.

While the belief in the coup hypothesis is 92.7%, the plausibility is slightly higher because the analyst must consider the belief in hypothesis H as well as the uncertainty in the outcome. The plausibility of a coup is 93%. Similarly, the plausibility of Hc also requires addition of the uncertainty: 7.3%. These values sum to greater than 100% because the uncertainty between H and Hc makes either outcome equally likely in the rare case that all four sensors produce faulty evidence.

14.3.3 Belief Networks

A belief network is “a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG)” [22]. This technique allows chaining of conditional probabilities—calculated using either Bayesian theory or Dempster-Shafer theory—for a sequence of possible events. In one application, Paul Sticha and Dennis Buede of HumRRO and Richard Rees of the CIA developed APOLLO, a computational tool for reasoning through a decision-making process by evaluating probabilities in Bayesian networks. Bayes’s rule is used to multiply conditional probabilities across each edge of the graph to develop an overall probability for certain outcomes, with the uncertainty for each explicitly quantified.
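The chaining idea can be illustrated with a toy three-node chain. The sketch below is a hand-rolled calculation for a hypothetical network A -> B -> C (it is not an example from APOLLO, and the probabilities are invented):

# Toy belief network: a chain A -> B -> C with conditional probability tables.
# Marginal probabilities are propagated along the edges of the DAG.
p_a = 0.3                                   # P(A = true)
p_b_given_a = {True: 0.8, False: 0.2}       # P(B = true | A)
p_c_given_b = {True: 0.9, False: 0.1}       # P(C = true | B)

# P(B) = sum over states of A of P(B | A) * P(A)
p_b = p_b_given_a[True] * p_a + p_b_given_a[False] * (1.0 - p_a)

# P(C) = sum over states of B of P(C | B) * P(B)
p_c = p_c_given_b[True] * p_b + p_c_given_b[False] * (1.0 - p_b)

print(f"P(B) = {p_b:.3f}")  # 0.380
print(f"P(C) = {p_c:.3f}")  # 0.404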

14.4 Multi-INT Fusion For ABI

Correlation and fusion are central to the analytic tradecraft of ABI. One application of multi-INT correlation is to use the strengths of one data source to compensate for the weaknesses of another. SIGINT, for example, is exceptionally accurate in verifying identity through proxies because many signals carry unique identifiers that are broadcast in the signal, like the Maritime Mobile Service Identity (MMSI) in the ship-based automatic identification system (AIS). Signals may also include temporal information, and SIGINT is accurate in the temporal domain because radio waves propagate at the speed of light—if sensors are equipped with precise timing capabilities, the exact time of the signal emanation is easily calculated. Unfortunately, because direction finding and triangulation are usually required to locate the point of origin, SIGINT has measurable but significant errors in position (depending on the properties of the collection system). GEOINT, on the other hand, is exceptionally accurate in both space and time. A GEOINT collection platform knows when and where it was when it passively collected photons coming off a target, and this position and timing error can be easily propagated to the ground using a sensor model.

The ability to correlate results of wide area collection with precise, entity resolving, narrow field-of-regard collection systems is an important use for ABI fusion and an area of ongoing research.
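One simple way to express this kind of cross-INT association in code is a spatial-temporal gate: a candidate correlation is accepted only if two observations fall within position and time windows sized to the sensors’ errors. The sketch below is a simplification under assumed error bounds, with invented coordinates and identifiers; it is not an operational correlation algorithm:

import math

# Sketch: gate-based correlation of a SIGINT detection (good identity, coarse
# position) with a GEOINT detection (precise position, no identity).
def within_gate(obs1, obs2, max_km, max_seconds):
    """Accept the association if the two observations are close in space and time."""
    # Equirectangular approximation is adequate for short distances.
    km_per_deg = 111.0
    dx = (obs1["lon"] - obs2["lon"]) * km_per_deg * math.cos(math.radians(obs1["lat"]))
    dy = (obs1["lat"] - obs2["lat"]) * km_per_deg
    distance_km = math.hypot(dx, dy)
    dt = abs(obs1["time"] - obs2["time"])
    return distance_km <= max_km and dt <= max_seconds

sigint_hit = {"lat": 36.80, "lon": -76.30, "time": 1000, "mmsi": "366999001"}
geoint_hit = {"lat": 36.83, "lon": -76.27, "time": 1040}

# Gate sized by the (assumed) SIGINT position error and revisit gap.
if within_gate(sigint_hit, geoint_hit, max_km=10.0, max_seconds=120):
    print("Correlated: GEOINT track inherits identity", sigint_hit["mmsi"])
else:
    print("No correlation")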

Hard/soft fusion is a promising area of research that enables validated correlation of information from structured remote sensing assets with human-focused data sources, including the tacit knowledge of intelligence analysts. Gross et al. developed a framework for fusing hard and soft data under a university research initiative that included ground-based sensors, tips to law enforcement, and local news reports [28]. The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF) is the leader of a multi-university research initiative (MURI) developing “a generalized framework, mathematical techniques, and test and evaluation methods to address the ingestion and harmonized fusion of hard and soft information in a distributed (networked) Level 1 and Level 2 data fusion environment.”

14.5 Examples of Multi-INT Fusion Programs

In addition to numerous university programs developing fusion techniques and frameworks, automated fusion of multiple sources is an area of ongoing research and technology development, especially at DARPA, federally funded research and development centers (FFRDCs), and the national labs.

14.5.1 Example: A Multi-INT Fusion Architecture

Simultaneously, existing knowledge from other sources (in the form of node and link data) and tracking of related entities is combined through association analysis to produce network information. The network provides context to the fused multi-INT entity/object tracks to enhance entity resolution. Although entity resolution can be performed at level 2, this example highlights the role of human-computer interaction (level 5 fusion) in integration-before-exploitation to resolve entities. Finally, feedback from the fused entity/object tracks is used to retask GEOINT resources for collection and tracking in areas of interest.

14.5.2 Example: The DARPA Insight Program

In 2010, DARPA instituted the Insight program to address a key shortfall for ISR systems: “the lack of a capability for automatic exploitation and cross-cueing of multi-intelligence (multi-INT) sources.”

Methods like Bayesian fusion and Dempster-Shafer theory are used to combine new information inputs from steps 3, 4, 7, and 8. Steps 2 and 6 involve feedback to the collection system based on correlation and analysis to obtain additional sensor-derived information to update object states and uncertainties.

The ambitious program seeks to “automatically exploit and cross-cue multi-INT sources” to improve decision timelines and automatically collect the next most important piece of information to improve object tracks, reduce uncertainty, or anticipate likely courses of action based on models of the threat and network.

14.6 Summary

Analysts practice correlation and fusion in their workflows—the art of multi-INT. However, there are numerous mathematical techniques for combining information with quantifiable precision. Uncertainty can be propagated through multiple calculations, giving analysts a hard, mathematically rigorous measure of confidence in multisource data. The art and science of correlation do not play well together, and the art often wins over the science. Most analysts prefer to correlate information they “feel” is related. Methods that integrate structured mathematical techniques with the human-centric process of developing judgments are still needed. Hybrid techniques that quantify results with science but leave room for interpretation may advance the tradecraft but are not widely used in ABI today.

15
Knowledge Management

Knowledge is value-added information that is integrated, synthesized, and contextualized to make comparisons, assess outcomes, establish relationships, and engage decision-makers in discussion. Although some texts make a distinction between data, information, knowledge, wisdom, and intelligence, we define knowledge as the totality of understanding gained through repeated analysis and synthesis of multiple sources of information over time. Knowledge is the essence of an intelligence professional and is how he or she answers questions about key intelligence issues. This chapter introduces elements of the wide-ranging discipline of knowledge management in the context of ABI tradecraft and analytic methods. Concepts for capturing tacit knowledge, linking data using dynamic graphs, and sharing intelligence across a complex, interconnected enterprise are discussed.

15.1 The Need for Knowledge Management

Knowledge management is a term that first appeared in the early 1990s, recognizing that the intellectual capital of an organization provided competitive advantage and must be managed and protected. Knowledge management is a comprehensive strategy to get the right information to the right people at the right time so they can do something about it. So-called intelligence failures seldom stem from the inability to collect information, but rather from the failure to integrate intelligence with sufficient confidence to make decisions that matter.

Gartner’s Duhon defines knowledge management (KM) as:

a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise’s information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers [4].

This definition frames the discussion in this chapter. The ABI approach treats data and knowledge as an asset— and the principle of data neutrality says that all these assets should be considered as equally “important” in the analysis and discovery process. Some knowledge management approaches are concerned with knowledge capture, that is, the institutional retention of intellectual capital possessed by retiring employees. Others are concerned with knowledge transfer, the direct conveyance of such a body of knowledge from older to younger workers through observation, mentoring, comingling, or formal apprenticeships. Much of the documentation in the knowledge management field focuses on methods for interviewing subject matter experts or eliciting knowledge through interviews. While these are important issues in the intelligence community, “increasingly, the spawning of knowledge involves a partnership between human cognition and machine-based intelligence”.

15.1.1 Types of Knowledge

Knowledge is usually categorized into two types, explicit and tacit knowledge. Explicit knowledge is that which is formalized and codified. This is sometimes called “know what” and is much easier to document, store, retrieve, and manage. Knowledge management systems that focus only on the storage and retrieval of explicit knowledge are more accurately termed information management systems, as most explicit knowledge is stored in databases, memos, documents, reports, notes, and digital data files.

Tacit knowledge is intuitive knowledge based on experience, sometimes called “know-how.” Tacit knowledge is difficult to document, quantify, and communicate to another person. This type of knowledge is usually the most valuable in any organization. Lehaney notes that the only sustainable competitive advantage and “the locus of success in the new economy is not in the technology, but in the human mind and organizational memory” [6, p. 14]. Tacit knowledge is intensely contextual and personal. Most people are not aware of the tacit knowledge they inherently possess and have a difficult time quantifying what they “know” outside of explicit facts.

In the intelligence profession, explicit knowledge is easily documented in databases, but of greater concern is the ability to locate, integrate, and disseminate information held tacitly by experienced analysts. ABI requires analytic mastery of explicit knowledge about an adversary (and his or her activities and transactions) but also requires tacit knowledge of an analyst to understand why the activities and transactions are occurring.

While many of the methods in this textbook refer to the physical manipulation of explicit data, it is important to remember the need to focus on the “art” of multi-INT spatiotemporal analytics. Analysts exposed to repeated patterns of human activity develop a natural intuition to recognize anomalies and understand where to focus analytic effort. Sometimes, this knowledge is problem-, region-, or group-specific. More often than not, individual analysts have a particular knack or flair for understanding activities and transactions of certain types of entities. Often, tacit knowledge provides the turning point in a difficult investigation or unravels the final clue in a particularly difficult intelligence mystery, but according to Meyer and Hutchinson, individuals tend to place more weight on concrete and vivid information than on information that is intangible and ambiguous [7, p. 46]. Effectively translating ambiguous tacit knowledge like feelings and intuition into explicit information is critical in creating intelligence assessments. This is the primary paradox of tacit knowledge; it often has the greatest value but is the most difficult to elicit and apply.

Amassing facts rarely leads to innovative and creative breakthroughs. The most successful organizations are those that can leverage both types of knowledge for dissemination, reproduction, modification, access, and application throughout the organization.

15.2 Discovery of What We Know

As the amount of information available continues to grow, knowledge workers spend an increasing amount of their day messaging, tagging, creating documents, searching for information, and performing queries and other information-focused activities [9, p. 114]. New concepts are needed to enhance discovery and reduce the entropy associated with knowledge management.

15.2.1 Recommendation Engines

Content-based filtering identifies items based on an analysis of the item’s content as specified in metadata or description fields. Parsing algorithms extract common keywords to build a profile for each item (in our case, for each data element or knowledge object). Content-based filtering systems generally do not evaluate the quality, popularity, or utility of an item.

In collaborative filtering, “items are recommended based on the interests of a community of users without any analysis of item content” [10]. Collaborative filtering ties interest in items to particular users that have rated those items. This technique is used to identify similar users: the set of users with similar interests. In the intelligence case, these would be analysts with an interest in similar data.

A key to Amazon’s technology is the ability to calculate the related-item table offline, store this mapping structure, and then efficiently use the table in real time for each user based on current browsing history. This process is described in Figure 15.1. Items the customer has previously purchased or favorably reviewed, and items currently in the shopping cart, are treated with greater affinity than items browsed and discarded. The “gift” flag is used to identify anomalous items purchased for another person with different interests so these purchases do not skew the personalized recommendation scheme.
In ABI knowledge management, knowledge about entities, locations, and objects is available through object metadata. Content-based filtering identifies similar items based on location, proximity, speed, or activities in space and time. Collaborative filtering can be used to discover analysts working on similar problems based on their queries, downloads, and exchanges of related content. This is an internal application of the “who-where” tradecraft, adding additional metadata into “what” the analysts are discovering and “why” they might need it.
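A minimal sketch of both filtering styles over a toy analyst-activity matrix is shown below. The analyst names, knowledge objects, and similarity measure are invented for illustration:

# Sketch: collaborative filtering over toy analyst activity.
# Which analysts touched which knowledge objects (set membership = accessed).
activity = {
    "analyst_a": {"port_report", "ship_track_set", "coastal_map"},
    "analyst_b": {"port_report", "ship_track_set", "airfield_report"},
    "analyst_c": {"airfield_report", "runway_imagery"},
}

def jaccard(s1, s2):
    """Simple similarity between two sets of accessed items."""
    return len(s1 & s2) / len(s1 | s2)

# Collaborative filtering: find the most similar analyst and recommend the
# items that analyst has used but the target analyst has not.
target = "analyst_a"
others = [a for a in activity if a != target]
most_similar = max(others, key=lambda a: jaccard(activity[target], activity[a]))
recommendations = activity[most_similar] - activity[target]
print("Most similar analyst:", most_similar)
print("Recommended items:", recommendations)

# Content-based filtering would instead compare item metadata (location,
# time window, activity type) against the items the analyst already uses.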

15.2.2 Data Finds Data

An extension of the recommendation engine concept to next-generation knowledge management is an emergent concept introduced by Jeff Jonas and Lisa Sokol called “data finds data.” In contrast to traditional query-based information systems, Jonas and Sokol posit that if the system knew what the data meant, it would change the nature of data discovery by allowing systems to find related data and therefore interested users. They explain:

With interest in a soon-to-be-released book, a user searches Amazon.com for the title, but to no avail. The user decides to check every month until the book is released. Unfortunately for the user, the next time he checks, he finds that the book is not only sold out but now on back order, awaiting a second printing. When the data finds the data, the moment this book is available, this data point will discover the user’s original query and automatically email the user about the book’s availability.

Jonas, now chief scientist of IBM’s entity analytics group, joined the firm after Big Blue acquired his company, SRD, in 2005. SRD developed data accumulation and alerting systems for Las Vegas casinos, including non-obvious relationship analysis (NORA), famous for breaking the notorious MIT card-counting ring chronicled in the best-selling book Bringing Down the House [12]. He postulates that knowledge-rich but discovery-poor organizations derive increasing wealth from connecting information across previously disconnected information silos using real-time “perpetual analytics” [13]. Instead of processing data using large bulk algorithms, each piece of data is examined and correlated on ingest for its relationship to all other accumulated content and knowledge in the system. Such a context-aware data ingest system is a computational embodiment of integrate before exploit, as each new piece of data is contextualized, sequence-neutrally of course, with the existing knowledge corpus across silos. Jonas says, “If a system does not assemble and persist context as it comes to know it… the computational costs to reconstruct context after the fact are too high.”

Jonas elaborated on the implication of these concepts in a 2011 interview after the fact: “There aren’t enough human beings on Earth to think of every smart question every day… every piece of data is the question. When a piece of data arrives, you want to take that piece of data and see how it relates to other pieces of data. It’s like a puzzle piece finding a puzzle” [15, 16]. Treating every piece of data as the question means treating data as queries and queries as data.

15.2.3 Queries as Data

Information requests and queries are themselves a powerful source of data that can be used to optimize knowledge management systems or assist the user in discovering content.

15.3 The Semantic Web

The semantic web is a proposed evolution of the World Wide Web from a document-based structure designed to be read by humans to a network of hyperlinked, machine-readable web pages that contain metadata about the content and how multiple pages are related to each other. The semantic web is about relationships.

Although the original concept was proposed in the 1960s, the term “semantic web” and its application to an evolution of the Internet was popularized by Tim Berners-Lee in a 2001 article in Scientific American. He posits that the semantic web “will usher in significant new functionality as machines become much better able to process and ‘understand’ the data that they merely display at present” [18].
The semantic web is based on several underlying technologies, but the two basic and powerful ones are the extensible markup language (XML) and the resource description framework (RDF).

15.3.1 XML

XML is a World Wide Web Consortium (W3C) standard for encoding documents that is both human-readable and machine-readable [19].

XML can also be used to create relational structures. Consider the example shown below where there are two data sets consisting of locations (locationDetails) and entities (entityDetails) (adapted from [20]):

<locationDetails>
  <location ID="L1">
    <cityName>Annandale</cityName>
    <stateName>Virginia</stateName>
  </location>
  <location ID="L2">
    <cityName>Los Angeles</cityName>
    <stateName>California</stateName>
  </location>
</locationDetails>
<entityDetails>
  <entity locationRef="L1">
    <entityName>Patrick Biltgen</entityName>
  </entity>
  <entity locationRef="L2">
    <entityName>Stephen Ryan</entityName>
  </entity>
</entityDetails>

Instead of including the location of each entity as an attribute within entityDetails, the structure above links each entity to a location using the attribute locationRef. This is similar to how a foreign key works in a relational database. One advantage of using this structure is that each entity can be linked to multiple locations, especially when location is a function of time and activity.

XML is a flexible, adaptable resource for creating documents that are context-aware and can be machine parsed using discovery and analytic algorithms.
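As a sketch of such machine parsing, the structure above can be resolved with Python’s standard xml.etree.ElementTree module. The records are wrapped in a single root element here only because well-formed XML requires one:

import xml.etree.ElementTree as ET

# The locationDetails and entityDetails records from the example, wrapped in a
# single root element so the document is well-formed.
xml_doc = """
<records>
  <locationDetails>
    <location ID="L1"><cityName>Annandale</cityName><stateName>Virginia</stateName></location>
    <location ID="L2"><cityName>Los Angeles</cityName><stateName>California</stateName></location>
  </locationDetails>
  <entityDetails>
    <entity locationRef="L1"><entityName>Patrick Biltgen</entityName></entity>
    <entity locationRef="L2"><entityName>Stephen Ryan</entityName></entity>
  </entityDetails>
</records>
"""

root = ET.fromstring(xml_doc)

# Build a lookup table from location ID to city/state.
locations = {
    loc.get("ID"): (loc.findtext("cityName"), loc.findtext("stateName"))
    for loc in root.iter("location")
}

# Resolve each entity's locationRef, much like a foreign key join.
for entity in root.iter("entity"):
    name = entity.findtext("entityName")
    city, state = locations[entity.get("locationRef")]
    print(f"{name} -> {city}, {state}")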

15.4 Graphs for Knowledge and Discovery

Graphs are a mathematical construct consisting of nodes (vertices) and edges that connect them.

Problems can be represented by graphs and analyzed using the discipline of graph theory. In information systems, graphs represent communications, information flows, library holdings, data models, or the relationships in a semantic web. In intelligence, graph models are used to represent processes, information flows, transactions, communications networks, order-of-battle, terrorist organizations, financial transactions, geospatial networks, and the pattern of movement of entities. Because of their widespread applicability and mathematical simplicity, graphs provide a powerful construct for ABI analytics.
Graphs are drawn with dots or circles representing each node and an arc or line between nodes to represent edges as shown in Figure 15.3. Directional graphs use arrows to depict the flow of information from one node to another. When graphs are used to represent semantic triplestores, the direction of the arrow indicates the direction of the relationship or how to read the simple sentence. Figure 5.8 introduced a three-column framework for documenting facts, assessments, and gaps. This information is depicted as a knowledge graph in Figure 15.3. Black nodes highlight known information, and gray nodes depict knowledge gaps. Arrow shading differentiates facts from assessments and gaps. Shaded lines show information with a temporal dependence like the fact that Jim used to live somewhere else (a knowledge gap because we don’t know where). Implicit relationships can also be documented using the knowledge graph: Figure 5.8 contains the fact “the coffee shop is two blocks away from Jim’s office.”
The knowledge graph readily depicts knowns and unknowns. Because this construct can also be depicted using XML tags or RDF triples, it also serves as a machine-readable construct that can be passed to algorithms for processing.

Graphs are useful as a construct for information discovery when the user doesn’t necessarily know the starting point for the query. By starting at any node (a tip-off), an analyst can traverse the graph to find related information. This workflow is called “know-something-find-something.” A number of heuristics for graph-based search assist in the navigation and exploration of large, multidimensional graphs that are difficult to visualize and navigate manually.
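A minimal sketch of this traversal, using a plain adjacency-list graph and a breadth-first search out to a fixed number of edges, is shown below (the node names are invented and loosely echo the Jim example above):

from collections import deque

# Toy knowledge graph as an adjacency list; edges are treated as undirected.
graph = {
    "Jim": ["office", "coffee_shop", "white_sedan"],
    "office": ["Jim", "downtown_block_4"],
    "coffee_shop": ["Jim", "downtown_block_4"],
    "white_sedan": ["Jim", "parking_garage"],
    "downtown_block_4": ["office", "coffee_shop"],
    "parking_garage": ["white_sedan"],
}

def related_within(graph, start, max_edges):
    """Know-something-find-something: breadth-first search from a tip-off node."""
    visited = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if visited[node] == max_edges:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited[neighbor] = visited[node] + 1
                queue.append(neighbor)
    return {n: d for n, d in visited.items() if n != start}

# Starting from a single tip-off, return everything within two degrees.
print(related_within(graph, "white_sedan", max_edges=2))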

Deductive reasoning techniques integrate with graph analytics through manual and algorithmic filtering to quickly answer questions and convey knowledge from the graph to the human analyst. Analysts filter by relationship type, time, or complex queries about the intersection between edges and nodes to rapidly identify known information and highlight knowledge gaps.

15.4.1 Graphs and Linked Data

Chapter 10 introduced graph databases as a NoSQL construct for storing data that requires a flexible, adaptable schema. Graphs—and graph databases—are a useful construct for indexing intelligence data that is often held across multiple databases without requiring complex table joins and tightly coupled databases.

Using linked data, an analyst working issue “C” can quickly discover the map and report directly connected to C, as well as the additional reports linked to related objects. C can also be packaged as a “super object” that contains an instance of all linked data within some number of degrees of separation—calculated by the number of graph edges—away from the starting object. The super object is essentially a stack of relationships to the universal resource identifiers (URIs) for each of the related objects, documented using RDF triples or XML tags.

15.4.2 Provenance

Provenance is the chronology of the creation, ownership, custody, change, location, and status of a data object. The term was originally used in relation to works of art to “provide contextual and circumstantial evidence for its original production or discovery, by establishing, as far as practicable, its later history, especially the sequences of its formal ownership, custody, and places of storage” [22]. In law, the concept of provenance refers to the “chain of custody” or the paper trail of evidence. This concept logically extends to the documentation of the history of change of data in a knowledge system.
The W3C implemented a standard for provenance in 2013, documenting it as “information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness” [23]. The PROV-O standard is a web ontology language 2.0 (OWL2) ontology that maps the PROV logical data model to RDF [24]. The ontology describes hundreds of classes, properties, and attributes.

Maintaining provenance across a knowledge graph is critical to assembling evidence against hypotheses. Each analytic conclusion must be traced back to each piece of data that contributed to the conclusion. Although ABI methods can be enhanced with automated analytic tools, analysts at the end of the decision chain need to understand how data was correlated, combined, and manipulated through the analysis and synthesis process. Ongoing efforts across the community seek to document a common standard for data interchange and provenance tracking.

15.4.3 Using Graphs for Multianalyst Collaboration

In the legacy, linear TCPED model, when two agencies wrote conflicting reports about the same object, both reports promulgated to the desk of the all-source analyst. He or she adjudicated the discrepancies based on past experience with both sources. Unfortunately, the incorrect report often persisted—to be discovered in the future by someone else—unless it was recalled. Using the graph construct to organize around objects makes it easier to discover discrepancies that can be more quickly and reliably resolved. When analyzing data spatially, these discrepancies are instantly visible to the all-source analyst because the same object simultaneously appears in two places or states. Everything happens somewhere, and everything happens in exactly one place.

15.5 Information and Knowledge Sharing

Intelligence Community Directive Number 501, Discovery and Dissemination or Retrieval of Information Within the Intelligence Community, was signed by the DNI on January 21, 2009. Designed to “foster an enduring culture of responsible sharing within an integrated IC,” the document introduced the term “responsibility to provide” and created a complex relationship with the traditional mantra of “need to know”.

It directed that all authorized information be made available and discoverable “by automated means” and encouraged data tagging of mission-specific information. ICD 501 also defined “stewards” for collection and analytic production as:

An appropriately cleared employee of an IC element, who is a senior official, designated by the head of that IC element to represent the [collection/analytic] activity that the IC element is authorized by law or executive order to conduct, and to make determinations regarding the dissemination to or the retrieval by authorized IC personnel of [information collected/analysis produced] by that activity [25].

With a focus on improving discovery and dissemination of information, rather than protecting or hoarding information from authorized users, data stewards gradually replace data owners in this new construct. The data doesn’t belong to a person or agency. It belongs to the intelligence community. When applied, this change in perspective has a dramatic impact on how information is viewed and shared.

Prusak notes that knowledge “clumps” in groups and that connecting individuals into groups and networks wins over knowledge capture.

Organizations that promote the development of social networks and the free exchange of information witness the establishment of self-organizing knowledge groups. Bahra says that there are three main conditions to assist in knowledge sharing [29, p. 56]:

• Reciprocity: One helps a colleague, thinking that he or she will receive valuable knowledge in return (even in the future).
• Reputation: Reputation, or respect for one’s work and expertise, is power, especially in the intelligence community.
• Altruism: Self-gratification and a passion for or interest in a topic.

These three factors contribute to the simplest yet most powerful transformative sharing concepts in the intelligence community.

15.6 Wikis, Blogs, Chat, and Sharing

The historically compartmented nature of the intelligence community and its “need to know” policy is often cited as an impediment to information sharing.

Andrus’s essay, “The Wiki and the Blog: Toward a Complex Adaptive Intelligence Community,” which postulated that “the intelligence community must be able to dynamically reinvent itself by continuously learning and adapting as the national security environment changes,” won the intelligence community’s Galileo Award and was partially responsible for the start-up of Intellipedia, a classified wiki based on the platform and structure of Wikipedia [31]. Shortly after its launch, the tool was used to write a high-level intelligence assessment on Nigeria. Thomas Fingar, the former deputy director of National Intelligence for Analysis (DDNI/A), cited Intellipedia’s success in rapidly characterizing Iraqi insurgents’ use of chlorine in improvised explosive devices, highlighting the lack of bureaucracy inherent in the self-organized model.

While Intellipedia is the primary source for collaborative, semiformalized information sharing on standing and emergent intelligence topics, most analysts collaborate informally using a combination of chat rooms, Microsoft SharePoint sites, and person-to-person chat messages.

Because the ABI tradecraft reduces the focus on producing static intelligence products to fill a queue, ABI analysts tend to collaborate and share around in-work intelligence products. These include knowledge graphs on adversary patterns of life, shapefile databases, and other in-work depictions that are often not suitable as finished intelligence products. In fact, the notion of “ABI products” is a source of continued consternation as standards bodies attempt to define what is new and different about ABI products, as well as how to depict the dynamics of human patterns of life on what is often a static PowerPoint chart.
Managers like reports and charts as a metric of analytic output because the total number of reports is easy to measure; however, management begins to question the utility of “snapping a chalk line” on an unfinished pattern-of-life analysis just to document a “product.” Increasingly, interactive products that use dynamic maps and charts are used for spatial storytelling. Despite all the resources allocated to glitzy multimedia products and animated movies, these products are rarely used because they are time-consuming, expensive, and usually late to need.

15.8 Summary

Knowledge management is a crucial enabler for ABI because tacit and explicit knowledge about activities, patterns, and entities must be discovered and correlated across multiple disparate holdings to enable the principle of data neutrality. Increasingly, new technologies like graph data stores, recommendation engines, provenance tracing, wikis, and blogs contribute to the advancement of ABI because they enhance knowledge discovery and understanding. Chapter 16 describes approaches that leverage these types of knowledge to formulate models to test alternative hypotheses and explore what might happen.

16

Anticipatory Intelligence

After reading chapters on persistent surveillance, big data processing, automated extraction of activities, analysis, and knowledge management, you might be thinking that if we could just automate the steps of the workflow, intelligence problems would solve themselves. Nothing could be further from the truth. In some circles, ABI has been conflated with predictive analytics and automated sensor cross-cueing, but the real power of the ABI method is in producing anticipatory intelligence. Anticipation is about considering alternative futures and what might happen…not what will happen. This chapter describes technologies and methods for capturing knowledge to facilitate exploratory modeling, “what-if ” trades, and evaluation of alternative hypotheses.

16.1 Introduction to Anticipatory Intelligence

Anticipatory intelligence is a systemic way of thinking about the future that focuses our long range foveal and peripheral vision on emerging conditions, trends, and threats to national security. Anticipation is not about prediction or clairvoyance. It is about considering a space of potential alternatives and informing decision-makers on their likelihood and consequence. Modeling and simulation approaches aggregate knowledge on topics, indicators, trends, drivers, and outcomes into a theoretically sound, analytically valid framework for exploring alternatives and driving decision advantage. This chapter provides a survey of the voluminous approaches for translating data and knowledge into models, as well as various approaches for executing those models in a data-driven environment to produce traceable, valid, supportable assessments based on analytic relationships, validated models, and real-world data.

16.1.1 Prediction, Forecasting, and Anticipation

In a quote usually attributed to physicist Niels Bohr or baseball player Yogi Berra, “Prediction is hard, especially about the future.” The terms “prediction,” “forecasting,” and “anticipation” are often used interchangeably but represent significantly different perspectives, especially when applied to the domain of intelligence analysis.
A prediction is a statement of what will or is likely to happen in the future. Usually, predictions are given as a statement of fact: “in the future, we will all have flying cars.” This statement lacks any estimate of likelihood, timing, confidence, or other factors that would justify the prediction.

Forecasts, though usually used synonymously with predictions, are often accompanied by quantification and justification. Meteorologists generate forecasts: “There is an 80% chance of rain in your area tomorrow.”

Forecasts of the distant future are usually inaccurate because underlying models fail to account for disruptive and nonlinear effects.

While predictions are generated by pundits and crystal ball-waving fortune tellers, forecasts are generated analytically based on models, assumptions, observations, and other data.

Anticipation is the act of expecting or foreseeing something, usually with presentiment or foreknowledge. While predictions postulate the outcome with stated or implied certainty and forecasts provide a mathematical estimate of a given outcome, anticipation refers to the broad ability to consider alternative outcomes. Anticipatory analysis combines forecasts, institutional knowledge (see Chapter 15), and other modeling approaches to generate a series of “what if ” scenarios. The important distinction between prediction/forecasting and anticipation is that anticipation identifies what may happen. Anticipatory analysis sometimes allows analysis and quantification of possible causes. Sections 16.2–16.6 in this chapter will describe modeling approaches and their advantages and disadvantages for anticipatory intelligence analysts.

16.2 Modeling for Anticipatory Intelligence

Anticipatory intelligence is based on models. Models, sometimes called “analytic models,” provide a simplified explanation of how some aspect of the real world works to yield insight. Models may be tacit or explicit. Tacit models are based on knowledge and experience.
They exist in the analyst’s head and are executed routinely during decision processes whether the analyst is aware of it or not. Explicit models are documented using a modeling language, diagram, description, or other representation.

16.2.1 Models and Modeling

The most basic approach is to construct a model based on relevant context and use the model to understand or visualize a result. Another approach, comparative modeling (2), uses multiple models with the same contextual input data to provide a common output. This approach is useful for exploring alternative hypotheses or examining multiple perspectives to anticipate what may happen and why. A third approach called model aggregation combines multiple models to allow for complex interactions. The third approach has been applied to human socio-cultural behavior (HSCB) modeling and human domain analytics on multiple programs over the past 20 years with mixed results (see Section 16.6). Human activities and behaviors and their ensuing complexity, nonlinearity, and unpredictability, represent the most significant modeling challenge facing the community today.

16.2.2 Descriptive Versus Anticipatory/Predictive Models

Descriptive models present the salient features of data, relationships, or processes. They may be as simple as a diagram on a white board or as complex as a wiring diagram for a distributed computer network. Analysts often use descriptive models to identify the key attributes of a process (or a series of activities).

16.3 Machine Learning, Data Mining, and Statistical Models

Machine learning traces its origins to the 17th century, when German mathematician Leibniz began postulating formal mathematical relationships to represent human logic. In the 19th century, George Boole developed a series of relations for deductive processes (now called Boolean logic). By the mid-20th century, English mathematician Alan Turing and John McCarthy of MIT began experimenting with “intelligent machines,” and the term “artificial intelligence” was coined. Machine learning is a subfield of artificial intelligence concerned with the development of algorithms, models, and techniques that allow machines to “learn.”

Natural intelligence, often thought of as the ability to reason, is a representation of logic, rules, and models. Humans are adept pattern-matchers. Memory is a type of historical cognitive recall. Although the exact mechanisms for “learning” in the human brain are not completely understood, in many cases it is possible to develop algorithms that mimic human thought and reasoning processes. Many machine-learning techniques including rule-based learning, case-based learning, and unsupervised learning are based on our understanding of these cognitive processes.

16.3.1 Rule-Based Learning

In rule-based learning, a series of known logical rules are encoded directly as an algorithm. This technique is best suited for directly translating a descriptive model into executable code.

Rule-based learning is the most straightforward way to encode knowledge into an executable model, but it is also the most brittle for obvious reasons. The model can only represent the phenomena for which rules have been encoded. This approach significantly reinforces traditional inductive-based analytic approaches and is highly susceptible to surprise.
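A rule-based model can be as simple as a handful of encoded conditions. The sketch below uses hypothetical vessel-behavior rules and thresholds to show the pattern; it also illustrates the brittleness, because any behavior without a rule falls through to “unknown”:

# Sketch: a descriptive model encoded directly as rules (hypothetical thresholds).
def classify_track(track):
    """Label a vessel track using hand-written rules."""
    if track["speed_knots"] < 1 and track["hours_stationary"] > 12:
        return "loitering"
    if track["speed_knots"] > 25:
        return "fast transit"
    if track["ais_transmitting"] is False:
        return "possible dark vessel"
    return "unknown"  # brittle: anything not covered by a rule is unclassified

print(classify_track({"speed_knots": 0.4, "hours_stationary": 18, "ais_transmitting": True}))
print(classify_track({"speed_knots": 12, "hours_stationary": 0, "ais_transmitting": False}))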

16.3.2 Case-Based Learning

Another popular approach is called case-based learning. This technique presents positive and negative situations from which a model is learned. The learning process is called “training,” and the data used are referred to as the “training set.”

This learning approach is useful when the cases—and their corresponding observables, signatures, and proxies —can be identified a priori.

In the case of counterterrorism, many terrorists participate in normal activities and look like any other normal individual in that culture. The distinguishing characteristics that describe “a terrorist” are few, making it very difficult to train automatic detection and classification algorithms. Furthermore, when adversaries practice denial and deception, a common technique is to mimic the distinguishing characteristics of the negative examples so as to hide in the noise. This approach is also brittle because the model can only interpret cases for which it has positive and negative examples.

16.3.3 Unsupervised Learning

Another popular and widely employed approach is that of unsupervised learning where a model is generated based upon a data set with little or no “tuning” from a human operator. This technique is also sometimes called data mining because the algorithm literally identifies “nuggets of gold” in an otherwise unseemly heap of slag.

This approach is based on the premise that the computational elements themselves are very simple, like the neurons in the human brain. Complex behaviors arise from connections between the neurons that are modeled as an entangled web of relationships that represent signals and patterns.

While many of the formal cognitive processes for human sensemaking are not easily documented, sensemaking is the process by which humans constantly weigh evidence, match patterns, postulate outcomes, and infer between missing information. Although the term analysis is widely used to refer to the job of intelligence analysts, many sensemaking tasks are a form of synthesis: the process of integrating information together to enhance understanding.

In 2014, demonstrating “cognitive cooking” technology, a specially trained version of WATSON created “Bengali Butternut BBQ Sauce,” a delicious combination of butternut squash, white wine, dates, Thai chilies, tamarind, and more.

Artificial intelligence, data mining, and statistically created models are generally good for describing known phenomena and forecasting outcomes (calculated responses) within a trained model space but are unsuitable for extrapolating outside the training set. Models must be used where appropriate, and while computational techniques for automated sensemaking have been proposed, many contemporary methods are limited to the processing, evaluation, and subsequent reaction to increasingly complex rule sets.

16.4 Rule Sets and Event-Driven Architectures

An emerging software design paradigm, the event-driven architecture, is “a methodology for designing and implementing applications and systems in which events transmit between loosely coupled software components and services.”

Events are defined as a change in state, which could represent a change in the state of an object, a data element, or an entire system. An event-driven architecture applies to distributed, loosely coupled systems that require asynchronous processing (the data arrives at different times and is needed at different times). Three types of event processing are typically considered:

• Simple event processing (SEP): The system response to a change in condition and a downstream action is initiated (e.g., when new data arrives in the database, process it to extract coordinates).

• Event stream processing (ESP): A stream of events is filtered to recognize notable events that match a filter and initiate an action (e.g., when this type of signature is detected, alert the commander).

• Complex event processing (CEP): Predefined rule sets recognize a combination of simple events, occurring in different ways and at different times, and cause a downstream action to occur.

16.4.1 Event Processing Engines

According to developer KEYW, JEMA is widely accepted across the intelligence community as “a visual analytic model authoring technology, which provides drag-and-drop authoring of multi-INT, multi-discipline analytics in an online collaborative space” [11]. Because JEMA automates data gathering, filtering, and processing, analysts shift the focus of their time from search to analysis.

Many companies use simple rule processing for anomaly detection, notably credit card companies whose fraud detection systems combine simple event processing and event stream processing. Alerts are triggered on anomalous behaviors.
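The sketch below shows the flavor of this kind of event stream filtering. The transaction stream, threshold, and rule are invented for illustration and are far simpler than a production fraud detection system:

# Sketch: event stream processing with a simple anomaly rule (hypothetical data).
transactions = [
    {"account": "1001", "amount": 42.50, "country": "US"},
    {"account": "1001", "amount": 3800.00, "country": "US"},
    {"account": "1002", "amount": 12.00, "country": "US"},
    {"account": "1001", "amount": 95.00, "country": "RU"},
]

def is_notable(event, usual_country="US", amount_threshold=1000.0):
    """Filter rule: flag large transactions or transactions from an unusual country."""
    return event["amount"] > amount_threshold or event["country"] != usual_country

# Stream the events through the filter and trigger a downstream action (an alert).
for event in transactions:
    if is_notable(event):
        print("ALERT:", event)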

16.4.2 Simple Event Processing: Geofencing, Watchboxes, and Tripwires

Another type of “simple” event processing highly relevant to spatiotemporal analysis is a technique known as geofencing.

A dynamic area of interest is a watchbox that moves with an object. In the Ebola tracking example, the dynamic areas of interest are centered on each tagged ship with a user-defined radius. This allows the user to identify when two ships come within close proximity or when the ship passes near a geographic feature like a shoreline or port, providing warning of potential docking activities.
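In code, a dynamic area of interest reduces to a proximity test re-evaluated as positions update. The sketch below uses invented ship positions and a crude degree-box test centered on the tagged ship; a real system would use geodetic distances and shape-based watchboxes:

# Sketch: dynamic watchbox check between two tracked ships (positions invented).
def in_dynamic_watchbox(center, point, half_width_deg=0.1):
    """True if point falls inside a box of +/- half_width_deg centered on the ship."""
    return (abs(point[0] - center[0]) <= half_width_deg
            and abs(point[1] - center[1]) <= half_width_deg)

tagged_ship = (14.55, -23.50)   # (lat, lon) of the ship carrying the watchbox
other_ship = (14.60, -23.44)

if in_dynamic_watchbox(tagged_ship, other_ship):
    print("Trigger: second ship entered the dynamic area of interest")
else:
    print("No trigger")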

To facilitate the monitoring of thousands of objects, rules can be visualized on a watchboard that uses colors, shapes, and other indicators to highlight rule activation and other triggers. A unique feature of LUX is the timeline view, which provides an interactive visualization of patterns across individual rules or sets of rules as shown in Figure 16.6 and how rules and triggers change over time.


16.4.4 Tipping and Cueing

The original USD(I) definition for ABI referred to “analysis and subsequent collection” and many models for ABI describe the need for “nonlinear TCPED” where the intelligence cycle is dynamic to respond to changing intelligence needs. This desire has often been restated as the need for automated collection in response to detected activities, or “automated tipping and cueing.”

Although the terms are usually used synonymously, a tip is the generation of an actionable report or notification of an event of interest. When tips are sent to human operators/analysts, they are usually called alerts. A cue is a related, more specific message sent to a collection system as the result of a tip. Automated tipping and cueing systems rely on tip/cue rules that map generated tips to the subsequent collection that requires cueing.
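A tip/cue rule table can be as simple as a mapping from tip types to cue messages, as in the sketch below. The tip types, sensor names, and message fields are invented and stand in for whatever a real tasking interface would require:

# Sketch: mapping generated tips to cueing messages for collection systems.
TIP_CUE_RULES = {
    "new_vessel_detected": {"cue_sensor": "narrow_fov_imager", "action": "collect_image"},
    "ais_signal_lost":     {"cue_sensor": "wide_area_radar",   "action": "reacquire_track"},
}

def generate_cue(tip):
    """Turn a tip (actionable notification) into a cue for a collection system."""
    rule = TIP_CUE_RULES.get(tip["type"])
    if rule is None:
        return None  # no rule: the tip is alerted to an analyst instead
    return {"sensor": rule["cue_sensor"], "action": rule["action"],
            "location": tip["location"], "reason": tip["type"]}

tip = {"type": "ais_signal_lost", "location": (14.55, -23.50)}
print(generate_cue(tip))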

Numerous community leaders have highlighted the importance of tipping and cueing to reduce operational timelines and optimize multi-INT collection.

Although many intelligence community programs conflate “ABI” with tipping and cueing, the latter is an inductive process that is more appropriately paired with monitoring and warning for known signatures after the ABI methods have been used to identify new behaviors from an otherwise innocuous set of data. In the case of modeling, remember that models only respond to the rules for which they are programmed; therefore tipping and cueing solutions may improve efficiency but may inhibit discovery by reinforcing the need to monitor known places for known signatures instead of seeking the unknown unknowns.

16.5 Exploratory Models

Data mining and statistical learning approaches create models of behaviors and phenomena, but how are these models executed to gain insight? Exploratory modeling is a modeling technique used to gain a broad understanding of a problem domain, key drivers, and uncertainties before going into details.

16.5.1 Basic Exploratory Modeling Techniques

There are many techniques for exploratory modeling. Some of the most popular include Bayes nets, Markov chains, Petri nets, and discrete event simulation.

Discrete event simulation (DES) is another state transition and process modeling technique that models a system as a series of discrete events in time. In contrast to continuously executing simulations (see agent-based modeling and system dynamics in Sections 16.5.3 and 16.5.4), the system state is determined by activities that happen over user-defined time slices. Because events can cross multiple time slices, not every time slice has to be simulated.
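A minimal discrete event simulation can be written with a priority queue of (time, event) pairs; the simulation clock jumps from event to event rather than stepping through every time slice. The port-call events and times below are invented for illustration:

import heapq

# Sketch: a tiny discrete event simulation driven by a time-ordered event queue.
events = []  # heap of (time, description)
heapq.heappush(events, (3.0, "ship arrives at port"))
heapq.heappush(events, (1.5, "ship detected offshore"))
heapq.heappush(events, (7.0, "ship departs port"))

clock = 0.0
ships_in_port = 0
while events:
    clock, description = heapq.heappop(events)  # advance directly to the next event
    if description == "ship arrives at port":
        ships_in_port += 1
    elif description == "ship departs port":
        ships_in_port -= 1
    print(f"t={clock:4.1f}  {description}  (ships in port: {ships_in_port})")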

16.5.2 Advanced Exploratory Modeling Techniques

A class of modeling techniques for studying emergent behaviors and complex systems, with a focus on discovery, emerged due to shortfalls in other modeling techniques.

16.5.3 ABM

Agent-based modeling (ABM) is an approach that develops complex behaviors by aggregating the actions and interactions of relatively simple “agents.” According to ABM pioneer Andrew Ilachinski, “agent-based simulations of complex adaptive systems are predicated on the idea that the global behavior of a complex system derives entirely from the low-level interactions among its constituent agents” [23]. Human operators define the goals of agents. In simulation, agents make decisions to optimize their goals based on perceptions of the environment. The dynamics of multiple, interacting agents often lead to interesting and complicated emergent behaviors.

16.5.4 System Dynamics Model

System dynamics is another popular approach to complex systems modeling that defines relationships between variables in terms of stocks and flows. Developed by MIT professor Jay Forrester in the 1950s, system dynamics was concerned with studying complexities in industrial and business processes.

By the early 2000s, system dynamics emerged as a popular technique to model the human domain and its related complexities. Between 2007 and 2009, researchers from MIT and other firms worked with IARPA on the Pro-Active Intelligence (PAINT) program “to develop computational social science models to study and understand the dynamics of complex intelligence targets for nefarious activity” [26]. Researchers used system dynamics to examine possible drivers of nefarious technology development (e.g., weapons of mass destruction) and critical pathways and flows including natural resources, precursor processes, and intellectual talent.

Another aspect of the PAINT program was the design of probes. Since many of the indicators of complex processes are not directly observable, PAINT examined input activities like sanctions that may prompt the adversary to do something that is observable. This application of the system dynamics modeling technique is appropriate for anticipatory analytics because it allows analysts to test multiple hypotheses rapidly in a surrogate environment. In one of the examples cited by MIT researchers, analysts examined a probe targeted at human resources where the simulators examined potential impacts of hiring away key personnel resources with specialized skills. This type of interactive, anticipatory analysis lets teams of analysts examine potential impacts of different courses of action.
System dynamics models have the additional property that the descriptive model of the system also serves as the executable model when time constants and influence factors are added to the representation. The technique suffers from several shortcomings including the difficulty in establishing transition coefficients, the impossibility of model validation, and the inability to reliably account for known and unknown external influences on each factor.

16.6 Model Aggregation

Analysts can improve the fidelity of anticipatory modeling by combining the results from multiple models. One framework for composing multiple models is the multiple information model synthesis architecture (MIMOSA), developed by Information Innovators. MIMOSA “aided one intelligence center to increase their target detection rate by 500% using just 30% of the resources previously tasked with detection freeing up personnel to focus more on analysis” [29]. MIMOSA uses target sets (positive examples of target geospatial regions) to calibrate models for geospatial search criteria like proximity to geographic features, foundation GEOINT, and other spatial relationships. Merging multiple models, the software aggregates the salient features of each model to reduce false alarm rate and improve the predictive power of the combined model.

An approach for multiresolution modeling of sociocultural dynamics was developed by DARPA for the COMPOEX program in 2007. COMPOEX provided multiple types of agent-based, system dynamics, and other models in a variable resolution framework that allowed military planners to swap different models to test multiple courses of action across a range of problems. A summary of the complex modeling environment is shown in Figure 16.10. COMPOEX includes modeling paradigms such as concept maps, social networks, influence diagrams, differential equations, causal models, Bayes networks, Petri nets, dynamic system models, event-based simulation, and agent-based models [31]. Another feature of COMPOEX was a graphical scenario planning tool that allowed analysts to postulate possible courses of action, as shown in Figure 16.11.

Each of the courses of action in Figure 16.11 was linked to one or more of the models across the sociocultural behavior analysis hierarchy, abstracting the complexity of models and their interactions away from analysts, planners, and decision makers. The tool forced models at various resolutions to interact (Figure 16.10) to stimulate emergent dynamics so planners could explore plausible alternatives and resultant courses of action.

Objects can usually be modeled using physics-based or process models. However, an important tenet of ABI is that these objects are operated by someone (who). Knowing something about the “who” provides important insights into the anticipated behavior of those objects.

16.7 The Wisdom of Crowds

Most of the anticipatory analytic techniques in this chapter refer to analytic, algorithmic, or simulation-based models that exist as computational processes; however, it is important to mention a final and increasingly popular type of modeling approach based on human input and subjective judgment.

James Surowiecki, author of The Wisdom of Crowds, popularized the concept of information aggregation that surprisingly leads to better decisions than those made by any single member of the group. The book offers anecdotes to illustrate the argument, which essentially acts as a counterpoint to the maligned concept of “groupthink.” Surowiecki differentiates crowd wisdom from groupthink by identifying four criteria for a “wise crowd” [33]:

• Diversity of opinion: Each person should have private information even if it’s just an eccentric interpretation of known facts.
• Independence: People’s opinions aren’t determined by the opinions of those around them.
• Decentralization: People are able to specialize and draw on local knowledge.
• Aggregation: Some mechanism exists for turning private judgments into a collective decision (a minimal sketch of such a mechanism follows this list).
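
A minimal aggregation sketch, using Python's statistics module and invented estimates, shows the simplest possible mechanism for turning private judgments into a collective decision.

```python
# Minimal aggregation sketch: independent, diverse estimates combined into a
# collective judgment (the numbers are invented for illustration).
import statistics

# Private estimates from ten analysts of, say, the number of facilities at a site.
estimates = [12, 9, 15, 11, 10, 14, 8, 13, 11, 12]

mean_estimate = statistics.mean(estimates)
median_estimate = statistics.median(estimates)  # robust to a few wild guesses

print(mean_estimate, median_estimate)
# If the four "wise crowd" conditions hold, the aggregate tends to beat most
# individual estimates; if opinions are correlated (groupthink), it does not.
```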

A related DARPA program called FutureMAP was canceled in 2003 amidst congressional criticism regarding “terrorism betting parlors”; however, the innovative idea was reviewed in depth by Yeh in Studies in Intelligence in 2006 [36]. Yeh found that prediction markets could be used to quantify uncertainty and eliminate ambiguity around certain types of judgments. George Mason University launched IARPA-funded SciCast, which forecasts scientific and technical advancements.

16.8 Shortcomings of Model-Based Anticipatory Analytics

By now, you may be experiencing frustration that none of the modeling techniques in this chapter is a silver bullet for all anticipatory analytic problems. The challenges and shortcomings of anticipatory modeling are numerous.

The major shortcoming of all models is that they can’t do what they aren’t told. Rule-based models are limited to user-defined rules, and statistically generated models are limited to the provided data. As we have noted on multiple occasions, intelligence data is undersampled, incomplete, intermittent, error-prone, cluttered, and deceptive. All of these characteristics make such data ill-suited for turnkey modeling.

A combination of many types of modeling approaches is needed to perform accurate, justifiable, broad-based anticipatory analysis. Each of these needs validation, but model validation, especially in the field of intelligence, is a major challenge. We seldom have “truth” data. The intelligence problem and its underlying assumptions are constantly evolving, as are attempts to solve it, a primary criterion for what Rittel and Webber call “wicked problems” [39].

Handcrafting models is slow, and a high level of skill is required to use many modeling tools. Furthermore, most of these tools do not allow easy sharing across other tools or across modeling approaches, complicating the ability to share and compare models. This challenge is exacerbated by the distributed nature of knowledge in the intelligence community.

When models exist, analysts depend heavily on “the model.” Sometimes it has been right in the past. Perhaps it was created by a legendary peer. Maybe there’s no suitable alternative. Overdependence on models and extrapolation of models into regions where they have not been validated leads to improper conclusions.

A final note: weather forecasting relies on physics-based models with thousands of real-time data feeds, decades of forensic data, ground truth, validated physics-based models, one-of-a-kind supercomputers, and a highly trained cadre of scientists, networked to share information and collaborate. It is perhaps the most modeled problem in the world. Yet weather “predictions” are often wrong, or at minimum imprecise. What hope is there for predicting human behaviors based on a few spurious observations?

16.9 Modeling in ABI

In the early days of ABI, analysts in Iraq and Afghanistan lacked the tools to formally model activities. As analysts assembled data in an area, they developed a tacit mental model of what was normal. Their geodatabases representing a pattern of life constituted a type of model of what was known. The gaps in those databases represented the unknown. Their internal rules for how to correlate data, separating the possibly relevant from the certainly irrelevant, composed part of a workflow model, as did their specific method for data conditioning and georeferencing.

However, relying entirely on human analysts to understand increasingly complex problem sets also presents challenges. Studies have shown that experts (including intelligence analysts) are subject to biases arising from perception, evaluation, omission, availability, anchoring, groupthink, and other factors.

Analytic models that treat facts and relationships explicitly provide a counterbalance to inherent biases in decision-making. Models can also quickly process large amounts of data and multiple scenarios without getting tired or bored and without discounting information.

Current efforts to scale ABI across the community focus heavily on activity, process, and object modeling as this standardization is believed to enhance information sharing and collaboration. Algorithmic approaches like JEMA, MIMOSA, PAINT, and LUX have been introduced to worldwide users.

16.10 Summary

Models provide a mechanism for integrating information and exploring alternatives, improving an analyst’s ability to discover the unknown. However, if models can’t be validated, executed on sparse data, or trusted to solve intelligence problems, can any of them be trusted? If “all models are wrong,” in the high-stakes business of intelligence analysis, are any of them useful?

Model creation requires a multidisciplinary, multifaceted, multi-intelligence approach to data management, analysis, visualization, statistics, correlation, and knowledge management. The best model builders and analysts discover that it’s not the model itself that enables anticipation. The exercise of data gathering, hypothesis testing, relationship construction, code generation, assumption definition, and exploration trains the analyst. To build a good model, the analyst has to consider multiple ways something might happen—to weigh the probability and consequence of different outcomes. The data landscape, adversary courses of action, complex relationships, and possible causes are all discovered in the act of developing a valid model. Surprisingly, when many analysts set out to create a model, they end up realizing they have become one.

17

ABI in Policing

Patrick Biltgen and Sarah Hank

Law enforcement and policing share many techniques with intelligence analysis. Since 9/11, police departments have implemented a number of tools and methods from the discipline of intelligence to enhance the depth and breadth of analysis.

17.1 The Future of Policing

Although precise prediction of future events is impossible, there is a growing movement among police departments worldwide to leverage the power of spatiotemporal analytics and persistent surveillance to resolve entities committing crimes, understand patterns and trends, adapt to changing criminal tactics, and better allocate resources to the areas of greatest need. This chapter describes the integration of intelligence and policing—popularly termed “intelligence-led policing”—and its evolution over the past 35 years.

17.2 Intelligence-Led Policing: An Introduction

The term “intelligence-led policing” traces its origins to the 1980s at the Kent Constabulary in Great Britain. Faced with a sharp increase in property crimes and vehicle thefts, the department struggled with how to allocate officers amidst declining budgets [2, p. 144]. The department developed a two-pronged approach to address this constraint. First, it freed up resources so detectives had more time for analysis by prioritizing service calls to the most serious offenses and referring lower priority calls to other agencies. Second, through data analysis it discovered that “small numbers of chronic offenders were responsible for many incidents and that patterns also include repeat victims and target locations.”

Analysis and problem solving focus on understanding the influencers of crime using techniques like statistical analysis, crime mapping, and network analysis. Police presence is optimized to deter and control these influencers while simultaneously collecting additional information to enhance analysis and problem solving. A technique for optimizing police presence is described in Section 17.5.

Intelligence-led policing applies analysis and problem solving techniques to optimize resource allocation in the form of focused presence and patrols. Accurate dissemination of intelligence, continuous improvement, and focused synchronized deployment against crime are other key elements of the method.

17.2.1 Statistical Analysis and CompStat

The concept of ILP was implemented in the New York City Police Department in the 1990s by police commissioner William Bratton and his deputy Jack Maple. Using a computational statistics approach called CompStat, “crime statistics are collected, computerized, mapped and disseminated quickly” [5]. Wall-sized “charts of the future” mapped every element of the New York transit system. Crimes were mapped against the spatial nodes and trends were examined.

Though its methods are controversial, CompStat is widely credited with a significant reduction in crime in New York. The method has since been implemented in other major cities in the United States with similar results, and the methods and techniques for statistical analysis of crime are now standard in criminology curricula.

17.2.2 Routine Activities Theory

A central tenet of ILP is based on Cohen and Felson’s routine activities theory, the general principle that human activities tend to follow predictable patterns in time and space. In the case of crime, the location of these events is defined by the influencers of crime (Figure 17.2). Koper provides exposition of these influencers: “crime does not occur randomly but rather is produced by the convergence in time and space of motivated offenders, suitable targets, and the absence of capable guardians.”

17.3 Crime Mapping

Crime mapping is a geospatial analysis technique that geolocates and categorizes crimes to detect hot spots, understand the underlying trends and patterns, and develop courses of action. Crime hot spots are a type of spatial anomaly that may be characterized at the address, block, block cluster, ward, county, geographic region, or state level—the precision of geolocation and the aggregation depends on the area of interest and the question being asked.

17.3.1 Standardized Reporting Enables Crime Mapping

In 1930, Congress enacted Title 28, Section 534 of the U.S. Code, authorizing the Attorney General and subsequently the FBI to standardize and gather crime information [6]. The FBI implemented the Uniform Crime Reporting Handbook, standardizing and normalizing the methods, procedures, and data formats for documenting criminal activity. This type of data conditioning enables information sharing and pattern analysis by ensuring consistent reporting standards across jurisdictions.

17.3.2 Spatial and Temporal Analysis of Patterns

Visualizing each observation as a dot at the city or regional level is rarely informative. For example, in the map in Figure 17.3, discerning a meaningful trend requires extensive data filtering by time of day, type of crime, and other criteria. One technique that is useful to understand trends and patterns is the aggregation of individual crimes into spatial regions.
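
A minimal sketch of this aggregation step, with invented incident coordinates and an assumed grid-cell size, bins point-level reports into cells so that unusually dense cells stand out as candidate hot spots.

```python
# Minimal sketch of aggregating point-level crime reports into grid cells to
# reveal hot spots (coordinates and cell size are invented for illustration).
from collections import Counter

CELL = 0.01  # grid cell size in degrees (roughly 1 km at mid-latitudes)

def cell_of(lat, lon):
    # Index each point by the grid cell that contains it.
    return (round(lat // CELL), round(lon // CELL))

# Hypothetical geolocated incidents: (latitude, longitude)
incidents = [(38.9052, -77.0311), (38.9063, -77.0322), (38.9515, -77.0021),
             (38.9055, -77.0315), (38.9071, -77.0308)]

counts = Counter(cell_of(lat, lon) for lat, lon in incidents)

# Cells with counts well above the citywide average are candidate hot spots.
for cell, n in counts.most_common():
    print(cell, n)
```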

Mapping aggregated crime data by census tract reveals that the rate of violent crime does not necessarily relate to quadrants, but rather to natural geographic barriers such as parks and rivers. Other geographic markers like landmarks, streets, and historical places may also act as historical anchors for citizens’ perspectives on crime.

Another advantage of aggregating data by area using a GIS is the ability to visualize change over time.

17.4 Unraveling the Network

Understanding hot spots and localizing the places where crimes tend to occur is only part of the story, and reducing crimes around hot spots treats the symptoms rather than the cause of the problem. Crime mapping and intelligence-led policing focus on the ABI principles of collecting, characterizing, and locating activities and transactions. Unfortunately, these techniques alone are insufficient to provide entity resolution, identify and locate the actors and entities conducting activities and transactions, and identify and locate networks of actors. These techniques are generally a reactive, sustaining approach to managing crime. The next level of analysis gets at the root cause of crime: going after the heart of the network to resolve entities, understand their relationships, and proactively attack its seams.

The Los Angeles Police Department’s Real-Time Analysis and Critical Response (RACR) division is a state-of-the-art, network enabled analysis cell that uses big data to solve crimes. Police vehicles equipped with roof-mounted license plate readers provide roving wide-area persistent surveillance by vacuuming up geotagged vehicle location data as they patrol the streets.

One of the tools used by analysts in the RACR is made by Palo Alto-based Palantir Technologies. Named after the all-seeing stones in J. R. R. Tolkien’s Lord of the Rings, Palantir is a data fusion platform that provides “a clean, coherent abstraction on top of different types of data that all describe the same real world problem.” Palantir enables “data integration, search and discovery, knowledge management, secure collaboration, and algorithmic analysis across a wide variety of data sources.” Using advanced artificial intelligence algorithms—coupled with an easy-to-use graphical interface—Palantir helps trained investigators identify connections between disparate databases to rapidly discover links between people.
Before Palantir was implemented, analysts missed these connections because field interview (FI) data, department of motor vehicles data, and automated license plate reader data was all held in separate databases. The department also lacked situational awareness about where their patrol cars were and how they were responding to requests for help. Palantir integrated analytic capabilities like “geospatial search, trend analysis, link charts, timelines, and histograms” to help officers find, visualize, and share data in near-real time.

17.5 Predictive Policing

Techniques like crime mapping, intelligence-led policing, and network analysis, when used together, enable all five principles of ABI and move toward the Minority Report nirvana described at the beginning of the chapter. This approach has been popularized as “predictive policing.”

Although some critics have questioned the validity of PredPol’s predictions, “during a four-month trial in Kent [UK], 8.5% of all street crime occurred within PredPol’s pink boxes…predictions from police analysts scored only 5%.”

18

ABI and the D.C. Beltway Sniper

18.5 Data Neutrality

Any piece of evidence may solve a crime. This is a well-known maxim in criminal investigations and is another way of stating the ABI pillar of data neutrality. Investigators rarely judge that one piece of evidence is more important to a case than another with equal pedigree. Evidence is evidence. Coupled with the concept of data neutrality, crime scene processing is essentially a process of incidental collection. When a crime scene is processed, investigators might know what they are looking for (a spent casing from a rifle) but may discover objects they were not looking for (an extortion note from a killer). Crime scene specialists enter a crime scene with an open mind and collect everything available. They generally make no value judgment on the findings during collection nor do they discard any evidence, for who knows what piece of detritus might be fundamental to building a case.

The lesson learned here, which is identical to the lesson learned within the ABI community, is to collect and keep everything; one never knows if and when it will be important.

18.6 Summary

The horrific events that comprised the D.C. snipers’ serial killing spree make an illustrative case study for the application of the ABI pillars. By examining the sequence of events and the analysis that was performed, the following conclusions can be drawn. First, georeferencing all data would have improved understanding of the data and provided context. Unfortunately, the means to do that did not exist at the time. Second, integrating before exploitation might have prevented law enforcement from erroneously tracking and stopping white cargo vans. Again, the tools to do this integration do not appear to have existed in 2002.

Interestingly, sequence neutrality and data neutrality were applied to great effect. Once a caller tied two separate crimes together, law enforcement was able to use all the information collected in the past to solve the current crime.

19

Analyzing Transactions in a Network

William Raetz

One of the key differences in the shift from target-based intelligence to ABI is that targets of interest become the output of deductive, geospatial, and relational analysis of activities and transactions. As RAND’s Gregory Treverton noted in 2011, imagery analysts “used to look for things and know what we were looking for. If we saw a Soviet T-72 tank, we knew we’d find a number of its brethren nearby. Now…we’re looking for activities or transactions. And we don’t know what we’re looking for” [1, p. ix]. This chapter demonstrates deductive and relational analysis using simulated activities and transactions, providing a real-world application for entity resolution and the discovery of unknowns.

19.1 Analyzing Transactions with Graph Analytics

Graph analytics—derived from the discrete mathematical discipline of graph theory—is a technique for examining relationships between data elements as pairwise links. Numerous algorithms and visualization tools for graph analytics have proliferated over the past 15 years. This example demonstrates how simple geospatial and relational analysis tools can be used to understand complex patterns of movement—the activities and transactions conducted by entities—over a city-sized area. This scenario involves an ABI analyst looking for a small “red network” of terrorists hiding among a civilian population.
Hidden within the normal patterns of the 4,623 entities is a malicious network. The purpose of this exercise is to analyze the data using ABI principles to unravel this network: to discover the signal hidden in the noise of everyday life.

The concepts of “signal” and “noise,” which have their origin in signal processing and electrical engineering, are central to the analysis of nefarious actors that operate in the open but blend into the background. Signal is the information relevant to an analyst contained in the data; noise is everything else. For instance, a “red,” or target, network’s signal might consist of activity unique to achieving their aims; unusual purchases, a break in routine, or gatherings at unusual times of day are all possible examples of signal.

Criminal and terrorist networks have become adept at masking their signal—the “abnormal” activity necessary to achieve their aims—in the noise of the general population’s activity. To increase the signal-to-noise ratio (SNR), an analyst must determine inductively or deductively what types of activities constitute the signal. In a dynamic, noisy, densely populated environment, this is difficult unless the analyst can narrow the search space by choosing a relevant area of interest, choosing a time period when enemy activity is likely to be greater, or beginning with known watch listed entities as the seeds for geochaining or geospatial network analysis.

19.2 Discerning the Anomalous

Separating out the signal from the background noise is as much art as science. As an analyst becomes more familiar with a population or area, “normal,” or background, behavior becomes inherent through tacit model building and hypothesis testing.

The goals and structure of the target group define abnormal activity. For example, the activity required to build and deploy an improvised explosive device (IED), the case present in the example data set, will be very different from the activity associated with money laundering. A network whose aim is to build and deploy an IED may consist of bomb makers, procurers, security, and leadership within a small geographic area. Knowing the general goals and structure of the target group will help identify the types of activities that constitute signal.

Nondiscrete locations where many people meet will have a more significant activity signature. The analyst will also have to consider how entities move between these locations and discrete locations that have a weaker signal but contribute to a greater probability of resolving a unique entity of interest. An abnormal pattern of activity around these discrete locations is the initial signal the analyst is looking for.
At this point, the analyst has a hypothesis, a general plan based on his knowledge of the key types of locations a terrorist network requires. He will search for locations that look like safe houses and warehouses based on events and transactions. When the field has been narrowed to a reasonable set of possible discrete locations, he will initiate forensic backtracking of transactions to identify additional locations and compile a rough list of red network members from the participating entities. This is an implementation of the “where-who” concept from Chapter 5.

19.3 Becoming Familiar with the Data Set

After receiving the data and the intelligence goal, the analyst’s first step is to familiarize himself with the data. This will help inform what processing and analytic tasks are possible; a sparse data set might require more sophistication, while a very large one may require additional processing power. In this case, because the available data is synthetic, the data is presented in three clean comma-separated value (.csv) files (Table 19.1). Analysts typically receive multiple files that may come from different sources or may be collected/created at different times.

It is important to note that the activity patterns for a location represent a pattern-of-life element for the entities in that location and for participating entities. The pattern-of-life element provides some hint to the norms in the city. It may allow the analyst to classify a building based on the times and types of activities and transactions (Section 19.4.1) and to identify locations that deviate from these cultural norms. Deducing why locations deviate from the norm—and whether these deviations are significant—is part of the analytic art of separating signal from background noise.

19.4.1 Method: Location Classification

One of the most technically complex methods of finding suspicious locations is to interpret these activity patterns through a series of rules to determine which are “typical” of a certain location type. For instance, if a location exhibits a very typical workplace pattern, as evidenced by its distinctive double peak, it can be eliminated from consideration, based on the assumption that the terrorist network prefers to avoid conducting activities at the busiest times and locations.

Because there is a distinctive and statistically significant difference between discrete and nondiscrete locations using the average time distance technique, the analyst can use the average time between activities to identify probable home locations. He calculates the average time between activities for every available uncategorized location and treats all the locations with an average greater than three as single-family home locations.
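
A hedged sketch of this filter is shown below; the record structure, field names, and the interpretation of the threshold of three (in the data set's time units) are assumptions for illustration.

```python
# Hedged sketch of the average-time-between-activities filter; the threshold
# of 3 (in the data set's time units) and the record format are assumptions.
from collections import defaultdict

def mean_gap(timestamps):
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    # Locations with a single activity get an infinite gap in this sketch.
    return sum(gaps) / len(gaps) if gaps else float("inf")

# activities: list of (location_id, timestamp) records
def probable_homes(activities, threshold=3.0):
    by_location = defaultdict(list)
    for loc, t in activities:
        by_location[loc].append(t)
    # Locations with long average gaps between activities behave like
    # single-family homes (discrete locations) rather than busy workplaces.
    return [loc for loc, ts in by_location.items() if mean_gap(ts) > threshold]

sample = [("L1", 0), ("L1", 8), ("L1", 17), ("L2", 0), ("L2", 1), ("L2", 2)]
print(probable_homes(sample))   # ['L1']
```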

19.4.2 Method: Average Time Distance

The method outlined in Section 19.4.1 is an accurate but cautious way of using activity patterns to classify location types. In order to get a different perspective on these locations, instead of looking at the peaks of activity patterns, the analyst will next look at the average time between activities.

19.4.3 Method: Activity Volume

The first steps of the analysis process filtered out busy workplaces (nondiscrete locations) and single-family homes (discrete locations), leaving the analyst with a subset of locations that represent unconventional workplaces and other locations that may function as safe houses or warehouses.

The analyst uses an activity volume filter to remove all of the remaining locations that have many more activities than expected. He also removes all locations with no activities, assuming the red network used a location shortly before its attack.

19.4.4 Activity Tracing

The analyst’s next step is to choose a few of the best candidates for additional collection. If 109 is too many locations to examine in the time required by the customer, he can create a rough prioritization by making a final assumption about the behavior of the red network: that its members have traveled directly between at least two of their locations.

19.5 Analyzing High-Priority Locations with a Graph

To get a better understanding of how these locations are related, and who may be involved, the analyst creates a network graph of locations, using tracks to infer relationships between locations. The network graph for these locations is presented in Figure 19.7.
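
A minimal sketch of this step using the networkx library is shown below; the track structure and location names are invented for illustration. Edges accumulate a weight for each track observed between two locations, so heavily weighted edges and high-degree nodes suggest where to focus subsequent collection.

```python
# Minimal sketch of building a location-to-location graph from track data
# using networkx; the track format and location names are assumptions.
import networkx as nx

# Each track is (entity_id, origin_location, destination_location)
tracks = [
    ("E1", "safehouse_12", "warehouse_7"),
    ("E2", "warehouse_7", "safehouse_12"),
    ("E3", "safehouse_12", "market_3"),
    ("E1", "market_3", "warehouse_7"),
]

G = nx.Graph()
for entity, origin, dest in tracks:
    if G.has_edge(origin, dest):
        G[origin][dest]["weight"] += 1
        G[origin][dest]["entities"].add(entity)
    else:
        G.add_edge(origin, dest, weight=1, entities={entity})

# Heavily connected candidate locations (and the entities on those edges)
# suggest which nodes deserve further collection.
print(sorted(G.degree, key=lambda kv: kv[1], reverse=True))
print(G["safehouse_12"]["warehouse_7"]["entities"])
```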

19.6 Validation

At this point, the analyst has taken a blank slate and turned a hypothesis into a short list of names and locations.

19.7 Summary

This example demonstrates deductive methods for activity and transaction analysis that reduce the number of possible locations to a much smaller subset using scripting, hypotheses, analyst-derived rules, and graph analysis. To get started, the analyst had to wrestle with the data set to become acquainted with the data and the patterns of life for the area of interest. He formed a series of assumptions about the behavior of the population and tested these by analyzing graphs of activity sliced different ways. Then the analyst implemented a series of filters to reduce the pool of possible locations. Focusing on locations and then resolving entities that participated in activities and transactions—georeferencing to discover—was the only way to triage a very large data set with millions of track points. Because locations have a larger activity signature than individuals in the data set, it is easier to develop and test hypotheses on the activities and transactions around a location and then use this information as a tip for entity-focused graph analytics.

Through a combination of these filters the analyst removed 5,403 out of 5,445 locations. This allowed for highly targeted analysis (and in the real world, subsequent collection). In the finale of the example, two interesting entities were identified based on their relationship to the suspicious locations. In addition to surveilling these locations, these entities and their proxies could be targeted for collection and analysis.

21

Visual Analytics for Pattern-of-Life Analysis

This chapter integrates concepts for visual analytics with the basic principles of georeference to discover to analyze the pattern-of-life of entities based on check-in records from a social network.

It presents several examples of complex visualizations used to graphically understand entity motion and relationships across named locations in Washington, D.C., and the surrounding metro area. The purpose of the exercise is to discover entities with similar patterns of life and cotraveling motion patterns—possibly related entities. The chapter also examines scripting to identify intersecting entities using the R statistical language.

21.1 Applying Visual Analytics to Pattern-of-Life Analysis

Visual analytic techniques provide a mechanism for correlating data and discovering patterns.

21.1.3 Identification of Cotravelers/Pairs in Social Network Data

Visual analytics can be used to identify cotravelers, albeit with great difficulty.

Further drill down (Figure 21.5) reveals 11 simultaneous check-ins, including one three-party simultaneous check-in at a single popular location.

The next logical question—an application of the where-who-where concept discussed in Chapter 5—is “do these three individuals regularly interact?”

21.2 Discovering Paired Entities in a Large Data Set

Visual analytics is a powerful, but often laborious and serendipitous approach to exploring data sets. An alternative approach is to write code that seeks mathematical relations in the data. Often, the best approach is to combine the techniques.

It is very difficult to analyze data with statistical programming languages if the analysts/data scientists do not know what they are looking for. Visual analytic exploration of the data is a good first step to establish hypotheses, rules, and relations that can then be coded and processed in bulk for the full dataset.
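
As a hedged sketch of coding such a relation in bulk, the example below uses pandas to bucket check-ins into 15-minute windows at the same venue and count how often each pair of users co-occurs; the column names, the window size, and the sample records are assumptions (the user IDs echo those discussed below, but the records themselves are invented).

```python
# Hedged sketch of finding paired entities in check-in data with pandas; the
# column names and the 15-minute co-location window are assumptions.
import pandas as pd
from itertools import combinations
from collections import Counter

checkins = pd.DataFrame({
    "user": [129395, 37398, 190, 129395, 37398],
    "venue": ["NationalZoo", "NationalZoo", "NationalZoo", "MetroCenter", "MetroCenter"],
    "time": pd.to_datetime(["2010-07-24 14:05", "2010-07-24 14:07",
                            "2010-07-24 14:06", "2010-08-01 17:30",
                            "2010-08-01 17:35"]),
})

# Bucket check-ins into 15-minute windows at the same venue, then count how
# often each pair of users appears in the same (venue, window) bucket.
checkins["bucket"] = checkins["time"].dt.floor("15min")
pair_counts = Counter()
for _, group in checkins.groupby(["venue", "bucket"]):
    for a, b in combinations(sorted(group["user"].unique()), 2):
        pair_counts[(a, b)] += 1

print(pair_counts.most_common())   # the (37398, 129395) pair co-occurs twice
```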

Integrating open-source data, the geolocations can be correlated with named locations like the National Zoo and the Verizon Center. Open-source data also tells us that the event at the Verizon Center was a basketball game between the Utah Jazz and Washington Wizards. The pair cotravel only for a single day over the entire data set. We might conclude that this entity is an out-of-town guest. That hypothesis can be tested by returning to the 6.4-million-point worldwide dataset.

User 129395 checked in 122 times and only in Stafford and Alexandria, Virginia, and the District of Columbia. During the day, his or her check-ins are in Alexandria, near Duke St. and Telegraph Rd. (work). In the evenings, he or she can be found in Stafford (home). This is an example of identifying geospatial locations based on the time of day and the pattern-of-life elements present in this self-reported data set.
Note that another user, 190, also checks in at the National Zoo at the same time as the cotraveling pair. We do not know if this entity was cotraveling the entire time and decided to check in at only a single location or if this is an unrelated entity that happened to check in near the cotraveling pair while all three of them were standing next to the lions and tigers exhibit. The full data set finds user 190 all over the world, but his or her pattern of life places him or her most frequently in Denver, Colorado.

And what about the other frequent cotraveler, 37398? The pair checked in together 10 times over a four-month period, between the hours of 14:00 and 18:00 and 21:00 and 23:59, at the Natural History Museum, Metro Center, the National Gallery of Art, and various shopping centers and restaurants around Stafford, Virginia. We might conclude that this is a family member, child, friend, or significant other.

21.3 Summary

This example demonstrates how a combination of spatial analysis, visual analytics, statistical filtering, and scripting can be combined to understand patterns of life in real “big data” sets; however, conditioning, ingesting, and filtering this data to create a single example took more than 12 hours.

Because this data requires voluntary check-ins at registered locations, it is an example of the sparse data typical of intelligence problems. If the data consisted of beaconed location data from GPS-enabled smartphones, it would be possible to identify multiple overlapping locations.

22

Multi-INT Spatiotemporal Analysis

A 2010 study by OUSD(I) identified “an information domain to combine persistent surveillance data with other INTs with a ubiquitous layer of GEOINT” as one of 16 technology gaps for ABI and human domain analytics [1]. This chapter describes a generic multi-INT spatial, temporal, and relational analysis framework widely adopted by commercial tool vendors to provide interactive, dynamic data integration and analysis to support ABI techniques.

22.1 Overview

ABI analysis tools are increasingly instantiated using web-based, thin client interfaces. Open-source web mapping and advanced analytic code libraries have proliferated.

22.2 Human Interface Basics

A key feature for spatiotemporal-relational analysis tools is the interlinking of multiple views, allowing analysts to quickly understand how data elements are located in time and space, and in relation to other data.

22.2.1 Map View

An “information domain for combining persistent surveillance data on a ubiquitous foundation of GEOINT” makes the map the central feature of the analysis environment. Spatial searches are performed using a bounding box (1). Events are represented as geolocated dots or symbols (2). Short text descriptions annotate events. Tracks—a type of transaction—are represented as lines with a green dot for starts and a red dot or X for stops (3). Depending on the nature of the key intelligence question (KIQ) or request for information (RFI), the analyst can choose to discover and display full tracks or only starts and stops. Clicking on any event or track point in the map brings up metadata describing the data element. Information like speed and heading accompanies track points. Other metadata related to the collecting sensor may be appended to other events and transactions collected from unique sensors. Uncertainty around event position may be represented by a 95% confidence ellipse at the time of collection (4).

22.2.2 Timeline View

Temporal analysis requires a timeline that depicts spatial events and transactions as they occur in time. Many geospatial tools—originally designed to make a static map that integrates layered data at a point in time—have added timelines to allow animation of data or the layering of temporal data upon foundational GEOINT. Most tools instantiate the timeline below the spatial view (Google Earth uses a time slider in the upper left corner of the window).

22.2.3 Relational View

Relational views are popular in counterfraud and social network analysis tools like Detica NetReveal and Palantir. By integrating a relational view or a graph with the spatiotemporal analysis environment, it is possible to link different spatial locations, events, and transactions by relational properties.

A grouping of multisource events and transactions is called an activity set (Figure 22.2). The activity set acts as a “shoebox” for sequence neutral analysis. In the course of examining data in time and space, an analyst identifies data that appears to be related, but does not know the nature of the relationship. Drawing a box around the data elements, he or she can group them and create an activity set to save them for later analysis, sharing, or linking with other activity sets.

By linking activity sets, the analyst can describe a filtered set of spatial and temporal events as a series of related activities. Typically, linked activity sets form the canvas for information sharing across multiple analysts working the same problem set. The relational view leverages graphs and may also instantiate semantic technologies like RDF (the Resource Description Framework) to provide context to relationships.

22.3 Analytic Concepts of Operations

This section describes some of the basic analysis principles widely used in spatiotemporal analysis tools.

22.3.1 Discovery and Filtering

In the traditional, target-based intelligence cycle, analysts would enter a target identifier to pull back all information about the target, exploit that information, report on the target, and then go on to the next target. In ABI analysis, the targets are unknown at the onset of analysis and must be discovered through deductive analytics, reasoning, pattern analysis, and information correlation.

Searching for data may result in querying many distributed databases. Results are presented to the user as map/timeline renderings. Analysts typically select a smaller time slice and animate through the data to exploit transactions or attempt to recognize patterns. This process is called data triage. Instead of requesting information through a precisely phrased query, ABI analytics prefers to bring all available data to analysts’ desktops so they can determine whether the information has value. This process implements the ABI principles of data neutrality and integration before exploitation simultaneously. It also places a large burden on query and visualization systems—most of the data returned by the query will be discarded as irrelevant. However, filtering out data a priori risks losing valuable correlatable information in the area of interest.

22.3.2 Forensic Backtracking

Analysts use the framework for forensic backtracking, an embodiment of the sequence neutral paradigm of ABI. PV Labs describes a system that “indexes data in real time, permitting the data to be used in various exploitation solutions… for backtracking and identifying nodes of other multi-INT sources”.

Exelis also offers a solution for “activity-based intelligence with forensic capabilities establishing trends and interconnected patterns of life including social interactions, origins of travel and destinations” [4].
Key events act as tips to analysts or the start point for forward or forensic analysis of related data.

22.3.3 Watchboxes and Alerts

A geofence is a virtual perimeter used to trigger actions based on geospatial events [6]. Metzger describes how this concept was used by GMTI analysts to provide real-time indication and warning of vehicle motion.
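
A minimal watchbox sketch is shown below; the rectangular geofence, coordinates, and event schema are invented for illustration. Real systems support arbitrary polygons and route alerts to analysts or collection systems rather than printing them.

```python
# Minimal watchbox sketch: alert when a geolocated event falls inside a
# rectangular geofence (coordinates and event schema are invented).
from dataclasses import dataclass

@dataclass
class Watchbox:
    name: str
    min_lat: float
    min_lon: float
    max_lat: float
    max_lon: float

    def contains(self, lat, lon):
        return (self.min_lat <= lat <= self.max_lat and
                self.min_lon <= lon <= self.max_lon)

watchboxes = [Watchbox("crossing_north", 33.50, 44.30, 33.55, 44.40)]

def check_event(event):
    # event: dict with "lat", "lon", and "type" keys (assumed schema)
    for box in watchboxes:
        if box.contains(event["lat"], event["lon"]):
            print(f"ALERT: {event['type']} detected in {box.name}")

check_event({"lat": 33.52, "lon": 44.35, "type": "vehicle_start"})
```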

Top analysts continually practice discovery and deductive filtering to update watchboxes with new hypotheses, triggers, and thresholds.

Alerts may result in subsequent analysis or collection. For example, alerts may be sent to collection management authorities with instructions to collect on the area of interest with particular capabilities when events and transactions matching certain filters are detected. When alerts go to collection systems, they are typically referred to as “tips” or “cues.”

22.3.4 Track Linking

As described in Chapter 12, automated track extraction algorithms seldom produce complete tracks from an object’s origin to its destination. Various confounding factors like shadows and obstructions cause track breaks. A common feature in analytic environments is the ability to manually link tracklets based on metadata.
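
The sketch below illustrates one plausible scoring rule for proposing candidate links between tracklets; the thresholds, the tracklet format, and the scoring function are assumptions, not a method taken from this chapter.

```python
# Hedged sketch of assisted track linking: propose a candidate link between
# two tracklets when the time gap is short and the implied speed is plausible.
# Thresholds and the tracklet point format are assumptions for illustration.
import math

def link_score(end_point, start_point, max_gap_s=120, max_speed_mps=30):
    # end_point / start_point: (t_seconds, x_meters, y_meters)
    dt = start_point[0] - end_point[0]
    if dt <= 0 or dt > max_gap_s:
        return None                          # gap too long (or out of order)
    dist = math.hypot(start_point[1] - end_point[1], start_point[2] - end_point[2])
    implied_speed = dist / dt
    if implied_speed > max_speed_mps:
        return None                          # object could not have moved that fast
    return 1.0 - implied_speed / max_speed_mps  # higher score = more plausible link

tracklet_a_end = (1000, 500.0, 250.0)
tracklet_b_start = (1045, 800.0, 400.0)
print(link_score(tracklet_a_end, tracklet_b_start))
```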

22.4 Advanced Analytics

Another key feature of many ABI analysis tools is the implementation of “advanced analytics”—automated algorithmic processes that automate routine functions or synthesize large data sets into enriched visualizations.

Density maps allow pattern analysis across large spatial areas. Also called “heat maps,” these visualizations sum event and transaction data to create a raster layer with hot spots in areas with large numbers of activities. Data aggregation is defined over a certain time interval. For example, by setting a weekly time threshold and creating multiple density maps, analysts can quickly understand how patterns of activity change from week to week.
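
A minimal density-map sketch, with invented event coordinates and an assumed weekly time threshold, bins events into a two-dimensional grid per week so that week-to-week rasters can be compared.

```python
# Minimal density-map sketch: bin event locations into a 2-D grid per week so
# week-to-week rasters can be compared (coordinates are invented).
import numpy as np

# events: (week_number, latitude, longitude)
events = np.array([(1, 38.90, -77.03), (1, 38.91, -77.03), (1, 38.90, -77.04),
                   (2, 38.95, -77.00), (2, 38.90, -77.03)])

lat_edges = np.linspace(38.88, 38.98, 11)   # 10 x 10 grid over the AOI
lon_edges = np.linspace(-77.08, -76.98, 11)

for week in (1, 2):
    pts = events[events[:, 0] == week]
    heat, _, _ = np.histogram2d(pts[:, 1], pts[:, 2], bins=[lat_edges, lon_edges])
    print(f"week {week}: hottest cell has {int(heat.max())} events")
```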

Density maps allow analysts to quickly get a sense for where (and when) activities tend to occur. This information is used in different ways depending on the analysis needs. If events are very rare like missile launches or explosions, density maps focus the analyst’s attention to these key events.

In the case of vehicle movement (tracks), density maps identify where most traffic tends to occur. This essentially identifies nondiscrete locations and may serve as a contraindicator for interesting nodes at which to exploit patterns of life. For example, in an urban environment, density maps highlight major shopping centers and crowded intersections. In a remote environment, density maps of movement data may tip analysts to interesting locations.

Other algorithms process track data to find intersections and overlaps. For example, movers with similar speed and heading in close proximity appear as cotravelers. When they are in a line, they may be considered a convoy. When two movers come within a certain proximity for a certain time, this can be characterized as a “meeting.” Mathematical relations with different time and space thresholds identify particular behaviors or compound events.
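
The sketch below illustrates one simple way to flag a "meeting" between two movers whose positions are sampled at the same times; the distance and duration thresholds and the track format are assumptions for illustration.

```python
# Hedged sketch of "meeting" detection: two movers are flagged when they stay
# within a distance threshold for at least a minimum duration. Thresholds and
# the track format are assumptions for illustration.
import math

def meetings(track_a, track_b, max_dist_m=20.0, min_duration_s=300):
    # tracks: lists of (t_seconds, x_meters, y_meters) sampled at the same times
    found, close_since = [], None
    for (t, ax, ay), (_, bx, by) in zip(track_a, track_b):
        if math.hypot(ax - bx, ay - by) <= max_dist_m:
            if close_since is None:
                close_since = t              # co-location interval begins
        else:
            if close_since is not None and t - close_since >= min_duration_s:
                found.append((close_since, t))
            close_since = None
    if close_since is not None and track_a and track_a[-1][0] - close_since >= min_duration_s:
        found.append((close_since, track_a[-1][0]))
    return found

a = [(t, 100.0, 100.0) for t in range(0, 700, 60)]
b = [(t, 110.0, 105.0) for t in range(0, 700, 60)]
print(meetings(a, b))   # one sustained co-location from t=0 to t=660
```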

22.5 Information Sharing and Data Export

Many frameworks feature geoannotations to enhance spatial storytelling. These geospatially and temporally referenced “callout boxes” highlight key events and contain analyst-entered metadata describing a complex series of events and transactions.

Not all analysts operate within an ABI analysis tool but could benefit from the output of ABI analysis. Tracks, image chips, event markers, annotations, and other data in activity sets can be exported in KML, the standard format for Google Earth and many spatial visualization tools. KML files with temporal metadata enable the time slider within Google Earth, allowing animation and playback of the spatial story.
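
A minimal sketch of exporting time-tagged events to KML is shown below; the event fields are assumptions, but the TimeStamp element is what enables the time slider in Google Earth.

```python
# Minimal sketch of exporting time-tagged events to KML so Google Earth's
# time slider can animate them; the event fields are assumptions.
events = [
    {"name": "vehicle stop", "lat": 38.905, "lon": -77.031, "when": "2014-06-01T14:05:00Z"},
    {"name": "meeting",      "lat": 38.907, "lon": -77.029, "when": "2014-06-01T14:40:00Z"},
]

placemarks = "\n".join(
    f"""  <Placemark>
    <name>{e['name']}</name>
    <TimeStamp><when>{e['when']}</when></TimeStamp>
    <Point><coordinates>{e['lon']},{e['lat']},0</coordinates></Point>
  </Placemark>""" for e in events)

kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
<Document>
{placemarks}
</Document>
</kml>"""

with open("activity_set.kml", "w") as f:
    f.write(kml)
```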

22.6 Summary

Over the past 10 years, several tools have emerged that use the same common core features to aid analysts in understanding large amounts of spatial and temporal data. At the 2014 USGIF GEOINT Symposium, tool vendors including BAE Systems [13, 14], Northrop Grumman [15], General Dynamics [16], Analytical Graphics [17, 18], DigitalGlobe, and Leidos [19] showcased advanced analytics tools similar to the above [20]. Georeferenced events and transactions, temporally explored and correlated with other INT sources allow analysts to exploit pattern-of-life elements to uncover new locations and relationships. These tools continue to develop as analysts find new uses for data sources and develop tradecraft for combining data in unforeseen ways.

23

Pattern Analysis of Ubiquitous Sensors

The “Internet of Things” is an emergent paradigm where sensor-enabled digital devices record and stream increasing volumes of information about the patterns of life of their wearer, operator, holder—the so-called user. We, as the users, leave a tremendous amount of “digital detritus” behind in our everyday activities. Data mining reveals patterns of life, georeferences activities, and resolves entities based on their activities and transactions. This chapter demonstrates how the principles of ABI apply to the analysis of humans, their activities, and their networks…and how these practices are employed by commercial companies against ordinary citizens for marketing and business purposes every day.

23.3 Integrating Multiple Data Sources from Ubiquitous Sensors

Most of the diverse sensor data collected by the growing number of commercial sensors is never “exploited.” It is gathered and indexed “just in case” or “because it’s interesting.” When these data are combined, they illustrate the ABI principle of integration before exploitation and show how much understanding can be extracted from several data sets registered in time and space, or simply related to one another.

Emerging research in semantic trajectories describes a pattern of life as a sequence of semantic movements (e.g., “he went to the store”) as a natural language representation of large volumes of spatial data [2]. Some research seeks to cluster similar individuals based on their semantic trajectories rather than trying to correlate individual data points mathematically using correlation coefficients and spatial proximities [3].
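
A minimal sketch of the idea, with invented stay points and place names, renders a sequence of (time, place) pairs as natural-language movements.

```python
# Minimal semantic-trajectory sketch: a sequence of (time, place) stay points
# rendered as natural-language movements (the stay points are invented).
stay_points = [
    ("08:10", "home"),
    ("09:02", "the coffee shop"),
    ("09:45", "the office"),
    ("18:20", "the grocery store"),
    ("19:05", "home"),
]

def semantic_trajectory(points):
    # Each consecutive pair of stay points becomes one semantic movement.
    return [f"went from {p_prev} to {p_next} at {t_next}"
            for (_, p_prev), (t_next, p_next) in zip(points, points[1:])]

for sentence in semantic_trajectory(stay_points):
    print(sentence)
```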

23.4 Summary

ABI data from digital devices, including self-reported activities and transactions, are increasingly becoming a part of analysis for homeland security, law enforcement, and intelligence activities. The proliferation of such digital data will only continue. Methods and techniques to integrate large volumes of this data in real time and analyze it quickly and cogently enough to make decisions are needed to realize the benefit these data provide. This chapter illustrated visual analytic techniques for discovering patterns in this data, but emergent techniques in “big data analytics” are being used by commercial companies to automatically mine and analyze this ubiquitous sensor data at network speed and massive scale.

24

ABI Now and Into the Future

Patrick Biltgen and David Gauthier

The creation of ABI was the proverbial “canary in the coal mine” for the intelligence community. Data is coming, and it will suffocate your analysts. Compounding the problem, newer asymmetric threats can afford to operate with little to no discernible signature, and traditional nation-based threats can hide their signatures from our intelligence capabilities by employing expensive countermeasures. Since its introduction in the mid-2000s, ABI has grown from its roots as a method for geospatial multi-INT fusion for counterterrorism into a catch-all term for automation, advanced analytics, anticipatory analysis, pattern analysis, correlation, and intelligence integration. Each of the major intelligence agencies has adopted its own spin on the technique and is pursuing tradecraft and technology programs to implement the principles of ABI.

The core tenets of ABI become increasingly important in the integrated cyber/geospace environment and the consequent threats emerging in the not-too-distant future.

24.1 An Era of Increasing Change

At the 2014 IATA AVSEC World conference, DNI Clapper said, “Every year, I’ve told Congress that we’re facing the most diverse array of threats I’ve seen in all my years in the intelligence business.”

On September 17, 2014, Clapper unveiled the 2014 National Intelligence Strategy (NIS)—for the first time, unclassified in its entirety—as the blueprint for IC priorities over the next four years. The NIS describes three overarching mission areas (strategic intelligence, current operations, and anticipatory intelligence) as well as four mission focus areas (cyberintelligence, counterterrorism, counterproliferation, and counterintelligence) [3, p. 6]. For the first time, the cyberintelligence mission is recognized as co-equal to the traditional intelligence missions of counterproliferation and counterintelligence (as shown in Figure 24.1). The proliferation of state and non-state cyber actors and the exploitation of information technology is a dominant threat also recognized by the NIC in Global Trends 2030 [2].
Incoming NGA director Robert Cardillo said, “The nature of the adversary today is agile. It adapts. It moves and communicates in a way it didn’t before. So we must change the way we do business” [4]. ABI represents such a change. It is a fundamental shift in tradecraft and technology for intelligence integration and decision advantage that can be evolved from its counterterrorism roots to address a wider range of threats.

24.2 ABI and a Revolution in Geospatial Intelligence

The ABI revolution at NGA began with grassroots efforts in the early 2000s and evolved as increasing numbers of analysts moved from literal exploitation of images and video to nonliteral, deductive analysis of georeferenced metadata.

The importance of GEOINT to the fourth age of intelligence was underscored by the NGA’s next director, Robert Cardillo, who said, “Every modern, local, regional and global challenge—climate change, future energy landscape and more—has geography at its heart.”

NGA also released a vision for its analytic environment of 2020, noting that analysts in the future will need to “spend less time exploiting GEOINT primary sources and more time analyzing and understanding the activities, relationships, and patterns discovered from these sources”—implementation of the ABI tradecraft on worldwide intelligence issues.

Figure 24.4 shows the principle of data neutrality in the form of “normalized data services” and highlights the role of “normalcy baselines, activity models, and pattern-of-life analysis,” as described in Chapters 14 and 15 and embodying Chapter 16’s concept of models. OBP, shown in the center of Figure 24.4, depicts a hierarchical model, perhaps using the graph analytic concepts of Chapter 15, and a nonlinear analytic process that captures knowledge to form judgments and answer intelligence questions. As opposed to the traditional intelligence process that focuses on the delivery of serialized products, the output of the combined ABI/OBP process is an improved understanding of activities and networks.

Sapp also described the operational success story of the Fusion Analysis & Development Effort (FADE) and the Multi-Intelligence Spatial Temporal Tool-suite (MIST), which became operational in 2007 when NRO users recognized that they got more information out of spatiotemporal data when it was animated. NRO designed a “set of tools that help analysts find patterns in large quantities of data” [15]. MIST allows users to temporally and geospatially render millions of data elements, animate them, correlate multiple sources, and share linkages between data using web-based tools. “FADE is used by the Intelligence Community, Department of Defense, and the Department of Homeland Security as an integral part of intelligence cells” [17]. An integrated ABI/multi-INT framework is a core component of the NRO’s future ground architecture [18].

24.5 The Future of ABI in the Intelligence Community

In 1987, the television show Star Trek: The Next Generation, set in the 24th century, introduced the concept of the “communicator badge,” a multifunctional device worn on the right breast of the uniform. The badge represented an organizational identifier, geolocator, health monitoring system, environment sensor, tracker, universal translator, and communications device combined into a 4 cm by 5 cm package.

In megacities, tens of thousands of entities may occupy a single building, and thousands may move in and out of a single city block in a single day. The flow of objects and information in and out of the control volume of these buildings may be the only way to collect meaningful intelligence on humans and their networks because traditional remote sensing modalities will have insufficient resolution to disambiguate entities and their activities. Entity resolution will require thorough analysis of multiple proxies and their interaction with other entity proxies, especially in cases where significant operational security is employed. Absence of a signature of any kind in the digital storm will itself highlight entities of interest. Everything happens somewhere, but if nothing happens somewhere that is a tip to a discrete location of interest.

The methods described in this textbook will become increasingly core to the art of analysis. The customer service industry is already adopting these techniques to provide extreme personalization based upon personal identity and location. Connected data from everyday items networked via the Internet will enable hyperefficient flow of physical materials such as food and energy, as well as of people, inside complex geographic distribution systems. Business systems created to enable this hyperefficiency, often described as “smart grids” and the “Internet of Things,” will generate massive quantities of transaction data. This data, considered nontraditional by the intelligence community, will become a resource for analytic methods such as ABI to disambiguate serious threats from benign activities.

24.6 Conclusion

The Intelligence Community of 2030 will be composed entirely of digital natives born after 9/11 who seamlessly and comfortably navigate a complex data landscape that blurs the distinctions between geospace and cyberspace. The topics in this book will be taught in elementary school.

Our adversaries will have attended the same schools, and counter-ABI methods will be needed to deter, deny, and deceive adversaries who will use our digital dependence against us. Devices—those Internet-enabled self-aware transportation and communications technologies—will increasingly behave like humans. Even your washing machine will betray your pattern of life. LAUNDRY-INT will reveal your activities and transactions…where you’ve been and what you’ve done and when you’ve done it because each molecule of dirt is a proxy for someone or somewhere. Your clothes will know what they are doing, and they’ll even know when they are about to be put on.

In the not too distant future, the boundaries between CYBERINT, SIGINT, and HUMINT will blur, but the rich spatiotemporal canvas of GEOINT will still form the ubiquitous foundation upon which all sources of data are integrated.

25

Conclusion

In many disciplines in the early 21st century, a battle rages between traditionalists and revolutionaries. The former is often composed of artists with an intuitive feel for the business. The latter is composed of the data scientists and analysts who seek to reduce all of human existence to facts, figures, equations, and algorithms.

Activity-Based Intelligence: Principles and Applications introduces methods and technologies for an emergent field but also introduces a similar dichotomy between analysts and engineers. The authors, one of each, learned to appreciate that the story of ABI is not one of victory for either side. In The Signal and the Noise, statistician and analyst Nate Silver notes that in the case of Moneyball, the story of scouts versus statisticians was about learning how to blend two approaches to a difficult problem. Cultural differences between the groups are a great challenge to collaboration and forward progress, but the differing perspectives are also a great strength. In ABI, there is room for both the art and the science; in fact, both are required to solve the hardest problems in a new age of intelligence.

Intelligence analysts in some ways resemble Silver’s scouts. “We can’t explain how we know, but we know” is a phrase that would easily cross the lips of many an intelligence analyst. At times, analysts even have difficulty articulating post hoc the complete reasoning that led to a particular conclusion. This, undeniably, is a very human part of nature. In an incredibly difficult profession, fraught with deliberate attempts to deceive and confuse, analysts are trained from their first day on the job to trust their judgment. It is judgment that is oftentimes unscientific, despite attempts to apply structured analytic techniques (Heuer) or introduce Bayesian thinking (Silver). Complicating this picture is the fissure in the GEOINT analysis profession itself, between traditionalists often focused purely on overhead satellite imagery and revolutionaries, analysts concerned with all spatially referenced data. In both camps, however, intelligence analysis is about making judgments. Despite all the automated tools and algorithms used to process increasingly grotesque amounts of data, at the end of the day a single question falls to a single analyst: “What is your judgment?”

The ABI framework introduces three key principles of the artist frequently criticized by the engineer. First, it seems too simple to look at data in a spatial environment and learn something, but the analysts learned through experience that often the only common metadata is time and location—a great place to start. The second is the preference for correlation over causality. Stories of intelligence are not complete stories with a defined beginning, middle, and end. A causal chain is not needed if correlation focuses analysis and subsequent collection on a key area of interest or the missing clue of a great mystery. The third oft-debated point is the near-obsessive focus on the entity. Concepts like entity resolution, proxies, and incidental collection focus analysts on “getting to who.” This is familiar to leadership analysts, who have for many years focused on high-level personality profiles and psychological analyses. But unlike the focus of leadership analysis—understanding mindset and intent—ABI focuses instead on the most granular level of people problems: people’s behavior, whether those people are tank drivers, terrorists, or ordinary citizens. Through a detailed understanding of people’s movement in space-time, abductive reasoning unlocks possibilities as to the identity and intent of those same people. Ultimately, getting to who gets to the next step—sometimes “why,” sometimes “what’s next.”

Techniques like automated activity extraction, tracking, and data fusion help analysts wade through large, unwieldy data sets. While these techniques are sometimes synonymized with “ABI” or called “ABI enablers,” they are more appropriately termed “ABI enhancers.” There are no examples of such technologies solving intelligence problems entirely absent the analyst’s touch.

The engineer’s world is filled with gold-plated automated analytics and masterfully articulated rule sets for tipping and cueing, but it also comes with a caution. In Silver’s “sabermetrics,” baseball presents possibly the world’s richest data set, a wonderfully refined, well-documented, and above all complete set of data from which to draw conclusions. In baseball, the subjects of data collection do not attempt to deliberately hide their actions or prevent data from being collected on them. The world of intelligence, however, is very different. Intelligence services attempt to gather information on near-peer state adversaries, terrorist organizations, hacker collectives, and many others, all of whom make deliberate, concerted attempts to minimize their data footprint. In the world of state-focused intelligence this is referred to as D&D; in entity-focused intelligence this is called OPSEC. The data is dirty, deceptive, and incomplete. Algorithms alone cannot make sense of this data, crippled by unbounded uncertainty; they need human judgment to achieve their full potential.

NGA director Robert Cardillo, speaking to the Intelligence & National Security Alliance (INSA) in January 2015, stated, “TCPED is dead.” He went on to say that he was not sure whether there would be a single acronym to replace it. “ABI, SOM, and [OBP] — what we call the new way of thinking isn’t important. Changing the mindset is,” Cardillo stated. This acknowledgement properly placed ABI as one of a handful of new approaches in intelligence, with a specific methodology, specific technological needs, and a specific domain for application. Other methodologies will undoubtedly emerge as modern intelligence services adapt to a continually changing and ever more complicated world; these will complement and perhaps one day supplant ABI.
This book provides a deep exposition of the core methods of ABI and a broad survey of ABI enhancers that extend far beyond ABI methods alone. Understanding these principles will ultimately serve to make intelligence analysts more effective at their single goal: delivering information to aid policymakers and warfighters in making complex decisions in an uncertain world.

On Reading the Master’s Thesis of Crimethinc founder Brian Dingledine

Crimethinc is a decentralized network pledged to anonymous collective action.

And yet despite the “anonymous” nature of the organization, every person leaves traces wherever they’ve gone – all the more so in this information age.

Crimethinc’s founder is Brian Dingledine, whom I met for the first and only time at an anarchist convergence in Gainesville, Florida almost 20 years ago.

As his organization keeps coming up in my current research, I decided to give his M.A. thesis – titled Nietzsche and Knowledge: A Study of Nietzsche’s Contribution to Philosophy as the Quest for Truth – a read.

I have to admit a certain level of disappointment in it. It was all academic formulations without any of the intoxicating rhetoric of Inside Front, Rolling Thunder, or the Crimethinc books. Not that I’m surprised, but I have to admit I was hoping for something more interesting.

Per the terms of my agreement with UNC I can’t share it, should it hold some niche interest for anyone else, but I will share the fact that it has already been digitized – so should you want to include the thesis of one of the most important American anarchist organizers of the past 20 years in your own work, contact the UNC Library.

Notes from Information Warfare Principles and Operations

Notes from the book Information Warfare Principles and Operations by Edward Waltz

***

This ubiquitous and preeminent demand for information has shaped the current recognition that war fighters must be information warriors—capable of understanding the value of information in all of its roles: as knowledge, as target, as weapon.

• Data—Individual observations, measurements, and primitive messages form the lowest level. Human communication, text messages, electronic queries, or scientific instruments that sense phenomena are the major sources of data.

• Information—Organized sets of data are referred to as information. The organizational process may include sorting, classifying, or indexing and linking data to place data elements in relational context for subsequent searching and analysis.

• Knowledge—Information, once analyzed and understood, is knowledge. Understanding of information provides a degree of comprehension of both the static and dynamic relationships of the objects of data and the ability to model structure and past (and future) behavior of those objects. Knowledge includes both static content and dynamic processes. In the military context, this level of understanding is referred to as intelligence.

Information is critical for the processes of surveillance, situation assessment, strategy development, and assessment of alternatives and risks for decision making.

Information in the form of intelligence and the ability to forecast possible future outcomes distinguishes the best warriors.

The control of some information communicated to opponents, by deception (seduction and surprise) and denial (stealth), is a contribution that may provide transitory misperception to an adversary.

The supreme form of warfare uses information to influence the adversary’s perception to subdue the will rather than using physical force.

 

The objective of A is to influence and coerce B to act in a manner favorable to A’s objective. This is the ultimate objective of any warring party—to cause the opponent to act in a desired manner: to surrender, to err or fail, to withdraw forces, to cease from hostilities, and so forth. The attacker may use force or other available influences to achieve this objective. The defender may make a decision known to be in favor of A (e.g., to acknowledge defeat and surrender) or may fall victim to seduction or deception and unwittingly make decisions in favor of A.

Three major factors influence B’s decisions and resulting actions (or reactions) to A’s attack.

  • The capacity of B to act;
  • The will of B to act;
  • The perception of B.


Information warfare operations concepts are new because of the increasing potential (or threat) to affect capacity and perception in the information and perception domains as well as the physical domain. These information operations are also new because these domains are vulnerable to attacks that do not require physical force alone. Information technology has not changed the human element of war. It has, however, become the preeminent means by which military and political decision makers perceive the world, develop beliefs about the conflict, and command their forces.

Information targets and weapons can include the entire civil and commercial infrastructure of a nation. The military has traditionally attacked military targets with military weapons, but IW introduces the notion that all national information sources and processes are potential weapons and targets.

Col. Richard Szafranski has articulated such a view, in which the epistemology (knowledge and belief systems) of an adversary is the central strategic target and physical force is secondary to perceptual force [6].

Economic and psychological wars waged over global networks may indeed be successfully conducted by information operations alone.

Information superiority is the end (objective) of information operations (in the same sense that air superiority is an objective), while the operations are the means of conduct (in the sense that tactical air power is but one tool of conflict).

Since the Second World War, the steady increase in the electronic means of collecting, processing, and communicating information has accelerated the importance of information in warfare in at least three ways.

First, intelligence surveillance and reconnaissance (ISR) technologies have extended the breadth of scope and range at which adversaries can be observed and targeted, extending the range at which forces engage. Second, computation and communication technologies supporting the command and control function have increased the rate at which information reaches commanders and the tempo at which engagements can be conducted. The third area of accelerated change is the integration of information technology into weapons, increasing the precision of their delivery and their effective lethality.

The shift is significant because the transition moves the object of warfare from the tangible realm to the abstract realm, from material objects to nonmaterial information objects. The shift also moves the realm of warfare from overt physical acts against military targets in “wartime” to covert information operations conducted throughout “peacetime” against even nonmilitary targets. This transition toward the dominant use of information (information-based warfare) and even the targeting of information itself (information warfare, proper) [8] has been chronicled by numerous writers.


According to the Tofflers, the information age shift is bringing about analogous changes in the conduct of business and warfare in ten areas.

  1. Production—The key core competency in both business and warfare is information production. In business, the process knowledge and automation of control, manufacturing, and distribution is critical to remain competitive in a global market; in warfare, the production of intelligence and dissemination of information is critical to maneuvering, supplying, and precision targeting.
  2. Intangible values—The central resource for business and warfare has shifted from material values (property resources) to intangible information. The ability to apply this information discriminates between success and failure.
  3. Demassification—As information is efficiently applied to both business and warfare, production processes are shifting from mass production (and mass destruction) to precision and custom manufacturing (and intelligence collection, processing, and targeting).
  4. Worker specialization—The workforce of workers and warriors that performs the tangible activities of business and war is becoming increasingly specialized, requiring increased training and commitment to specialized skills.
  5. Continuous change—Continuous learning and innovation characterize the business and workforces of information-based organizations because the information pool on which the enterprise is based provides broad opportunity for understanding and improvement. Peter Senge has described the imperative for these learning organizations in the new information-intensive world [12].
  6. Scale of operations—As organizations move from mass to custom production, the teams of workers who accomplish tangible activities within organizations will become smaller, more complex teams with integrated capabilities. Business units will apply integrated process teams, and military forces will move toward integrated force units.
  7. Organization—Organizations with information networks will transition from hierarchical structure (information flows up and down) toward networks where information flows throughout the organization. Military units will gain flexibility and field autonomy.
  8. Management—Integrated, interdisciplinary units and management teams will replace “stovepiped” leadership structures of hierarchical management organizations.
  9. Infrastructure—Physical infrastructures (geographic locations of units, physical placement of materials, physical allocation of resources) will give way to infrastructures that are based upon the utility of information rather than physical location, capability, or vulnerability.
  10. Acceleration of processes—The process loops will become tighter and tighter as information is applied to deliver products and weapons with increasing speed. Operational concurrence, “just-in-time” delivery, and near-real-time control will characterize business and military processes.


an information-based age in which:

  • Information is the central resource for wealth production and power.
  • Wealth production will be based on ownership of information—the creation of knowledge and delivery of custom products based on that knowledge.
  • Conflicts will be based on geoinformation competitions over ideologies and economies.
  • The world is trisected into nations still with premodern agricultural capabilities (first wave), others with modern industrial age capabilities (second wave), and a few with postmodern information age capabilities (third wave).

 

The ultimate consequences, not only for wealth and warfare, will be the result of technology’s impact on infrastructure, which in turn influences the social and political structure of nations and, finally, the global collection of nations and individuals.

Table 1.2 illustrates one cause-and-effect cascade that is envisioned. The table provides the representative sequence of influences, according to some futurists, that has the potential even to modify our current structure of nation states, which are defined by physical boundaries to protect real property.

 

“Cyberwar is Coming!” by RAND authors John Arquilla and David Ronfeldt distinguished four basic categories of information warfare based on the expanded global development of information infrastructures (Table 1.3) [16].

Net warfare (or netwar)—This form is information-related conflict waged against nation states or societies at the highest level, with the objective of disrupting, damaging, or modifying what the target population knows about itself or the world around it.

The weapons of netwar include diplomacy, propaganda and psychological campaigns, political and cultural subversion, deception or interference with the local media, infiltration of computer databases, and efforts to promote dissident or opposition movements across computer networks [17].

Political warfare—Political power, exerted by institution of national policy, diplomacy, and threats to move to more intense war forms, is the basis of political warfare between national governments.

Economic warfare—Conflict that targets economic performance through actions to influence the economic factors (trade, technology, trust) of a nation intensifies political warfare from the political level to a more tangible level.

Command and control warfare (C2W)—The most intense level is conflict by military operations that target an opponent’s military command and control.

 

The relationships between these forms of conflict may be viewed as sequential and overlapping when mapped on the conventional conflict time line that escalates from peace to war before de-escalation to return to peace.

Many describe netwar as an ongoing process of offensive, exploitation, and defensive information operations, with degrees of intensity moving from daily unstructured attacks to focused net warfare of increasing intensity until militaries engage in C2W.

Martin Libicki has proposed seven categories of information warfare that identify specific types of operations [21].

  1. Command and control warfare—Attacks on command and control systems to separate command from forces;
  2. Intelligence-based warfare—The collection, exploitation, and protection of information by systems to support attacks in other warfare forms;
  3. Electronic warfare—Communications combat in the realms of the physical transfer of information (radioelectronic) and the abstract formats of information (cryptographic);
  4. Psychological warfare—Combat against the human mind;
  5. Hacker warfare—Combat at all levels over the global information infrastructure;
  6. Economic information warfare—Control of economics via control of information by blockade or imperialistic controls;
  7. Cyber warfare—Futuristic abstract forms of terrorism, fully simulated combat, and reality control are combined in this warfare category and are considered by Libicki to be relevant to national security only in the far term.

 

Author Robert Steele has used two dimensions to distinguish four types of warfare.

Steele’s taxonomy is organized by dividing the means of conducting warfare into two dimensions.

  • The means of applying technology to conduct the conflict is the first dimension. High-technology means includes the use of electronic information-based networks, computers, and data communications, while low-technology means includes telephone voice, newsprint, and paper-based information.
  • The type of conflict is the second dimension, either abstract conflict (influencing knowledge and perception) or physical combat.

the principles of information operations apply to criminal activities at the corporate and personal levels (Table 1.5). Notice that these are simply domains of reference, not mutually exclusive domains of conflict; an individual (domain 3), for example, may attack a nation (domain 1) or a corporation (domain 2).

Numerous taxonomies of information warfare and its components may be formed, although no single taxonomy has been widely adopted.

1.5.1 A Functional Taxonomy of Information Warfare

A taxonomy may be constructed on the basis of information warfare objectives, functions (countermeasure tactics), and effects on targeted information infrastructures [29]. The structure of such a taxonomy (Figure 1.3) has three main branches formed by the three essential security properties of an information infrastructure and the objectives of the countermeasures for each.

Availability of information services (processes) or information (content) may be attacked to achieve disruption or denial objectives.

Integrity of information services or content may be attacked to achieve corruption objectives (e.g., deception, manipulation of data, enhancement of selective data over others).

Confidentiality (or privacy) of services or information may be attacked to achieve exploitation objectives.

  • Detection—The countermeasure may be (1) undetected by the target, (2) detected on occurrence, or (3) detected at some time after the occurrence.
  • Response—The targeted system, upon detection, may respond to the countermeasure in several degrees: (1) no response (unprepared), (2) initiate audit activities, (3) mitigate further damage, (4) initiate protective actions, or (5) recover and reconstitute.

One type of attack, even undetected, may have minor consequences, for example, while another attack may bring immediate and cascading consequences, even if it is detected with response. For any given attack or defense plan, this taxonomy may be used to develop and categorize the countermeasures, their respective counter-countermeasures, and the effects to target systems.

The Air Force defines information warfare as any action to deny, exploit, corrupt, or destroy the enemy’s information and its functions; protecting ourselves against those actions; and exploiting our own military information functions.

 1.6 Expanse of the Information Warfare Battlespace

As indicated in the definitions, the IW battlespace extends beyond the information realm, dealing with information content and processes in all three realms introduced earlier in our basic functional model of warfare.

  • The physical realm—Physical items may be attacked (e.g., destruction or theft of computers; destruction of facilities, communication nodes or lines, or databases) as a means to influence information. These are often referred to as “hard” attacks.
  • The information infrastructure realm—Information content or processes may be attacked electronically (through electromagnetic transmission or over accessible networks, by breaching information security protections) to directly influence the information process or content without a physical impact on the target. These approaches have been distinguished as indirect or “soft” attacks.
  • The perceptual realm—Finally, attacks may be directly targeted on the human mind through electronic, printed, or oral transmission paths. Propaganda, brainwashing, and misinformation techniques are examples of attacks in this realm.

 

Viewed from an operational perspective, information warfare may be applied across all phases of operations (competition, conflict, to warfare) as illustrated in Figure 1.5.

(Some lament the nomenclature “information warfare” because its operations are performed throughout all of the phases of traditional “peace.” Indeed, net warfare is not at all peaceful, but it does not have the traditional outward characteristics of war.)

Because information attacks are occurring in times of peace, the public and private sectors must develop a new relationship to perform the functions of indication and warning (I&W), security, and response.

1.7 The U.S. Transition to Information Warfare

The U.S. Joint Chiefs of Staff “Joint Vision 2010,” published in 1996, established “information superiority” as the critical enabling element that integrates and amplifies four essential operational components of twenty-first century warfare.

  1. Dominant maneuver to apply speed, precision, and mobility to engage targets from widely dispersed units;
  2. Precision engagement of targets by high-fidelity acquisition, prioritization of targets, and joint force command and control;
  3. Focused logistics to achieve efficient support of forces by integrating information about needs, available transportation, and resources;
  4. Full-dimension protection of systems, processes, and forces through awareness and assessment of threats in all dimensions (physical, information, perception).

Nuclear and information war are both technology-based concepts of warfare, but they are quite different. Consider first several similarities. Both war forms are conceptually feasible and amenable to simulation with limited scope testing, yet both are complex to implement, and it is difficult to accurately predict outcomes. They both need effective indications and warnings, targeting, attack tasking, and battle damage assessment. Nevertheless, the contrasts in the war forms are significant. Information warfare faces at least four new challenges beyond those faced by nuclear warfare.

The first contrast in nuclear and information war is the obvious difference in the physical effects and outward results of attacks. A nuclear attack on a city and an information warfare attack on the city’s economy and infrastructure may have a similar functional effect on its ability to resist an occupying force, but the physical effects are vastly different.

 

Second, the attacker may be difficult to identify, making the threat of retaliatory targeting challenging. Retaliation in kind and in proportion may be difficult to implement because the attacker’s information dependence may be entirely different than the defender’s.

The third challenge is that the targets of information retaliation may include private sector information infrastructures that may incur complex (difficult to predict) collateral damages.

Finally, the differences between conventional and nuclear attacks are distinct. This is not so with information operations, which may begin as competition, escalate to conflict, and finally erupt into large-scale attacks that may have the same functional effects as some nuclear attacks.

In the future, the IW continuum may be able to smoothly and precisely escalate in the dimensions of targeting breadth, functional coverage, and impact intensity. (This does not imply that accurate effects models exist today and that the cascading effects of information warfare are as well understood as nuclear effects, which have been thoroughly tested for over three decades.)

1.8 Information Warfare and the Military Disciplines

Organized information conflict encompasses many traditional military disciplines, requiring a new structure to orchestrate offensive and defensive operations at the physical, information, and perceptual levels of conflict.

1.9 Information and Peace

Information technology not only provides new avenues for conflict and warfare, but it also provides new opportunities for defense, deterrence, deescalation, and peace.

In War and Anti-War, the Tofflers argue that while the third-wave war form is information warfare, the third-wave peace form is also driven by the widespread availability of information to minimize misunderstanding of intentions, actions, and goals of competing parties. Even as information is exploited for intelligence purposes, the increasing availability of this information has the potential to reduce uncertainty in nation states’ understanding of each other. Notice, however, that information technology is a two-edged sword, offering the potential for cooperation and peace, or its use as an instrument of conflict and war. As with nuclear technology, humankind must choose the application of the technology.

information resources provide powerful tools to engage nations in security dialogue and to foster emerging democracies by the power to communicate directly to those living under hostile, undemocratic regimes. The authors recommended four peace-form activities that may be tasked to information peacemakers.

  1. Engage undemocratic states and aid democratic traditions—Information tools, telecommunications, and broadcast and computer networks provide a means to supply accurate news and unbiased editorials to the public in foreign countries, even where information is suppressed by the leadership.
  2. Protect new democracies—Ideological training in areas such as democratic civil/military relationships can support the transfer from military rule to democratic societies.
  3. Prevent and resolve regional conflicts—Telecommunication and network information campaigns provide a means of suppressing ethnonationalist propaganda while offering an avenue to provide accurate, unbiased reports that will abate rather than incite violence and escalation.
  4. Deter crime, terrorism, and proliferation, and protect the environment—Information resources that supply intelligence, indications and warnings, and cooperation between nations can be used to counter transnational threats in each of these areas.

1.10 The Current State of Information Warfare

At the writing of this book, it has been well over a decade since the concept of information warfare was introduced as a critical component of the current revolution in military affairs (RMA).

1.10.1 State of the Military Art

The U.S. National Defense University has established a School of Information Warfare and Strategy curriculum for senior officers to study IW strategy and policy and to conduct directed research at the strategic level.

The United States is investigating transitional and future legal bases for the conduct of information warfare because the character of some information attacks (anonymity, lack of geospatial focus, ability to execute without a “regulated force” of conventional “combatants,” and use of unconventional information weapons) is not consistent with currently accepted second-wave definitions in the laws of armed conflict.

1.10.2 State of Operational Implementation

The doctrine of information dominance (providing dominant battlespace awareness and battlespace visualization) has been established as the basis for structuring all command and control architectures and operations. The services are committed to a doctrine of joint operations, using interoperable communication links and exchange of intelligence, surveillance, and reconnaissance (ISR) in a global command and control system (GCCS) with a common software operating environment (COE).

1.10.3 State of Relevant Information Warfare Technology

The technology of information warfare, unlike previous war forms, is driven by commercial development rather than classified military research and development.

Key technology areas now in development include the following:

  • Intelligence, surveillance, and reconnaissance (ISR) and command and control (C2) technologies provide rapid, accurate fusion of all-source data and mining of critical knowledge to present high-level intelligence to information warfare planners. These technologies are applied to understand geographic space (terrain, road networks, physical features) as well as cyberspace (computer networks, nodes, and link features).
  • Information security technologies include survivable networks, multilevel security, network and communication security, and digital signature and advanced authentication technologies.
  • Information technologies, being developed in the commercial sector and applicable to information-based warfare, include all areas of network computing, intelligent mobile agents to autonomously operate across networks, multimedia data warehousing and mining, and push-pull information dissemination.
  • Electromagnetic weapon technologies, capable of nonlethal attack of information systems for insertion of information or denial of service.
  • Information creation technologies, capable of creating synthetic and deceptive virtual information (e.g., morphed video, synthetic imagery, duplicated virtual realities).

1.11 Summary

Information warfare is real. Information operations are being conducted by both military and non-state-sponsored organizations today. While the world has not yet witnessed nor fully comprehended the implications of a global information war, it is now enduring an ongoing information competition with sporadic conflicts in the information domain.

 

Szafranski, R., (Col. USAF), “A Theory of Information Warfare: Preparing for 2020,” Airpower Journal, Vol. 9, No. 1, Spring 1995.

Part I
Information-Based Warfare

Information, as a resource, is not like the land or material resources that were central to the first and second waves.

Consider several characteristics of the information resource that make it unique, and difficult to quantify.

  • Information is abstract—It is an intangible asset; it can take the form of an entity (a noun—e.g., a location, description, or measurement) or a process (a verb—e.g., a lock combination, an encryption process, a patented chemical process, or a relationship).
  • Information has multiple, even simultaneous uses—The same unit of information (e.g., the precise location and frequency of a radio transmitter) can be used to exploit the transmissions, to selectively disrupt communications, or to precisely target and destroy the transmitter. Information about the weather can be used simultaneously by opposing forces, to the benefit of both sides.
  • Information is inexhaustible, but its value may perish with time—Information is limitless; it can be discovered, created, transformed, and repeated, but its value is temporal: recent information has actionable value, old information may have only historical value.
  • Information’s relationship to utility is complex and nonlinear—The utility or value of information is not a function simply of its volume or magnitude. Like iron ore, the utility is a function of content, or purity; it is a function of the potential of data, the content of information, and the impact of knowledge in the real world. This functional relationship from data to the impact of knowledge is complex and unique to each application of information technology.

2.1 The Meaning of Information

The observation process acquires data about some physical process (e.g., combatants on the battlefield, a criminal organization, a chemical plant, an industry market) by the measurement and quantification of observed variables. The observations are generally formatted into reports that contain items such as time of observation, location, collector (or sensor or source) and measurements, and the statistics describing the level of confidence in those measurements. An organization process converts the data to information by indexing the data and organizing it in context (e.g., by spatial, temporal, source, content, or other organizing dimensions) in an information base for subsequent retrieval and analysis. The understanding process creates knowledge by detecting or discovering relationships in the information that allow the data to be explained, modeled, and even used to predict future behavior of the process being observed. At the highest (and uniquely human) level, wisdom is the ability to effectively apply knowledge to implement a plan or action to achieve a desired goal or end state.
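As a minimal sketch of this observation–organization–understanding chain, not taken from the book, the Python below indexes invented observation reports by location (information) and then derives a trivial behavioral summary (knowledge) that could support a crude prediction.

```python
from collections import defaultdict

# Data: primitive observation reports (time, location, source, measurement, confidence).
reports = [
    {"t": 1, "loc": "grid-A", "sensor": "radar-1", "speed_kmh": 40, "conf": 0.9},
    {"t": 2, "loc": "grid-A", "sensor": "radar-1", "speed_kmh": 42, "conf": 0.8},
    {"t": 3, "loc": "grid-B", "sensor": "radar-2", "speed_kmh": 41, "conf": 0.7},
]

# Organization: index the data in context (here, by location) for later search and analysis.
information = defaultdict(list)
for r in reports:
    information[r["loc"]].append(r)

# Understanding: a trivial "model" of behavior -- average observed speed per location --
# standing in for the kind of relationship that lets past behavior inform prediction.
knowledge = {loc: sum(r["speed_kmh"] for r in rs) / len(rs)
             for loc, rs in information.items()}

print(knowledge)   # e.g., {'grid-A': 41.0, 'grid-B': 41.0}
```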

We also use the terminology creation or discovery to refer to the effect of transforming data into useful knowledge. Several examples of discovering previously unknown knowledge by the processes of analyzing raw data include the detection or location of a battlefield target, the identification of a purchasing pattern in the marketplace, distinguishing a subtle and threatening economic action, the cataloging of the relationships between terrorist cells, or the classification of a new virus on a computer network.

The authors of The Measurement of Meaning have summed up the issue:

[Meaning] certainly refers to some implicit process or state which must be inferred from observables, and therefore it is a sort of variable that contemporary psychologists would avoid dealing with as long as possible. And there is also, undoubtedly, the matter of complexity—there is an implication in the philosophical tradition that meanings are uniquely and infinitely variable, and phenomena of this kind do not submit readily to measurement [2].

In the business classic on the use of information, The Virtual Corporation, Davidow and Malone [3] distinguish four categories of information (Table 2.2).

  • Content information—This describes the state of physical or abstract items. Inventories and accounts maintain this kind of information; the military electronic order of battle (EOB) is content information.
  • Form information—This describes the characteristics of the physical or abstract items; the description of a specific weapon system in the EOB is a form.
  • Behavior information—In the form of process models this describes the behavior of objects or systems (of objects); the logistics process supporting a division on the battlefield, for example, may be modeled as behavior information describing supply rate, capacity, and volume.
  • Action information—This is the most complex form, which describes reasoning processes that convert information to knowledge, upon which actions can be taken. The processes within command and control decision support tools are examples of Davidow’s action information category.

In a classic text on strategic management of information for business, Managing Information Strategically, the authors emphasized the importance of understanding its role in a particular business to develop business strategy first, then to develop information architectures.

  • Information leverage—In this strategy, IT enables process innovation, amplifying competitive dimensions. An IBW example of this strategy is the application of data links to deliver real-time targeting to weapons (sensor-to-shooter applications) to significantly enhance precision and effectiveness.
  • Information product—This strategy captures data in existing processes to deliver information or knowledge (a by-product) that has a benefit (market value) in addition to the original process. Intelligence processes in IBW that collect vast amounts of data may apply this strategy to utilize the inherent information by-products more effectively. These by-products may support civil and environmental applications (markets) or support national economic competitive processes [6].
  • Information business—The third strategy “sells” excess IT capacity, or information products and services. The ability to share networked computing across military services or applications will allow this strategy to be applied to IBW applications, within common security boundaries.

2.2 Information Science

We find useful approaches to quantifying data, information, and knowledge in at least six areas: the epistemology and logic branches of philosophy, the engineering disciplines of information theory and decision theory, semiotic theory, and knowledge management. Each discipline deals with concepts of information and knowledge from a different perspective, and each contributes to our understanding of these abstract resources. In the following sections, we summarize the approach to define and study information or knowledge in each area.

2.2.1 Philosophy (Epistemology)

The study of philosophy, concerned with the issues of meaning and significance of human experience, presumes the existence of knowledge and focuses on the interpretation and application of knowledge. Because of this, we briefly consider the contribution of epistemology, the branch of philosophy dealing with the scope and extent of human knowledge, to information science.

Representative of current approaches in epistemology, philosopher Immanuel Kant [7] distinguished knowledge about things in space and time (phenomena) and knowledge related to faith about things that transcend space and time (noumena). Kant defined the processes of sensation, judgment, and reasoning that are applied to derive knowledge about the phenomena. He defined three categories of knowledge derived by judgment:

(1) analytic a priori knowledge is analytic, exact, and certain (such as purely theoretical, imaginary constructs like infinite straight lines), but often uninformative about the world in which we live;

(2) synthetic a priori knowledge is purely intuitive knowledge derived by abstract synthesis (such as purely mathematical statements and systems like geometry, calculus, and logic), which is exact and certain; and

(3) synthetic a posteriori knowledge about the world, which is subject to human sense and perception errors.

2.2.2 Philosophy (Logic)

Philosophy has also contributed the body of logic that has developed the formal methods to describe reasoning. Logic uses inductive and deductive processes that move from premises to conclusions through the application of logical arguments.

The general characteristics of these forms of reasoning can be summarized.

  1. Inductive arguments can be characterized by a “degree of strength” or “likelihood of validity,” while deductive arguments are either valid (the premises are true and the conclusion must always be true) or invalid (as with the non sequitur, in which the conclusion does not follow from the premises). There is no measure of degree or uncertainty in deductive arguments; they are valid or invalid—they provide information or nothing at all.
  2. The conclusions of inductive arguments are probably, but not necessarily, true if all of the premises are true because all possible cases can never be observed. The conclusions of a deductive argument must be true if all of the premises are true (and the argument logic is correct).
  3. Inductive conclusions contain information (knowledge) that was not implicitly contained in the premises. Deductive conclusions contain information that was implicitly contained in the premises. The deductive conclusion makes that information (knowledge) explicit.

To the logician, deduction cannot provide “new knowledge” in the sense that the conclusion is implicit in the premises.

2.2.3 Information Theory

The engineering science of information theory provides a statistical method for quantifying information for the purpose of analyzing the transmission, formatting, storage, and processing of information.

 

2.2.4 Decision Theory

Decision theory provides analytical means to make decisions in the presence of uncertainty and risk by choosing among alternatives. The basis of this choice is determined by quantifying the relative consequences of each alternative and choosing the best alternative to optimize some objective function.

Decision theory distinguishes two categories of utility functions that provide decision preferences on the basis of value or risk [12].

  • Value—These utility functions determine a preferred decision on the basis of value metrics where no uncertainty is present.
  • Risk—These functions provide a preferred decision in the presence of uncertainty (and therefore a risk that the decision may not deliver the highest utility).

While not offering a direct means of measuring information per se, utility functions provide a means of measuring the effect of information on the application in which it is used. The functions provide an intuitive means of measuring effectiveness of information systems.
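A minimal sketch of the risk case follows; the alternatives, probabilities, and utility values are invented purely for illustration, and the "best alternative" is simply the one maximizing expected utility.

```python
# Choosing among alternatives by expected utility under uncertainty.
# The alternatives, outcome probabilities, and utilities are invented for illustration.
alternatives = {
    "strike_now":       [(0.6, 100), (0.4, -80)],   # list of (probability, utility)
    "wait_and_collect": [(0.9, 60), (0.1, -10)],
    "do_nothing":       [(1.0, 0)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(alternatives, key=lambda a: expected_utility(alternatives[a]))
for name, outcomes in alternatives.items():
    print(f"{name}: EU = {expected_utility(outcomes):.1f}")
print("preferred alternative:", best)
```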

2.2.5 Semiotic Theory

C. S. Peirce (1839–1914) introduced philosophical notions, including a “semiotic” logic system that attempts to provide a “critical thinking” method for conceptual understanding of observations (data) using methods of exploratory data analysis [13]. This system introduced the notion of abduction as a means of analyzing and providing a “best explanation” for a set of data. Expanding on the inductive and deductive processes of classical logic, Peirce viewed four stages of scientific inquiry [14], sketched in code after the list below.
  • Abduction explores a specific set of data and creates plausible hypotheses to explain the data.
  • Deduction is then applied to refine the hypothesis and develops a testable means of verifying the hypothesis using other premises and sets of data.
  • Induction then develops the general explanation that is believed to apply to all sets of data viewed together in common. This means the explanation should apply to future sets of data.
  • Deduction is finally applied, using the induced template to detect the presence of validated explanations to future data sets.
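The toy walk-through below is only a sketch of the four stages over invented numeric "observations"; the hypothesis form, tolerance, and data are all assumptions, not anything from Peirce or the book.

```python
# A toy walk through Peirce's four stages over simple numeric "observations".
first_sample = [4.9, 5.1, 5.0, 4.8]

# Abduction: propose a plausible explanation for this specific data set.
hypothesis = {"mean": sum(first_sample) / len(first_sample)}   # "values cluster near ~5"

# Deduction: derive a testable expectation from the hypothesis.
def expected(h, tolerance=0.5):
    return lambda x: abs(x - h["mean"]) <= tolerance

# Induction: check the expectation against further sample sets and, if it
# holds in common, adopt it as a general template.
more_samples = [[5.2, 4.7, 5.0], [4.9, 5.3, 5.1]]
test = expected(hypothesis)
generalized = all(all(test(x) for x in s) for s in more_samples)

# Deduction again: apply the induced template to detect the validated
# explanation (or anomalies) in future data.
future = [5.05, 4.95, 7.9]
if generalized:
    print([("fits template" if test(x) else "anomaly") for x in future])
```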

2.2.6 Knowledge Management

The management of information, in all of its forms, is a recognized imperative in third-wave business as well as warfare. The discipline of “knowledge management” developed in the business domain emphasizes both information exploitation (identified in Table 2.5) and information security as critical for businesses to compete in the third-wave marketplace.

Information Value (Iv) = [Assets − Liabilities] − Total Cost of Ownership

where the asset and liability terms include:

At = the assets derived from the information at time of arrival;
An = the assets if the information did not arrive;
Lt = the liabilities derived from the information at time of arrival;
Ln = the liabilities if the information did not arrive;

and the total cost of ownership, In, is made up of the component costs:

I1 = the cost to generate the information;
I2 = the cost to format the information;
I3 = the cost to reformat the information;
I4 = the cost to duplicate the information;
I5 = the cost to transmit or transport (distribute) the information;
I6 = the cost to store the information;
I7 = the cost to use the information, including retrieval.
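A worked example follows as a sketch only. It assumes (the notes do not state this explicitly) that the asset and liability terms enter as differences relative to the no-information case and that In is the sum of I1 through I7; the dollar figures are invented.

```python
# Illustrative calculation of information value from the terms listed above.
# Assumptions: assets and liabilities are taken relative to the no-information
# case, and In is the sum of the component costs I1..I7. All figures invented.
A_t, A_n = 120_000.0, 70_000.0   # assets with / without the information
L_t, L_n = 15_000.0, 25_000.0    # liabilities with / without the information
component_costs = {              # I1..I7, total cost of ownership
    "generate": 4_000.0, "format": 500.0, "reformat": 300.0,
    "duplicate": 200.0, "distribute": 1_000.0, "store": 800.0, "use": 1_200.0,
}

assets = A_t - A_n               # net assets attributable to the information
liabilities = L_t - L_n          # net liabilities attributable to it
I_n = sum(component_costs.values())

I_v = (assets - liabilities) - I_n
print(f"Information value Iv = {I_v:,.0f}")   # 50,000 - (-10,000) - 8,000 = 52,000
```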

 

The objective of knowledge management is ultimately to understand the monetary value of information. These measures of the utility of information in the discipline of business knowledge management are based on capital values.

 

2.4 Measuring the Utility of Information in Warfare

The relative value of information can be described in terms of the information performance within the information system, or in terms of the effectiveness (which relates the utility), or the ultimate impact of information on the user.

Utility is a function of both the accuracy and timeliness of information delivered to the user. The utility of estimates of the state of objects, complex situations, or processes is dependent upon accuracies of locations of objects, behavioral states, identities, relationships, and many other factors. Utility is also a function of the timeliness of information, which is often perishable and valueless after a given period. The relationships between utility and many accuracy and timeliness variables are often nonlinear and always highly dependent upon both the data collection means and user application.
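To illustrate the perishability point only, here is a toy utility model in Python; the functional form (accuracy weighted by an exponential freshness decay) and the half-life constant are invented for illustration, not taken from the book, and real utility curves are application-specific and often far less smooth.

```python
def information_utility(accuracy, age_hours, half_life_hours=6.0):
    """Toy utility model: utility rises with accuracy (0..1) and decays
    as the information ages. Illustrative only."""
    freshness = 0.5 ** (age_hours / half_life_hours)   # halves every half_life_hours
    return accuracy * freshness

for age in (0, 6, 24):
    print(f"accuracy=0.9, age={age:>2}h -> utility={information_utility(0.9, age):.2f}")
# The same report is worth far less a day later, even at the same accuracy.
```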

The means by which the utility of information and derived knowledge is enhanced in practical systems usually includes one (or all) of four categories of actions.

  • Acquire the right data—The type, quality, accuracy, timeliness, and rate of data collected have a significant impact on knowledge delivered.
  • Optimize the extraction of knowledge—The processes of transforming data to knowledge may be enhanced or refined to improve efficiency, throughput, end-to-end speed, or knowledge yield.
  • Distribute and apply the knowledge—The products of information processes must be delivered to users on time, in understandable formats, and in sufficient quantity to provide useful comprehension to permit actions to be taken.
  • Ensure the protection of information—In the competitive and conflict environments, information and the collection, processing, and distribution channels must be protected from all forms of attack. Information utility is a function of both reliability for and availability to the user.

Metrics in a typical military command and control system may be used to measure information performance, effectiveness, and military utility, respectively:

  • Sensor detection performance at the data level influences the correlation performance that links sensor data, and therefore the inference process that detects an opponent’s hostile action (event).
  • Event detection performance (timeliness and accuracy) influences the effectiveness of reasoning processes to assess the implications of the event.
  • Effectiveness of the assessment of the impact on military objectives influences the decisions made by commanders and, in turn, the outcome of those responses. This is a measure of the utility of the entire information process. It is at this last step that knowledge is coupled to military decisions and ultimately to military utility.

2.5 Translating Science to Technology

information, as process and content, is neither static nor inorganic. To view information as the static organized numbers in a “database” is a limited view of this resource. Information can be dynamic process models, capable of describing complex future behavior based on current measurements. Information also resides in humans as experience, “intuitive” knowledge, and other perceptive traits that will always make the human the valuable organic element of information architectures.

3

The Role of Technology in Information-Based Warfare

We now apply the information science principles developed in the last chapter to describe the core information-processing methods of information-based warfare: acquisition of data and creation of “actionable” knowledge.

The knowledge-creating process is often called exploitation—the extraction of military intelligence (knowledge) from collected data. These are the processes at the heart of intelligence, surveillance, and reconnaissance (ISR) systems and are components of most command and control (C2) systems. These processes must be understood because they are, in effect, the weapon factories of information-based warfare and the most lucrative targets of information warfare [1].

3.1 Knowledge-Creation Processes

Knowledge, as described in the last chapter, is the result of transforming raw data to organized information, and then to explanations that model the process from which the data was observed. The basic reasoning processes that were introduced to transform data into understandable knowledge apply the fundamental functions of logical inference.

In each reasoning case, collected data is used to make more general or more specific inferences about patterns in the data to detect the presence of entities, events, or relationships that can be used to direct the actions of the user to achieve some objective.

In the military or information warfare domain, these methods are used in two ways. First, both abduction (dealing with specific cases) and induction (extending to general application) are used to learn templates that describe discernible patterns of behavior or structure (of an opponent). Because both are often used, we will call this stage abduction-induction [2].

Second, deductive processes are used in the exploitation or intelligence analysis to detect and understand situations and threats based on the previously learned patterns. This second phase often occurs in a hierarchy of knowledge elements.

3.2 Knowledge Detection and Discovery

Two primary categories of knowledge-creation processes can be distinguished, based on their approach to inference. Each is essential to information-based warfare exploitation processes that seek to create knowledge from volumes of data described.

The abductive-inductive process, data mining, discovers previously unrecognized patterns in data (new knowledge about characteristics of an unknown pattern class) by searching for patterns (relationships in data) that are in some sense “interesting.” The discovered candidates are usually presented to human users for analysis and validation before being adopted as general cases.

The deductive exploitation process, data fusion, detects the presence of previously known patterns in many sources of data (new knowledge about the existence of a known pattern in the data) by searching for specific templates in sensor data streams to understand a local environment.
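The sketch below is one hedged illustration of that split, using invented event records: a mining step abduces a candidate pattern from historical data, and a fusion-style step then deductively watches a new stream for that known template. Neither the data nor the helper functions come from the book.

```python
# Toy contrast between the two processes, using invented event records.
historical = [
    {"site": "depot", "precursor": "fuel_delivery", "launch_within_24h": True},
    {"site": "depot", "precursor": "fuel_delivery", "launch_within_24h": True},
    {"site": "depot", "precursor": "crew_rotation", "launch_within_24h": False},
]

# Data mining (abductive-inductive): search historical data for an "interesting"
# pattern -- here, a precursor event that usually precedes a launch.
def mine_patterns(records, min_confidence=0.8):
    by_precursor = {}
    for r in records:
        by_precursor.setdefault(r["precursor"], []).append(r["launch_within_24h"])
    return {precursor: sum(outcomes) / len(outcomes)
            for precursor, outcomes in by_precursor.items()
            if sum(outcomes) / len(outcomes) >= min_confidence}

templates = mine_patterns(historical)          # e.g., {'fuel_delivery': 1.0}

# Data fusion (deductive): watch a new data stream for the previously learned template.
new_reports = [{"site": "depot", "precursor": "crew_rotation"},
               {"site": "depot", "precursor": "fuel_delivery"}]
for report in new_reports:
    if report["precursor"] in templates:
        print("Template matched -- possible launch preparation at", report["site"])
```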

datasets used by these processes for knowledge creation are incomplete and dynamic and contain data contaminated by noise. These factors make the following process characteristics apply:

  • Pattern descriptions—Data mining seeks to induce general pattern descriptions (reference patterns, templates, or matched filters) to characterize data understood, while data fusion applies those descriptions to detect the presence of patterns in new data.
  • Uncertainty in inferred knowledge—The data and reference patterns are uncertain, leading to uncertain beliefs or knowledge.
  • Dynamic state of inferred knowledge—The process is sequential and inferred knowledge is dynamic, being refined as new data arrives.
  • Use of domain knowledge—Knowledge about the domain (e.g., constraints or context) may be used in addition to observed data.

 

3.3 Knowledge Creation in the OODA Loop

The observe-orient-decide-act (OODA) model of command and control introduced earlier in Chapter 1 may now be expanded to show the role of the knowledge-creation processes in the OOD stages of the loop. Figure 3.3 details these information functions in the context of the loop.

Observe functions include technical and human collection of data. Sensing of signals, pixels, and words (signals, imagery, and human intelligence) forms the core of information-based warfare observation.

Orient functions include data mining to discover or learn previously unknown characteristics in the data that can be used as templates for detection and future prediction in data fusion processes.

Decide functions include both automated and human processes. Simple, rapid responses can be automated upon the detection of preset conditions, while the judgment of human commanders is required for more complex, critical decisions that allow time for human intervention.
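A minimal OODA skeleton in Python is sketched below; the function names, random "measurement," and threshold template are placeholders invented to show where knowledge creation sits in the loop, not an implementation from the book.

```python
import random

def observe():
    # Stand-in for technical/human collection: return a raw "measurement".
    return {"signal_strength": random.random()}

def orient(data, template):
    # Stand-in for data fusion against a previously learned (mined) template.
    return data["signal_strength"] > template["threshold"]

def decide(threat_detected):
    # Simple, preset conditions can be automated; anything more complex or
    # critical would be referred to a human commander.
    return "alert_and_refer_to_commander" if threat_detected else "continue_monitoring"

def act(decision):
    print("action:", decision)

template = {"threshold": 0.8}      # assumed to have been learned by an earlier mining step
for _ in range(3):                 # three passes around the loop
    act(decide(orient(observe(), template)))
```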

3.4 Deductive Data Fusion

Data fusion is an adaptive knowledge-creation process in which diverse elements of similar or dissimilar observations (data) are aligned, correlated, and combined into organized and indexed sets (information), which are further assessed to model, understand, and explain (knowledge) the makeup and behavior of a domain under observation.

The process is performed cognitively by humans in daily life (e.g., combining sight, sound, and smells to detect a threat) and has long been applied for manual investigations in the military, intelligence, and law enforcement. In recent decades, the automation of this process has been the subject of intense research and development within the military, particularly to support intelligence and command and control.

Deduction is performed at the data, information, and knowledge levels.

The U.S. DoD Joint Directors of Laboratories (JDL) have established a reference process model of data fusion that decomposes the process into four basic levels of information-refining processes (based upon the concept of levels of information abstraction).

  • Level 1: object refinement—Correlation of all data to refine individual objects within the domain of observation. (The JDL model uses the term object to refer to real-world entities; however, the subject of interest may be a transient event in time as well.)
  • Level 2: situation refinement—Correlation of all objects (information) within the domain to assess the current situation.
  • Level 3: meaning refinement—Correlation of the current situation with environmental and other constraints to project the meaning of the situation (knowledge). (The meaning of the situation refers to its implications to the user, such as threat, opportunity, or change. The JDL adopted the terminology threat refinement for this level; however, we adopt meaning refinement as a more general term encompassing broader applications than military threats.)
  • Level 4: process refinement—Continual adaptation of the fusion process to optimize the delivery of knowledge against a defined knowledge objective.

The technology development in data fusion has integrated disciplines such as the computer sciences, signal processing, pattern recognition, statistical analysis, and artificial intelligence to develop R&D and operational systems.
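As a hedged illustration of level 1 (object refinement) only, the sketch below associates reports from two invented sensors into common objects by nearest-neighbor gating; the JDL model defines the levels, not this particular algorithm, and the gate value and data are assumptions.

```python
from math import hypot

# Reports from two sensors: (report_id, x, y) in arbitrary units.
radar = [("r1", 10.0, 5.0), ("r2", 40.0, 8.0)]
eo = [("e1", 10.3, 5.2), ("e2", 70.0, 1.0)]

def object_refinement(track_a, track_b, gate=1.0):
    """Nearest-neighbor association within a distance gate: reports that fall
    inside the gate are declared to refer to the same real-world object."""
    objects = []
    unmatched_b = list(track_b)
    for rid_a, xa, ya in track_a:
        best = None
        for rid_b, xb, yb in unmatched_b:
            d = hypot(xa - xb, ya - yb)
            if d <= gate and (best is None or d < best[0]):
                best = (d, rid_b, xb, yb)
        if best:
            objects.append({"members": [rid_a, best[1]],
                            "x": (xa + best[2]) / 2, "y": (ya + best[3]) / 2})
            unmatched_b = [b for b in unmatched_b if b[0] != best[1]]
        else:
            objects.append({"members": [rid_a], "x": xa, "y": ya})
    objects += [{"members": [rid], "x": x, "y": y} for rid, x, y in unmatched_b]
    return objects

for obj in object_refinement(radar, eo):
    print(obj)
```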

3.5 Abductive-Inductive Data Mining

Data mining is a knowledge-creation process in which large sets of data (in data warehouses) are cleansed and transformed into organized and indexed sets (information), which are then analyzed to discover hidden and implicit but previously undefined patterns that reveal new understanding of general structure and relationships (knowledge) in the data of a domain under observation.

The object of discovery is a “pattern,” which is defined as a statement in some language, L, that describes relationships in subset Fs of a set of data F such that:

  1. The statement holds with some certainty, c;
  2. The statement is simpler (in some sense) than the enumeration of all facts in Fs [11].

Mined knowledge, then, is formally defined as a pattern that is (1) interesting, according to some user-defined criterion, and (2) certain to a userdefined measure of degree.
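A minimal sketch of that definition follows, using invented "transaction" records: the pattern is a statement about the subset of records containing a given item, its certainty c is the fraction of that subset for which the statement holds, and the interestingness and certainty thresholds are user-defined assumptions.

```python
# Transactions stand in for the data set F; the "pattern" is a statement
# about the subset Fs of records that contain a given antecedent item.
transactions = [
    {"beacon", "relay", "burst"},
    {"beacon", "burst"},
    {"beacon", "relay"},
    {"relay"},
]

def pattern_certainty(antecedent, consequent, data):
    """Certainty c of the statement 'records containing `antecedent` also
    contain `consequent`', measured over the subset Fs."""
    subset = [t for t in data if antecedent in t]          # Fs
    if not subset:
        return 0.0, 0
    holds = sum(1 for t in subset if consequent in t)
    return holds / len(subset), len(subset)

MIN_CERTAINTY = 0.6     # user-defined degree of certainty
MIN_SUPPORT = 2         # user-defined interestingness criterion (subset size)

c, support = pattern_certainty("beacon", "burst", transactions)
if c >= MIN_CERTAINTY and support >= MIN_SUPPORT:
    print(f"mined pattern: beacon -> burst (certainty {c:.2f}, support {support})")
```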

Data mining (also called knowledge discovery) is distinguished from data fusion by two key characteristics.

  • Inference method—Data fusion employs known patterns and deductive reasoning, while data mining searches for hidden patterns using abductive-inductive reasoning.
  • Temporal perspective—The focus of data fusion is retrospective (determining current state based on past data), while data mining is both retrospective and prospective, focused on locating hidden patterns that may reveal predictive knowledge.

While there is no standard reference model for data mining, the general stages of the process as shown in Figure 3.5 illustrate a similarity to the data fusion process [14–16]. Beginning with sensors and sources, the data warehouse is populated with data, and successive functions move the data toward learned knowledge at the top. The sources, queries, and mining processes may be refined, similar to data fusion. The functional stages in the figure are described in the sections that follow.

Data Warehouse

Data from many sources are collected and indexed in the warehouse, initially in the native format of the source. One of the chief issues facing many mining operations is the reconciliation of diverse databases that have different formats (e.g., field and record sizes or parameter scales), incompatible data definitions, and other differences. The warehouse collection process (flow-in) may mediate between these input sources to transform the data before storing in common form [17].

Data Cleansing

The warehoused data must be inspected and cleansed to identify and correct or remove conflicts, incomplete sets, and incompatibilities common to combined databases. Cleansing may include several categories of checks.

  • Uniformity checks verify the ranges of data, determine if sets exceed limits, and verify that format versions are compatible.
  • Completeness checks evaluate the internal consistency of datasets to make sure, for example, that aggregate values are consistent with individual data components (e.g., “verify that total sales is equal to sum of all regional sales, and that data for all sales regions is present”).
  • Conformity checks exhaustively verify that each index and reference exists.
  • Genealogy checks generate and check audit trails to primitive data to permit analysts to “drill down” from high-level information.
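
The following sketch illustrates the uniformity and completeness checks using the regional-sales example above; the record layout, region names, and tolerances are illustrative assumptions.

    # Illustrative cleansing checks on a hypothetical sales record set:
    # a uniformity check on value ranges and a completeness check that the
    # aggregate equals the sum of its regional components.

    EXPECTED_REGIONS = {"north", "south", "east", "west"}

    record = {
        "regional_sales": {"north": 120.0, "south": 95.0, "east": 80.0, "west": 105.0},
        "total_sales": 400.0,
    }

    def uniformity_check(sales, low=0.0, high=1_000_000.0):
        """Verify each regional value falls inside the allowed range."""
        return all(low <= v <= high for v in sales.values())

    def completeness_check(rec, tolerance=1e-6):
        """Verify every region is present and that the total equals the component sum."""
        sales = rec["regional_sales"]
        if set(sales) != EXPECTED_REGIONS:
            return False
        return abs(sum(sales.values()) - rec["total_sales"]) <= tolerance

    print("uniformity:", uniformity_check(record["regional_sales"]))
    print("completeness:", completeness_check(record))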

Data Selection and Transformation

The types of data that will be used for mining are selected on the basis of relevance. For large operations, initial mining may be performed on a small set, then extended to larger sets to check for the validity of abducted patterns. The selected data may then be transformed to organize all data into common dimensions and to add derived dimensions as necessary for analysis.

Data Mining Operations

Mining operations may be performed in a supervised manner, in which the analyst presents the mining process with a selected set of “training” data in which the analyst has manually determined the existence of pattern classes.
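
A minimal sketch of such supervised mining follows, assuming a hypothetical labeled training set: records labeled by the analyst define the pattern classes, and new records are assigned to the class with the nearest centroid (a deliberately simple classification rule chosen for illustration).

    # Hypothetical sketch of supervised mining: the analyst supplies "training"
    # records already labeled with pattern classes, and new records are assigned
    # to the class whose centroid (mean feature vector) is nearest.

    def centroid(rows):
        n = len(rows)
        return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

    def distance_sq(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    # training data labeled by the analyst: (feature vector, class)
    training = [
        ([1.0, 0.2], "routine"), ([1.1, 0.1], "routine"), ([0.9, 0.3], "routine"),
        ([4.8, 3.9], "anomalous"), ([5.2, 4.1], "anomalous"),
    ]

    classes = {}
    for features, label in training:
        classes.setdefault(label, []).append(features)
    centroids = {label: centroid(rows) for label, rows in classes.items()}

    def classify(features):
        return min(centroids, key=lambda label: distance_sq(features, centroids[label]))

    print(classify([5.0, 4.0]))   # -> "anomalous"
    print(classify([1.0, 0.2]))   # -> "routine"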

Discovery Modeling

Prediction or classification models are synthesized to fit the data patterns detected. This is the predictive aspect of mining: modeling the historical data in the database (the past) to provide a model with which to predict the future.
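
As a simple illustration of discovery modeling, the following sketch fits an ordinary least-squares trend line to a hypothetical series of historical observations and extrapolates it one period ahead; the data and the choice of a linear model are assumptions made for the example.

    # Hypothetical sketch of discovery modeling: fit a least-squares trend line to
    # historical observations (the past) and extrapolate it to predict the next period.

    history = [(1, 10.0), (2, 12.1), (3, 13.9), (4, 16.2), (5, 18.0)]  # (period, value)

    n = len(history)
    mean_t = sum(t for t, _ in history) / n
    mean_y = sum(y for _, y in history) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in history)
             / sum((t - mean_t) ** 2 for t, _ in history))
    intercept = mean_y - slope * mean_t

    next_period = 6
    prediction = intercept + slope * next_period
    print(f"predicted value for period {next_period}: {prediction:.1f}")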

Visualization

The human analyst uses visualization tools that allow discovery of interesting patterns in the data. The automated mining operations “cue” the operator to discovered patterns of interest (candidates), and the analyst then visualizes each pattern and verifies whether it indeed contains new and useful knowledge.

On-line analytic processing (OLAP) refers to the manual visualization process in which a data manipulation engine allows the analyst to create data views from the human perspective, and to perform the following categories of functions:

  1. Multidimensional analysis of the data across dimensions, through relationships (e.g., hierarchies), and in perspectives natural to the analyst (rather than inherent in the data);
  2. Transformation of the viewing dimensions or slicing of the multidimensional array to view a subset of interest;
  3. Drill down into the data from high levels of aggregation, downward into successively deeper levels of information;
  4. Reach through from information levels to the underlying raw data, including reaching beyond the information base back to raw data by the audit trail generated in genealogy checking;
  5. Modeling of hypothetical explanations of the data, in terms of trend analysis and extrapolations.
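
The following sketch illustrates the first three OLAP functions—multidimensional analysis, slicing, and drill down—using the pandas library; the tool choice, data, and field names are assumptions made for illustration, not part of the text.

    # Hypothetical OLAP-style sketch with pandas (tool choice is an assumption):
    # build a multidimensional view, slice it to a subset of interest, and drill
    # down from an aggregate to the underlying rows.

    import pandas as pd

    sales = pd.DataFrame({
        "region":  ["north", "north", "south", "south", "east", "east"],
        "quarter": ["Q1", "Q2", "Q1", "Q2", "Q1", "Q2"],
        "product": ["A", "B", "A", "B", "A", "B"],
        "amount":  [120.0, 95.0, 80.0, 105.0, 60.0, 70.0],
    })

    # 1. multidimensional analysis: amount by region x quarter
    cube = sales.pivot_table(values="amount", index="region",
                             columns="quarter", aggfunc="sum")
    print(cube)

    # 2. slicing: view only the Q1 subset of the array
    print(cube["Q1"])

    # 3. drill down: from the "north" aggregate to its individual records
    print(sales[sales["region"] == "north"])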

Refinement Feedback

The analyst may refine the process by adjusting the parameters that control the lower level processes, as well as requesting more or different data on which to focus the mining operations.

3.6 Integrating Information Technologies

It is natural that a full reasoning process would integrate the discovery processes of data mining with the detection processes of data fusion to coordinate learning and application activities.

(Nonliteral target signatures refer to those signatures that extend across many diverse observation domains and are not intuitive or apparent to analysts, but may be discovered only by deeper analysis of multidimensional data.)

3.7 Summary

The automation of the reasoning processes of abduction, induction, and deduction provides the ability to create actionable knowledge (military intelligence) from the large volumes of data collected in IBW. As the value of information increases in all forms of information warfare, so does the importance of developing these reasoning technologies. As the scope of the global information infrastructure (and of global sensing) expands, these technologies are required to extract meaning (and commercial value) from the boundless volumes of data available.

Data fusion and mining processes are still on the initial slope of the technology development curve, and their development is fueled by significant commercial R&D investment. Integrated reasoning tools will ultimately provide robust discovery and detection of knowledge for both business competition and information warfare.

4

Achieving Information Superiority Through Dominant Battlespace Awareness and Knowledge

The objective of information-based warfare is ultimately to achieve military goals with the most efficient application of information resources. Full-spectrum dominance is the term used to describe this effective application of military power by information-based planning and execution of military operations. The central objective is the achievement of information superiority or dominance. Information superiority is the capability to collect, process, and disseminate an uninterrupted flow of information while exploiting or denying an adversary’s ability to do the same. It is that degree of dominance in the information domain that permits the conduct of operations without effective opposition.

Dominant battlespace awareness (DBA)—The understanding of the current situation based, primarily, on sensor observations and human sources;

Dominant battlespace knowledge (DBK)—The understanding of the meaning of the current situation, gained from analysis (e.g., data fusion or simulation).

DBK is dependent upon DBA, and DBA is dependent on the sources of data that observe the battlespace. Both are necessary for information superiority.

4.1 Principles of Information Superiority

Information superiority is a component of an overall strategy for application of military power and must be understood in that context.

Massed effects are achieved by four operating concepts that provide a high degree of synergy from widely dispersed forces that perform precision targeting of high-lethality weapons at longer ranges.

  1. Dominant maneuver—Information superiority will allow agile organizations with high-mobility weapon systems to attack rapidly at an aggressor’s centers of gravity across the full depth of the battlefield. Synchronized and sustained attacks will be achieved by dispersed forces, integrated by an information grid.
  2. Precision engagement—Near-real-time information on targets will permit responsive command and control, and the ability to engage and reengage targets with spatial and temporal precision (“at the right place, just at the right time”).
  3. Focused logistics—Information superiority will also enable efficient delivery of sustainment packages throughout the battlefield, optimizing the logistic process.
  4. Full-dimension protection—Protection of forces during deployment, maneuver, and engagement will provide freedom of offensive actions and can be achieved only if superior information provides continuous threat vigilance.

Information superiority must create an operational advantage to benefit the applied military power and can be viewed as a precondition for these military operations in the same sense that air superiority is viewed as a precondition to certain strategic targeting operations.

DBA provides a synoptic view, in time and space, of the conflict and supplies the commander with a clear perception of the situation and the consequences of potential actions. It dispels the “fog of war” described by Clausewitz.

To be effective, DBA/DBK also must provide a consistent view of the battlespace, distributed to all forces—although each force may choose its own perspective of the view. At the tactical level, a continuous dynamic struggle occurs between sides, and the information state of a side may continuously change from dominance, to parity, to disadvantage.

The information advantage delivered by DBA/DBK has the potential to deliver four categories of operational benefits:

Battlespace preparation—Intelligence preparation of the battlespace (IPB) includes all activities to acquire an understanding of the physical, political, electronic, cyber, and other dimensions of the battlespace. Dimensions such as terrain, government, infrastructure, electronic warfare, and telecommunication/computer networks are mapped to define the structure and constraints of the battlespace [10]. IPB includes both passive analysis and active probing of specific targets to detail their characteristics. Orders of battle and decision-making processes are modeled, vulnerabilities and constraints on adversaries’ operations are identified, and potential offensive responses are predicted. The product of this function is comprehension of the battlespace environment.

Battlespace surveillance and analysis—Continuous observation of the battlespace and analysis of the collective observations provide a detailed understanding of the dynamic states of individual components, events, and behaviors from which courses of action and intents can be inferred. The product is comprehensive state information.

Battlespace visualization—This is the process by which the commander (1) develops a clear understanding of the current state with relation to the enemy and environment, (2) envisions a desired end state that represents mission accomplishment, and then (3) subsequently visualizes the sequence of activities that moves the commander’s force from its current state to the end state. The product of this visualization is human comprehension and a comprehensive plan.

Battlespace awareness dissemination—Finally, the components of awareness and knowledge are distributed to appropriate participants at appropriate times and in formats compatible with their own mission. The product here is available and “actionable” knowledge.

4.1.1 Intelligence, Surveillance, and Reconnaissance (ISR)

Intelligence, the information and knowledge about an adversary obtained through observation, investigation, analysis, or understanding, is the product that provides battlespace awareness.

The process that delivers strategic and operational intelligence products is generally depicted in cyclic form (Figure 4.3), with six distinct phases:

  • Collection planning—Government and military decision makers define, at a high level of information abstraction, the knowledge that is required to make policy, strategy, or operational decisions. The requests are parsed into the information required to deduce the required answers. This list of information is further parsed into the individual elements of data that must be collected to form the required information base. The required data is used to establish a plan of collection, which details the elements of data needed and the targets (people, places, and things) from which the data may be obtained. (A sketch of this decomposition follows the list.)
  • Collection—Following the plan, human and technical sources of data are tasked to perform the collection. Table 4.4 summarizes the major collection sources, which include both open and closed access sources and human and technical means of acquisition.
  • Processing—The collected data is indexed and organized in an information base, and progress on meeting the requirements of the collection plan is monitored. The collection plan may be adjusted on the basis of the data received.
  • Analysis—The organized information base is processed using deductive inference techniques (described earlier in Chapter 3) that fuse all source data in an attempt to answer the requester’s questions.
  • Production—Intelligence may be produced in the format of dynamic visualizations on a war fighter’s weapon system or in formal reports to policymakers. Three categories of formal strategic and tactical intelligence reports are distinguished by their past, present, and future focus: (1) current intelligence reports are news-like reports that describe recent events or indications and warnings; (2) basic intelligence reports provide complete descriptions of a specific situation (order of battle or political situation, for example); and (3) intelligence estimates attempt to predict feasible future outcomes as a result of current situations, constraints, and possible influences [16].
  • Application—The intelligence product is disseminated to the user, providing answers to queries and estimates of accuracy of the product delivered. Products range from strategic intelligence estimates in the form of large hardcopy or softcopy documents for policy makers, to real-time displays that visualize battlespace conditions for a war fighter.
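
A sketch of the collection-planning decomposition described in the first phase above follows; the requirement, information needs, data elements, and source assignments are entirely hypothetical.

    # Hypothetical sketch of collection planning: a knowledge requirement is parsed
    # into the information needed to answer it, then into the individual data
    # elements to be collected and the sources tasked to collect them.

    collection_plan = {
        "knowledge_requirement": "Is facility X expanding production?",
        "information_needs": [
            {
                "question": "Has physical construction activity increased?",
                "data_elements": [
                    {"element": "overhead imagery of facility X", "source": "IMINT"},
                    {"element": "construction material shipments", "source": "OSINT"},
                ],
            },
            {
                "question": "Has the workforce grown?",
                "data_elements": [
                    {"element": "shift-change vehicle counts", "source": "IMINT"},
                    {"element": "local hiring notices", "source": "OSINT"},
                ],
            },
        ],
    }

    # enumerate the tasking list that falls out of the plan
    for need in collection_plan["information_needs"]:
        for item in need["data_elements"]:
            print(f'{item["source"]:6s} <- {item["element"]}')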

 

4.1.1.1 Sources of Intelligence Data

A taxonomy of intelligence data sources (Table 4.4) includes both open-access and closed-access sources.

Two HUMINT sources are required to guide the technical intelligence sources. HUMINT source A provides insight into trucking routes to be used, allowing video surveillance to be focused on the most likely traffic points. HUMINT source B, closely related to crop workers, monitors the movements of harvesting crews, providing valuable cueing for airborne sensors to locate crops and processing facilities. The technical sources also complement the HUMINT sources by providing verification of uncertain cues and hypotheses on which the HUMINT sources may focus their attention.

4.1.1.3 Automated Intelligence Processing

The intelligence process must deal with large volumes of source data, converting a wide range of text, imagery, video, and other media types into processed products. Information technology is providing increased automation of the information indexing, discovery, and retrieval (IIDR) functions for intelligence, especially for the exponentially increasing volumes of global OSINT.

The information flow in an automated or semiautomated facility (depicted in Figure 4.5) requires digital archiving and analysis to ingest continuous streams of data and manage large volumes of analyzed data. The flow can be broken into three phases: capture and compile, preanalysis, and exploitation (analysis).

The preanalysis phase indexes each data item (e.g., article, message, news segment, image, or book chapter) by (1) assigning a reference for storage; (2) generating an abstract that summarizes the content of the item and metadata describing the source, time, reliability-confidence, and relation to other items (“abstracting”); and (3) extracting critical descriptors that characterize the contents (e.g., keywords) or meaning (“deep indexing”) of the item for subsequent analysis. Spatial data (e.g., maps, static imagery, video imagery) must be indexed by spatial context (spatial location) and content (imagery content). The indexing process applies standard subjects and relationships, maintained in a lexicon and thesaurus that is extracted from the analysis information base. Following indexing, data items are clustered and linked before entry into the analysis base. As new items are entered, statistical analyses are performed to monitor trends or events against predefined templates that may alert analysts or cue their focus of attention in the next phase of processing. For example, if analysts are interested in relationships between nations A and B, all reports may be scored for a “tension factor” between those nations, and alerts may be generated on the basis of frequency, score intensity, and sources of incoming data items.
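
The alerting step at the end of this phase might be sketched as follows; the keyword lexicon, weights, and thresholds are illustrative assumptions. Each incoming item is scored for a “tension factor” between nations A and B, and an alert is raised when item frequency and score intensity exceed user-defined thresholds.

    # Hypothetical sketch of the preanalysis alerting step: each incoming item is
    # scored for a "tension factor" between nations A and B from a keyword lexicon,
    # and an alert is raised when both item frequency and score intensity exceed
    # user-defined thresholds.

    TENSION_TERMS = {"mobilization": 3, "border incident": 4, "sanctions": 2, "protest": 1}

    def tension_score(text):
        text = text.lower()
        return sum(weight for term, weight in TENSION_TERMS.items() if term in text)

    incoming_items = [
        "Nation A announces sanctions against Nation B",
        "Border incident reported near the A-B frontier",
        "Nation B begins partial mobilization of reserves",
    ]

    scores = [tension_score(item) for item in incoming_items]

    FREQ_THRESHOLD = 3         # minimum number of scored items in the window
    INTENSITY_THRESHOLD = 2.5  # minimum average score

    relevant = [s for s in scores if s > 0]
    if len(relevant) >= FREQ_THRESHOLD and sum(relevant) / len(relevant) >= INTENSITY_THRESHOLD:
        print("ALERT: rising A-B tension indicated by incoming items", scores)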

The third, exploitation, phase of processing presents data to the human intelligence analyst for examination using visualization tools to bring to focus the most meaningful and relevant data items and their interrelationships.

The categories of automated tools that are applied to the analysis information base include the following [25]:

  • Interactive search and retrieval tools permit analysts to search by topic, content, or related topics using the lexicon and thesaurus subjects.
  • Structured judgment analysis tools provide visual methods to link data, synthesize deductive logic structures, and visualize complex relationships between datasets. These tools enable the analyst to hypothesize, explore, and discover subtle patterns and relationships in large data volumes—knowledge that can be discerned only when all sources are viewed in a common context.
  • Modeling and simulation tools model hypothetical activities, allowing modeled (expected) behavior to be compared to evidence for validation or projection of operations under scrutiny.
  • Collaborative analysis tools permit multiple analysts in related subject areas, for example, to collaborate on the analysis of a common subject.
  • Data visualization tools present synthetic views of data and information to the analyst to permit patterns to be examined and discovered. Table 4.6 illustrates several examples of visualization methods applied to the analysis of large-volume multimedia data.

 

4.2 Battlespace Information Architecture

We have shown that dominant battlespace awareness is achieved by the effective integration of the sensing, processing, and response functions to provide a comprehensive understanding of the battlespace, and possible futures and consequences.

At the lowest tier is the information grid, an infrastructure that allows the flow of information from precision sensors, through processing, to precision forces.

This tier provides the forward-path observe function of the OODA loop, the feedback-path distribution channel that controls the act function of the loop, and the collaborative exchange paths. The grid provides for secure, robust transfer of four categories of information (Table 4.8) across the battlespace: (1) information access, (2) messaging, (3) interpersonal communications, and (4) publishing or broadcasting.

Precision information direction tailors the flow of information on the grid, responding dynamically to the environment to allocate resources (e.g., bandwidth and content) to meet mission objectives. The tier includes the data fusion and mining processes that perform the intelligence-processing functions described in previous sections. These processes operate over the information grid, performing collaborative assessment of the situation and negotiation of resource allocations across distributed physical locations.

The highest tier is effective force management, which interacts with human judgment to provide the following:

• Predictive planning and preemption—Commanders are provided predictions and assessments of likely enemy and planned friendly COAs with expected outcomes and uncertainties. Projections are based upon the information regarding state of forces and environmental constraints (e.g., terrain and weather). This function also provides continuous monitoring of the effectiveness of actions and degree of mission accomplishment. The objective of this capability is to provide immediate response and preemption rather than delayed reaction.

• Integrated force management—Because of the information grid and comprehensive understanding of the battlespace, force operations can be dynamically synchronized across echelons, missions, components, and coalitions. Both defense and offense can be coordinated, as well as the supporting functions of deployment, refueling, airlift, and logistics.

• Execution of time-critical missions—Time-critical targets can be prosecuted by automatic mission-to-target and weapon-to-target pairings, due to the availability (via the information grid) of immediate sensor-derived targeting information. Detection and cueing of these targets permit rapid targeting and attack by passing targeting data (e.g., coordinates, target data, imagery) to appropriate shooters.

Force management is performed throughout the network, with long-term, high-volume joint force management occurring on one scale, and time-critical, low-volume, precision sensor-to-shooter management on another. Figure 4.7 illustrates the distinction between the OODA loop processes of the time-critical sensor-to-shooter mission and the longer term theater battle management mission.

4.3 Summary

Dominant battlespace awareness and knowledge is dependent upon the ability to both acquire and analyze the appropriate data to comprehend the meaning of the current situation, the ability to project possible future courses of action, and the wisdom to know when sufficient awareness is achieved to act.

Part II
Information Operations for Information Warfare

5

Information Warfare Policy, Strategy, and Operations

Preparation for information warfare and the conduct of all phases of information operations at a national level require an overarching policy, an implementing strategy developed by responsible organizations, and the operational doctrine and personnel to carry out the policy.

Information warfare is conducted by technical means, but the set of those means does not define the military science of C2W or netwar. Like any form of competition, conflict, or warfare, there is a policy that forms the basis for strategy, and an implementing strategy that governs the tactical application of the technical methods. While this is a technical book describing those methods, the system implementations of information warfare must be understood in the context of the policy and strategy that guide their employment.

Because of the uncertainty of consequences and the potential impact of information operations on civilian populations, policy and strategy must be carefully developed to govern the use of information operations technologies—technologies that may even provide capabilities before consequences are understood and policies for their use are fully developed.

5.1 Information Warfare Policy and Strategy

The technical methods of information warfare are the means at the bottom of a classical hierarchy that leads from the ends (objectives) of national security policy. The hierarchy proceeds from the policy to an implementing strategy, then to operational doctrine (procedures) and a structure (organization) that applies at the final tactical level the technical operations of IW. The hierarchy “flows down” the security policy, with each successive layer in the hierarchy implementing the security objectives of the policy.

Security Policy

Policy is the authoritative articulation of the position of a nation, defining its interests (the objects being secured), the security objectives for those interests, and its intent and willingness to apply resources to protect those interests. The interests to be secured and the means of security are defined by policy. The policy may be publicly declared or held private, and the written format must be concise and clear to permit the implementing strategy to be traceable to the policy.

Any security policy addressing the potential of information warfare must consider the following premises:

  1. National interest—The national information infrastructure (NII), the object of the information security policy, is a complex structure comprised of public (military and nonmilitary) and private elements. This infrastructure includes the information, processes, and structure, all of which may be attacked. The structure, contents, owners, and security responsibilities must be defined to clearly identify the object being protected. The NII includes abstract and physical property; it does not include human life, although human suffering may be brought on by collateral effects.

  2. New vulnerabilities—Past security due to geographic and political positions of a nation no longer applies to information threats, in which geography and political advantages are eliminated. New vulnerabilities and threats must be assessed because traditional defenses may not be applicable.
  3. Security objective—The desired levels of information security must be defined in terms of integrity, authenticity, confidentiality, nonrepudiation, and availability.
  4. Intent and willingness—The nation must define its intent to use information operations and its willingness to apply those weapons. Questions that must be answered include the following:
    • What actions against the nation will constitute sufficient justification to launch information strikes?
    • What levels of information operations are within the Just War Doctrine? What levels fall outside?
    • What scales of operations are allowable, and what levels of direct and collateral damage resulting from information strikes are permissible?
    • How do information operations reinforce conventional operations?
    • What are the objectives of information strikes?
    • What are the stages of offensive information escalation, and how are information operations to be used to de-escalate crises?

  5. Authority—The security of highly networked infrastructures like the NII requires shared authorities and responsibilities for comprehensive protection; security cannot be assured by the military alone. The authority and roles of public and private sectors must be defined. The national command authority and executing military agencies for offensive, covert, and deceptive information operations must be defined. As in nuclear warfare, the controls for this warfare must provide assurance that only proper authorities can launch offensive actions.
  6. Limitations of means—The ranges and limitations of methods to carry out the policy may be defined. The lethality of information operations, collateral damage, and moral/ethical considerations of conducting information operations as a component of a just war must be defined.
  7. Information weapons conventions and treaties—As international treaties and conventions on the use (first use or unilateral use) of information operations are established, the national commitments to such treaties must be made in harmony with strategy, operations, and weapons development.

The essential elements of security policy… that may now be applied to information warfare by analogy include the following:

Defense or protection—This element includes all defensive means to protect the NII from attack: intelligence to assess threats, indications and warning to alert of impending attacks, protection measures to mitigate the effects of attack, and provisions for recovery and restoration. Defense is essentially passive—the only response to attack is internal.

Deterrence—This element is the threat that the nation has the will and capability to conduct an active external response to attack (or a preemptive response to an impending threat), with the intent that the threat alone will deter an attack. A credible deterrence requires (1) the ability to identify the attacker, (2) the will and capability to respond, and (3) a valued interest that may be attacked [5]. Deterrence includes an offensive component and a dominance (intelligence) component to provide intelligence for targeting and battle damage assessment (BDA) support.

Security Strategy

National strategy is the art and science of developing and using the political, economic, and psychological powers of a nation, together with its armed forces, during peace and war, to secure national objectives.

The strategic process (Figure 5.2) includes both strategy developing activities and a complementary assessment process that continuously monitors the effectiveness of the strategy.

 

A strategic plan will include, as a minimum, the following components:

  • Definition of the missions of information operations (public and private, military and nonmilitary);
  • Identification of all applicable national security policies, conventions, and treaties;
  • Statement of objectives and implementation goals;
  • Organizations, responsibilities, and roles;
  • Strategic plan elements:
    1. Threats, capabilities, and threat projections;
    2. NII structure, owners, and vulnerabilities;
    3. Functional (operational) requirements of IW capabilities (time phased);
    4. Projected gaps in ability to meet national security objectives, and plan to close gaps and mitigate risks;
    5. Organizational plan;
    6. Operational plan (concepts of operations);
    7. Strategic technology plan;
    8. Risk management plan;
  • Performance and effectiveness assessment plan.

5.2 An Operational Model of Information Warfare

Information operations are performed in the context of a strategy that has a desired objective (or end state) that may be achieved by influencing a target (the object of influence).

Information operations are defined by the U.S. Army as

Continuous military operations within the Military Information Environment (MIE) that enable, enhance and protect the friendly force’s ability to collect, process, and act on information to achieve an advantage across the full range of military operations; information operations include interacting with the Global Information Environment (GIE) and exploiting or denying an adversary’s information and decision capabilities

The model recognizes that targets exist in (1) physical space, (2) cyberspace, and (3) the minds of humans. The highest level target of information operations is the human perception of decision makers, policymakers, military commanders, and even entire populations. These are the ultimate targets, and the operational objective is to influence their perception in order to affect their decisions and resulting activities.

For example, the objective perception for targeted leaders may be “overwhelming loss of control, disarray, and loss of support from the populace.”

These perception objectives may be achieved by a variety of physical or abstract (information) means, but the ultimate target and objective is at the purely abstract perceptual level, and the effects influence operational behavior. The influences can cause indecision, delay a decision, or have the effect of biasing a specific decision. The abstract components of this layer include objectives, plans, perceptions, beliefs, and decisions.

Attacks on this intermediate layer can have specific or cascading effects in both the perceptual and physical layers.

5.3 Defensive Operations

The U.S. Defense Science Board performed a study of the defensive operations necessary to implement IW-defense at the national level, and in this section we adapt some of those findings to describe conceptual defensive capabilities at the operational level.

Offensive information warfare is attractive to many [potential adversaries] because it is cheap in relation to the cost of developing, maintaining, and using advanced military capabilities. It may cost little to suborn an insider, create false information, manipulate information, or launch malicious logic-based weapons against an information system connected to the globally shared telecommunication infrastructure. In addition, the attacker may be attracted to information warfare by the potential for large nonlinear outputs from modest inputs

Threat Intelligence, I&W

Essential to defense is the understanding of both the external threats and the internal vulnerabilities that may encounter attack. This understanding is provided by an active intelligence operation that performs external assessments of potential threats [16] and internal assessments of vulnerabilities.

The vulnerability assessment can be performed by analysis, simulation, or testing. Engineering analysis and simulation methods exhaustively search for access paths during normal operations or during unique conditions (e.g., during periods where hardware faults or special states occur). Testing methods employ “red teams” of independent evaluators armed with attack tools to exhaustively scan for access means to a system (e.g., communication link, computer, database, or display) and to apply a variety of measures (e.g., exploitation, disruption, denial of service, or destruction).

Protection Measures (IW-Defense)

Based on assessments of threats and vulnerabilities, operational capabilities are developed to implement protection measures (countermeasures or passive defenses) to deny, deter, limit, or contain attacks against the information infrastructure. All of these means may be adopted as a comprehensive approach, each component providing an independent contribution to overall protection of the infrastructure.

The prevention operations deploy measures at three levels.

Strategic-level activities seek to deter attacks by legal means that ban attacks, impose penalties or punishment on offenders, or threaten reprisals.

Operational security (OPSEC) activities provide security for physical elements of the infrastructure, personnel, and information regarding the infrastructure (e.g., classified technical data).

Technical security (INFOSEC) activities protect hardware, software, and intangible information (e.g., cryptographic keys, messages, raw data, information, knowledge) at the hardware and software levels.

The functions of tactical response include the following:

  • Surveillance—Monitor overall infrastructure status and analyze, detect, and predict effects of potential attacks. Generate alert status reports and warn components of the infrastructure of threat activity and expected events. (A minimal sketch of this alerting function follows the list.)
  • Mode control—Issue controls to components to modify protection levels to defend against incipient threat activities, and to oversee restoration of service in the postattack period.
  • Auditing and forensic analysis—Audit attack activity to determine attack patterns, behavior, and damage for future investigation, effectiveness analysis, offensive targeting, or litigation.
  • Reporting—Issue reports to command authorities.
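
A minimal sketch of the surveillance function’s alerting idea follows, assuming a hypothetical stream of component status events (here, failed-authentication reports) and an invented alert threshold.

    # Hypothetical sketch of the surveillance function: monitor component status
    # reports (here, failed-authentication counts), detect activity above a
    # threshold, and generate an alert status report for that component.

    from collections import Counter

    # hypothetical event stream: (component, event_type)
    events = [
        ("gateway-1", "auth_failure"), ("gateway-1", "auth_failure"),
        ("gateway-1", "auth_failure"), ("mail-relay", "auth_failure"),
        ("gateway-1", "auth_failure"), ("gateway-1", "auth_failure"),
    ]

    ALERT_THRESHOLD = 5   # failures per reporting window

    failures = Counter(comp for comp, kind in events if kind == "auth_failure")
    for component, count in failures.items():
        status = "ALERT" if count >= ALERT_THRESHOLD else "nominal"
        print(f"{component}: {count} auth failures -> {status}")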

5.4 Offensive Operations

Offensive operational capabilities require the capability to identify and specify the targets of attack (targeting) and then to attack those targets. Both capabilities must be available at all three levels of the operational model presented earlier in Section 5.2. In addition to these two, a third offensive capability is required at the highest (perceptual) level of the operational model: the ability to manage the perceptions of all parties in the conflict to achieve the desired end.

Public and civil affairs operations are open, public presentations of the truth (not misinformation or propaganda) in a context and format that achieves perception objectives defined in a perception plan. PSYOPS also convey only truthful messages (although selected “themes” and emphases are chosen to meet objectives) to hostile forces to influence both the emotions and reasoning of decision makers. PSYOPS require careful tailoring of the message (to be culturally appropriate) and selection of the media (to ensure that the message is received by the target population). The message of PSYOPS may be conveyed by propaganda or by actions.

Military deception operations, in contrast, are performed in secrecy (controlled by operational security). These operations are designed to induce hostile military leaders to take operational or tactical actions that are favorable to, and exploitable by, friendly combat operations.

They have the objective of conveying untruthful information to deceive for one of several specific purposes.

  1. Deceit—Fabricating, establishing, and reinforcing incorrect or preconceived beliefs, or creating erroneous illusions (e.g., strength or weakness, presence or nonexistence);
  2. Denial—Masking operations for protection or to achieve surprise in an attack operation;
  3. Disruption—Creating confusion and overload in the decision-making process;
  4. Distraction—Moving the focus of attention toward deceptive actions or away from authentic actions;
  5. Development—Creating a standard pattern of behavior to develop preconceived expectations by the observer for subsequent exploitation.

All of these perception management operations applied in military combat may be applied to netwar, although the media for communication (the global information infrastructure) and means of deceptive activities are not implemented on the physical battlefield. They are implemented through the global information infrastructure to influence a broader target audience.

Intelligence for Targeting and Battle Damage Assessment

The intelligence operations developed for defense also provide support to offensive attack operations, as intelligence is required for four functions.

  1. Target nomination—Selecting candidate targets for attack, estimating the impact if the target is attacked;
  2. Weaponeering—Selecting appropriate weapons and tactics to achieve the desired impact effects (destruction, temporary disruption or denial of service, or reduction in confidence in a selected function); the process considers target vulnerability, weapon effect, delivery accuracy, damage criteria, probability of kill, and weapon reliability;
  3. Attack plan—Planning all aspects of the attack, including coordinated actions, deceptions, routes (physical, information infrastructure, or perception), mitigation of collateral damage, and contingencies;
  4. Battle damage assessment (BDA)—Measuring the achieved impact of the attack to determine effectiveness and plan reattack, if necessary.

Attack (IW-Offense) Operations

Operational attack requires planning, weapons, and execution (delivery) capabilities. The weapons include perceptual, information, and physical instruments employed to achieve the three levels of effect in the operational model.

Offensive operations are often distinguished as direct and indirect means.

Indirect attacks focus on influencing perception by providing information to the target without engaging the information infrastructure of the target. This may include actions to be observed by the target’s sensors, deception messages, electronic warfare actions, or physical attacks. External information is provided to influence perception, but the target’s structure is not affected.

Direct attacks specifically engage the target’s internal information, seeking to manipulate, control, and even destroy the information or the infrastructure of the target.

Offensive information warfare operations integrate both indirect and direct operations to achieve the desired effects on the target. The effectiveness of attacks is determined by security (or stealth), accuracy, and direct and collateral effects.

5.5 Implementing Information Warfare Policy and Strategy

This chapter has emphasized the flow-down of policy to strategy, and strategy to operations, as a logical, traceable process. In theory, this is the way complex operational capabilities must be developed. In the real world, factors such as the pace of technology, a threatening global landscape, and dynamic national objectives force planners to work these areas concurrently—often having a fully developed capability (or threat) without the supporting policy, strategy, or doctrine to enable its employment (or protection from the threat).

6
The Elements of Information Operations

Information operations are the “continuous military operations within the military information environment that enable, enhance, and protect the friendly force’s ability to collect, process, and act on information to achieve an advantage across the full range of military operations; information operations include interacting with the global information environment and exploiting or denying an adversary’s information and decision capabilities”

Some information operations are inherently “fragile” because they are based on subtle or infrequent system vulnerabilities, or because they rely on transient deceptive practices that if revealed, render them useless. Certain elements of IO have therefore been allocated to the operational military, while others (the more fragile ones) have been protected by OPSEC within the intelligence communities to reduce the potential of their disclosure.

6.1 The Targets of Information Operations

The widely used term information infrastructure refers to the complex of sensing, communicating, storing, and computing elements that comprise a defined information network conveying analog and digital voice, data, imagery, and multimedia data. The “complex” includes the physical facilities (computers, links, relays, and node devices), network standards and protocols, applications and software, the personnel who maintain the infrastructure, and the information itself. The infrastructure is the object of both attack and defense; it provides the delivery vehicle for the information weapons of the attacker while forming the warning net and barrier of defense for the defender. Studies of the physical and abstract structure of the infrastructure are therefore essential for both the defender and the targeter alike.

Three infrastructure categories are most commonly identified.

The global information infrastructure (GII) includes the international complex of broadcast communications, telecommunications, and computers that provide global communications, commerce, media, navigation, and network services between NIIs. (Note that some documents refer to the GII as the inclusion of all NIIs; for our purposes, we describe the GII as the interconnection layer between NIIs.)

The national information infrastructure (NII) includes the subset of the GII within the nation, and internal telecommunications, computers, intranets, and other information services not connected to the GII. The NII is directly dependent upon national electrical power to operate, and the electrical power grid is controlled by components of the NII.

The defense information infrastructure (DII) includes the infrastructure owned and maintained by the military (and intelligence) organizations of the nation for purposes of national security. The DII includes command, control, communications, and computation components as well as dedicated administration elements. These elements are increasingly integrated to the NII and GII to use commercial services for global reach but employ INFOSEC methods to provide appropriate levels of security.

The critical infrastructures identified by the U.S. President’s Commission on Critical Infrastructure Protection (PCCIP) include five sectors:

  1. Information and communications (the NII)
  2. Banking and finance
  3. Energy
  4. Physical distribution
  5. Vital human services

Attackers may seek to achieve numerous policy objectives by attacking these infrastructures. To achieve those objectives, numerous intermediate attack goals may be established that can then be achieved by information infrastructure attacks. Examples of intermediate goals might include the following:

  • Reduce security by reducing the ability of a nation to respond in its own national interest;
  • Weaken public welfare by attacking emergency services to erode public confidence in the sustainment of critical services and in the government;
  • Reduce economic strength to reduce national economic competitiveness.

Two capabilities are required for the NII:

  • Infrastructure protection requires defenses to prevent and mitigate the effects of physical or electronic attack.
  • Infrastructure assurance requires actions to ensure readiness, reliability, and continuity—restricting damage and providing for reconstitution in the event of an attack.

 

The conceptual model provides for the following basic roles and responsibilities:

  • Protected information environment—The private sector maintains protective measures (INFOSEC, OPSEC) for the NII supported by the deterrent measures contributed by the government. Deterrence is aimed at influencing the perception of potential attackers, with the range of responses listed in the figure. The private sector also holds responsibility for restoration after attack, perhaps supported by the government in legally declared emergencies.
  • Attack detection—The government provides the intelligence resources and integrated detection capability to provide indications and warnings (strategic) and alerts (tactical) to structured attacks.
  • Attack response—The government must also ascertain the character of the attack, assess motives and actors, and then implement the appropriate response (civil, criminal, diplomatic, economic, military, or informational).
  • Legal protection—In the United States, the government also holds responsibility (under the Bill of Rights, 1791, and derivative statutes cited below) for the protection of individual privacy of information, including oral and wire communications [26]; computers, e-mail, and digitized voice, data, and video [27]; electronic financial records and the transfer of electronic funds [28,29]; and cellular and cordless phones and data communications [30]. This is the basis for civil and criminal deterrence to domestic and international criminal information attacks on the NII.

While the government has defined the NII, the private sector protects only private property, and there is no coordinated protection activity. Individual companies, for example, provide independent protection at levels consistent with their own view of risk, based on market forces and loss prevention.

IO attacks, integrated across all elements of critical infrastructure and targeted at all three levels of the NII, will attempt to destabilize the balance and security of these operations. The objective and methodology are as follows:

  • Achieve perception objectives at the perceptual level, causing leadership to behave in a desired manner.
  • This perception objective is achieved by influencing the components of the critical infrastructure at the application level.
  • This influence on the critical infrastructure is accomplished through attacks on the information infrastructure, which can be engaged at the physical, information, and perceptual layers.

6.1.3 Defense Information Infrastructure (DII)

The DII implements the functional “information grid”. In the United States, the structure is maintained by the Defense Information Systems Agency (DISA), which established the following definition:

The DII is the web of communications networks, computers, software, databases, applications, weapon system interfaces, data, security services, and other services that meet the information-processing and transport needs of DoD users across the range of military operations. It encompasses the following:

  1. Sustaining base, tactical, and DoD-wide information systems, and command, control, communications, computers, and intelligence (C4I) interfaces to weapons systems.
  2. The physical facilities used to collect, distribute, store, process, and display voice, data, and imagery.
  3. The applications and data engineering tools, methods, and processes to build and maintain the software that allow command and control (C2), intelligence, surveillance, reconnaissance, and mission support users to access and manipulate, organize, and digest proliferating quantities of information.
  4. The standards and protocols that facilitate interconnection and interoperation among networks.
  5. The people and assets that provide the integrating design, management, and operation of the DII, develop the applications and services, construct the facilities, and train others in DII capabilities and use.

Three distinct elements of the U.S. DII are representative of the capabilities required by a third-wave nation to conduct information-based warfare.

6.2 Information Infrastructure War Forms

As the GII and connected NIIs form the fundamental interconnection between societies, it is apparent that this will become a principal vehicle for the conduct of competition, conflict, and warfare. The concept of network warfare was introduced and most widely publicized by RAND authors John Arquilla and David Ronfeldt in their classic think piece, “Cyberwar is Coming!”

The relationships between these forms of conflict may be viewed as sequential and overlapping when mapped on the conventional conflict time line that escalates from peace to war before de-escalation to return to peace (Figure 6.7). Many describe netwar as an ongoing process, with degrees of intensity moving from daily unstructured attacks to focused net warfare of increasing intensity until militaries engage in C2W. Netwar activities are effectively the ongoing, “peacetime”-up-to-conflict components of IO.

6.3 Information Operations for Network Warfare

Ronfeldt and Arquilla define netwar as a societal-level ideational conflict at a grand level, waged in part through Internetted modes of communications. It is conducted at the perceptual level, exploiting the insecurities of a society via the broad access afforded by the GII and NIIs. Netwar is characterized by the following qualities that distinguish it from all other forms:

  • Target—Society at large or influential subsets are targeted to manage perception and influence the resulting opinion. Political, economic, and even military segments of society may be targeted in an orchestrated fashion. The effort may be designed to create and foster dissident or opposition groups that may gain connectivity through the available networks.
  • Media—All forms of networked and broadcast information and communications within the NII of a targeted nation state may be used to carry out information operations. The GII may be the means for open access or illicit penetration of the NII.
  • Means—Networks are used to conduct operations, including (1) public influence (open propaganda campaigns, diplomatic measures, and psychological operations); (2) deception (cultural deception and subversion, misinformation); (3) disruption and denial (interference with media or information services); and (4) exploitation (use of networks for subversive activities, interception of information to support targeting).
  • Players—The adversaries in netwar need not be nation states. Nation states and nonstate organizations in any combination may enter into conflict. As networks increasingly empower individuals with information influence, smaller organizations (with critical information resources) may wage effective netwar attacks.

In subsequent studies, Arquilla and Ronfeldt have further developed the potential emergence of netwar as a dominant form of societal conflict in the twenty-first century and have prescribed the necessary preparations for such conflicts. A 1994 U.S. Defense Science Board study concluded that “A large structured attack with strategic intent against the U.S. could be prepared and exercised under the guise of unstructured activities.”

6.3.1 A Representative Netwar Scenario

The U.S. defense community, futurists, and security analysts have hypothesized numerous netwar scenarios that integrate the wide range of pure information weapons, tactics, and media that may be applied by future information aggressors.

6.4 Information Operations for Command and Control Warfare (C2W)

Information operations, escalated to physical engagement against military command and control systems, enter the realm of C2W. C2W is “the integrated use of operations security (OPSEC), military deception, psychological operations (PSYOPS), electronic warfare (EW), and physical destruction, mutually supported by intelligence to deny information to, influence, degrade, or destroy adversary command and control capabilities, while protecting friendly command and control capabilities against such actions”.

C2W is distinguished from netwar in the following dimensions:

Target—Military command and control is the target of C2W. Supporting critical military physical and information infrastructures are the physical targets of C2W.

Media—While the GII is one means of access for attack, C2W is characterized by more direct penetration of an opponent’s airspace, land, and littoral regions for access to defense command and control infrastructure. Weapons are delivered by air, space, naval, and land delivery systems, making the C2W overt, intrusive, and violent. This makes it infeasible to conduct C2W to the degree of anonymity that is possible for netwar.

Means—C2W applies physical and information attack means to degrade (or destroy) the OODA loop function of command and control systems, degrading military leaders’ perceptual control effectiveness and command response. PSYOPS, deception, electronic warfare, and physically destructive means are used offensively, and OPSEC provides protection of the attack planning.

Players—The adversaries of C2W are military organizations of nation states, authorized by their governments.

Ronfeldt and Arquilla emphasize that future C2W will be characterized by a revision in structure, as well as operations, to transform the current view of command and control of military operations:

Waging [C2W] may require major innovations in organizational design, in particular a shift from hierarchies to networks. The traditional reliance on hierarchical designs may have to be adapted to network-oriented models to allow greater flexibility, lateral connectivity, and teamwork across institutional boundaries. The traditional emphasis on command and control, a key strength of hierarchy, may have to give way to emphasis on consultation and coordination, the crucial building blocks of network designs

6.5.1 Psychological Operations (PSYOPS)

PSYOPS are planned operations to convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behaviors of foreign governments, organizations, groups, and individuals. The objective of PSYOPS is to manage the perception of the targeted population, contributing to the achievement of larger operational objectives. Typical military objectives include the creation of uncertainty and ambiguity (confusion) to reduce force effectiveness, the countering of enemy propaganda, the encouragement of disaffection among dissidents, and the focusing of attention on specific subjects that will degrade operational capability. PSYOPS are not synonymous with deception; in fact, some organizations, by policy, present only truthful messages in PSYOPS to ensure that they will be accepted by target audiences.

PSYOPS are based on two dimensions: the communication of a message via appropriate media, and the target population (e.g., enemy military personnel or foreign national populations) to which it is directed.

PSYOP activities begin with creation of the perception objective and development of the message theme(s) that will create the desired perception in the target population (Figure 6.11). Themes are based upon analysis of the psychological implications and an understanding of the target audience’s culture, preconceptions, biases, means of perception, weaknesses, and strengths. Theme development activities require approval and coordination across all elements of government to assure consistency in diplomatic, military, and economic messages. The messages may take the form of verbal, textual messages (left brain oriented) or “symbols” in graphic or visual form (right brain oriented).

6.5.2 Operational Deception

Military deception includes all actions taken to deliberately mislead adversary military decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of a friendly mission [60]. Deception operations in netwar expand the targets to include society at large, and have the objective of inducing the target to behave in a manner (e.g., trust) that contributes to the operational mission.

Deception contributes to the achievement of a perception objective; it is generally not an end objective in itself.

Two categories of misconception are recognized: (1) ambiguity deception aims to create uncertainty about the truth, and (2) misdirection deception aims to create certainty about a falsehood. Deception uses methods of distortion, concealment, falsification of indicators, and development of misinformation to mislead the target to achieve surprise or stealth. Feint, ruse, and diversion activities are common military deceptive actions.

 

Because deception operations are fragile (their operational benefit is denied if detected), operational security must be maintained and the sequencing of deceptive and real (overt) activities must be timed to protect the deception until surprise is achieved. As in PSYOPS, intelligence must provide feedback on the deception effects to monitor the degree to which the deception story is believed.

Deception operations are based on exploitation of bias, sensitivity, and capacity vulnerabilities of human inference and perception (Table 6.9) [61]. These vulnerabilities may be reduced when humans are aided by objective decision support systems, as noted in the table.

Electronic attack can be further subdivided into four fundamental categories: exploitation, deception, disruption or denial, and destruction.

6.5.5 Intelligence

Intelligence operations contribute assessments of threats (organizations or individuals with inimical intent, capability, and plans); preattack warnings; and postattack investigation of events.

Intelligence can be viewed as a defensive operation at the perception level because it provides information and awareness of offensive PSYOPS and deception operations.

Intelligence on information threats must be obtained in several categories:

  1. Organization threat intelligence—Government intelligence activities maintain watches for attacks and focus on potential threat organizations, conducting counterintelligence operations (see next section) to determine intent, organizational structure, capability, and plans.
  2. Technical threat intelligence—Technical intelligence on computer threats and technical capabilities are supplied by the government, academia, or commercial services to users as services.

 

6.5.6 Counterintelligence

Structured attacks require intelligence gathering on the infrastructure targets, and it is the role of counterintelligence to prevent and obstruct those efforts. Network counterintelligence gathers intelligence on adversarial individuals or organizations (threats) deemed to be motivated and potentially capable of launching a network attack.

6.5.7 Information Security (INFOSEC)

We employ the term INFOSEC to encompass the full range of disciplines that provide security protection and survivability of information systems against attack, including the most common disciplines:

  • INFOSEC—Measures and controls that protect the information infrastructure against denial of service and against unauthorized (accidental or intentional) disclosure, modification, or destruction of information infrastructure components, including data. INFOSEC covers all hardware and software functions, characteristics, and features; operational, accountability, and access-control procedures at the central computer facility, remote computers, and terminal facilities; management constraints; physical structures and devices; and personnel and communication controls. Together these constitute the totality of security safeguards needed to provide an acceptable level of risk for the infrastructure and for the data and information it handles.
  • COMSEC—Measures taken to deny unauthorized persons information derived from telecommunications and to ensure the authenticity of such telecommunications. Communications security includes cryptosecurity, transmission security, emission security, and physical security of communications security material and information.
  • TEMPEST—The study and control of spurious electronic signals emitted by electrical equipment.
  • COMPUSEC—Computer security: measures that prevent attackers from achieving objectives through unauthorized access to, or unauthorized use of, computers and networks.
  • System survivability—The capacity of a system to complete its mission in a timely manner, even if significant portions of the system are incapacitated by attack or accident.

 

6.5.8 Operational Security (OPSEC)

Operations security denies adversaries information regarding intentions, capabilities, and plans by providing functional and physical protection of people, facilities, and physical infrastructure components. OPSEC seeks to identify potential vulnerabilities and sources of leakage of critical indicators [80] to adversaries, and to develop measures to reduce those vulnerabilities. While INFOSEC protects the information infrastructure, OPSEC protects information operations (offensive and defensive).

7

An Operational Concept (CONOPS) for Information Operations

Units or cells of information warriors will conduct the information operations that require coordination of technical disciplines to achieve operational objectives. These cells require the support of planning and control tools to integrate and synchronize both the defensive and offensive disciplines introduced in the last chapter.

This chapter provides a baseline concept of operations (CONOPS) for implementing an offensive and defensive joint service IO unit with a conceptual support tool to conduct sustained and structured C2W. We illustrate the operational-level structure and processes necessary to implement information operations—in support of overall military operations—on a broad scale in a military environment.

 

The U.S. Joint Warfighter Science and Technology Plan (1997) identifies 16 essential capabilities necessary to achieve an operational information warfare capability [1]:

  1. Information consistency includes the integrity, protection, and authentication of information systems.
  2. Access controls/security services ensure information security and integrity by limiting access to information systems to authorized personnel only. This includes trusted electronic release, multilevel information security, and policies.
  3. Service availability ensures that information systems are available when needed, often relying upon communications support for distributed computing.
  4. Network management and control ensures the use of reconfigurable robust protocols and control algorithms, self-healing applications, and systems capable of managing distributed computing over heterogeneous platforms and networks.
  5. Damage assessment determines the effectiveness of attacks in both a defensive capacity (e.g., where and how bad) and an offensive capacity (e.g., measure of effectiveness).
  6. Reaction (isolate, correct, act) responds to a threat, intruder, or network or system disturbance. Intrusions must be characterized and decision makers must have the capability to isolate, contain, correct, monitor surreptitiously, and so forth. The ability to correct includes recovery, resource reallocation, and reconstitution.
  7. Vulnerability assessment and planning is an all-encompassing functional capability that includes the ability to realistically assess the joint war fighter’s information system(s) and information processes and those of an adversary. The assessment of war-fighter systems facilitates the use of critical protection functions such as risk management and vulnerability analysis. The assessment of an adversary’s information system provides the basis for joint war-fighter attack planning and operational execution.
  8. Preemptive indication provides system and subsystem precursors or indications of impending attack.
  9. Intrusion detection/threat warning enables detection of attempted and successful intrusions (malicious and nonmalicious) by both insiders and outsiders.
  10. Corruption of adversary information/systems can take many diverse forms, ranging from destruction to undetected change or infection of information. There are two subsets of this function: (1) actions taken on information prior to its entry into an information system, and (2) actions taken on information already contained within an information system.
  11. Defeat of adversary protection includes the defeat of information systems, software and physical information system protection schemes, and hardware.
  12. Penetration of adversary information system provides the ability to intrude or inject desired information into an adversary’s information system, network, or repository. The function includes the ability to disguise the penetration—either the fact that the penetration has occurred or the exact nature of the penetration.
  13. Physical destruction of adversary’s information system physically denies an adversary the means to access or use its information systems. Actions include traditional hard kills as well as actions of a less destructive nature that cause a physical denial of service.
  14. Defeat of adversary information transport defeats any means involved in the movement of information either to or within a given information system. It transcends the classical definition of electronic warfare by encompassing all means of information conveyance rather than just the traditional electrical means.
  15. Insertion of false station/operator into an adversary’s information system provides the ability to inject a false situation or operator into an adversary’s information system.
  16. Disguise of sources of attack encompasses all actions designed to deny an adversary any knowledge of the source of an information attack or the source of information itself. Disguised sources, which deny the adversary true information sources, often limit the availability of responses, thereby delaying correction or retaliation.

Concept of Operations (CONOPS) for Information Operations Support System (IOSS)

Section 1 General

1.1 Purpose

This CONOPS describes an information operations support system (IOSS) comprised of integrated and automated tools to plan and conduct offensive and defensive information operations.

This CONOPS is a guidance document, does not specify policy, and is intended for audiences who need a quick overview or orientation to information operations (IO).

1.2 Background

Information operations provide the full-spectrum means to achieve information dominance by: (1) monitoring and controlling the defenses of a force’s information infrastructure, (2) planning activities to manage an adversary’s perception, and (3) coordinating PSYOPS, deception, and intrusive physical and electronic attacks on the adversary’s information infrastructure.

This CONOPS provides an overview of the methodology to implement an IO cell supported by the semiautomated and integrated IOSS tools to achieve information dominance objectives. The following operational benefits accrue:

  • Synchronization—An approach to synchronize all aspects of military operations (intelligence, OPSEC, INFOSEC, PSYOPS, deception, information, and conventional attack) and to deconflict adverse actions between disciplines;
  • Information sharing—The system permits rapid, adaptive collaboration among all members of the IO team;
  • Decision aiding—An automated process to manage all IO data, provide multiple views of the data, provide multiple levels of security, and aid human operators in decision making.

 

3.3 Operational Process

Defensive planning is performed by the OPSEC and INFOSEC officers, who maintain a complete model of friendly networks and status reports on network performance. Performance and intrusion detection information is used to initiate defensive actions (e.g., alerts, rerouting, service modification, initiation of protection or recovery modes). The defensive process is continuous and dynamic, and adapts security levels and access controls to maintain and manage the level of accepted risk established at the operational level.
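This adaptive portion of the defensive loop can be pictured as a simple policy that maps detection indicators to protective actions while keeping estimated risk near the accepted level. The following Python sketch is illustrative only; the severity labels, action names, weights, and threshold are assumptions and do not describe the IOSS itself.

```python
# Minimal sketch of an adaptive defensive-response policy (hypothetical
# severities, actions, and thresholds; not the IOSS implementation).

ACCEPTED_RISK = 0.3  # risk level established at the operational level (assumed 0..1 scale)

# Hypothetical mapping from intrusion-detection severity to defensive actions.
RESPONSE_TABLE = {
    "low":      ["log_event"],
    "medium":   ["alert_operator", "raise_audit_level"],
    "high":     ["alert_operator", "reroute_traffic", "enable_protection_mode"],
    "critical": ["alert_operator", "isolate_segment", "initiate_recovery"],
}

def assess_risk(alerts):
    """Crude aggregate risk estimate from recent alert severities (illustrative only)."""
    weights = {"low": 0.05, "medium": 0.15, "high": 0.35, "critical": 0.6}
    return min(1.0, sum(weights[a] for a in alerts))

def defensive_actions(alerts):
    """Select actions, tightening controls when estimated risk exceeds the accepted level."""
    actions = []
    for severity in alerts:
        actions.extend(RESPONSE_TABLE[severity])
    if assess_risk(alerts) > ACCEPTED_RISK:
        actions.append("tighten_access_controls")   # adapt security level
    return sorted(set(actions))

if __name__ == "__main__":
    print(defensive_actions(["medium", "high"]))
```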

The flow of offensive planning activities performed by the IOSS is organized by the three levels of planning:

• Perceptual level—The operational plan defines the intent of policy and operational objectives. The operational and perception plans, and desired behaviors of the perception targets (audiences), are defined at this level.

• Information infrastructure level—The functional measures for achieving perception goals, in the perception target’s information infrastructure, are developed at this level.

• Physical level—The specific disciplines that apply techniques (e.g., physical attack, network attack, electronic support) are tasked at this level.

The IOSS performs the decide function of the OODA loop for information operations; its functional flow is organized to separate the observe/orient functions that provide its inputs from the operational orders (OPORDS) that initiate attack actions. The sequence of planning activities proceeds from the perceptual to the physical level, performing the flow-down operations defined in the following subsections.

3.3.1 Perception Operations

The operational objectives and current situation are used to develop the desired perception objectives, which are balanced with all other operational objectives.

3.3.2 Information Infrastructure Operations

At this level, the targeted information infrastructure (II) (at all ISO levels) is analyzed and tactics are developed to achieve the attack objectives by selecting the elements of the II to be attacked (targeted). The product is a prioritized high-value target (HVT) list. Using nodal analysis, targets are nominated for attack by the desired functional effect to achieve the flowed-down objectives: denial, disruption, deceit, exploitation, or destruction.

Once the analysis develops an optimized functional model of an attack approach that achieves the objectives, weapons (techniques) are selected to accomplish the functional effects. This weaponeering process pairs the techniques (weapons) to targets (e.g., links, nodes, processing, individual decision makers). It also considers the associated risks due to attack detection and collateral damage, and it assigns the intelligence collection actions necessary to perform battle damage assessment (BDA) to verify the effectiveness of the attack.
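The nodal-analysis and weaponeering steps described above amount to a prioritization and pairing problem. The sketch below is a minimal illustration under assumed inputs: the target names, technique names, risk figures, and the simple value-minus-risk scoring rule are all hypothetical and are not taken from the CONOPS.

```python
# Illustrative sketch of HVT prioritization and technique-to-target pairing.
# All names, values, and the scoring rule are hypothetical.

targets = [  # candidate II elements from nodal analysis
    {"name": "C2 router",     "value": 0.9, "effect": "disruption"},
    {"name": "IPB database",  "value": 0.7, "effect": "deceit"},
    {"name": "SATCOM uplink", "value": 0.8, "effect": "denial"},
]

techniques = {  # candidate techniques with assumed detection/collateral-damage risk
    "network_flood":     {"effects": {"denial", "disruption"}, "risk": 0.2},
    "false_data_inject": {"effects": {"deceit"},               "risk": 0.4},
    "electronic_jam":    {"effects": {"denial"},               "risk": 0.3},
}

def pair(targets, techniques):
    """Pair each target with the feasible technique of highest (value - risk) score."""
    plan = []
    for tgt in sorted(targets, key=lambda t: t["value"], reverse=True):  # HVT priority order
        candidates = [
            (tgt["value"] - tech["risk"], name)
            for name, tech in techniques.items()
            if tgt["effect"] in tech["effects"]
        ]
        if candidates:
            score, name = max(candidates)
            plan.append((tgt["name"], name, round(score, 2)))
    return plan

if __name__ == "__main__":
    for target, technique, score in pair(targets, techniques):
        print(f"{target:<14} <- {technique:<17} score={score}")
```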

3.3.3 Physical Level

At the physical level, the attacking disciplines plan and execute the physical-level attacks.

Section 4 Command Relationships

4.3 Intelligence Support

IOSS requires intelligence support to detect, locate, characterize, and map the threat-critical infrastructure at three levels.

Section 5 Security

5.1 General

IO staff operations will be implemented and operated at multiple levels of security (MLS). Security safeguards consist of administrative, procedural, physical, operational, and/or environmental, personnel, and communications security; emanation security; and computer security (i.e., hardware, firmware, network, and software), as required.

Section 6 Training

6.1 General

Training is the key to successful integration of IO into joint military operations. Training of IO battle staff personnel is required at the force and unit levels, and is a complex task requiring mastery of the related disciplines of intelligence, OPSEC, PSYOPS, deception, electronic warfare, and destruction.

6.2 Formal Training

The fielding and operation of an IO cell or battle staff may require formal courses or unit training for the diverse personnel required. Training audiences include instructors, IO operators, IO battle staff cadre, system support, a broad spectrum of instructors in related disciplines, and senior officers.

7.2 Select Bibliography

Command and Control Warfare Policy

CJCSI 3210.01, Joint Information Warfare Policy, Jan. 2, 1996.
DOD Directive S-3600.1, Information Warfare.

CJCSI 3210.03, Joint Command and Control Warfare Policy (U), Mar. 31, 1996.
JCS Pub 3-13.1, Joint Command and Control Warfare (C2W) Operations, Feb. 7, 1996.

Information Operations

“Information Operations,” Air Force Basic Doctrine (DRAFT), Aug. 15, 1995.
FM 100-6, Information Operations, Aug. 27, 1997.
TRADOC Pam 525-69, Concept for Information Operations, Aug. 1, 1995.

Intelligence

Joint Pub 2-0, Doctrine for Intelligence Support to Joint Operations, May 5, 1995.
AFDD 50, Intelligence, May 1996.
FM 34-130, Intelligence Preparation of the Battlefield, July 8, 1994.

PSYOPS, Civil and Public Affairs

JCS Pub 3-53, Doctrine for Joint Psychological Operations.
AFDD 2.5-5, Psychological Operations, Feb. 1997.

FM 33-1, Psychological Operations, Feb. 18, 1993.
FM 41-10, Civil Affairs Operations, Jan. 11, 1993.
FM 46-1, Public Affairs Operations, July 23, 1992.

Operational Deception

CJCSI 3211.01, Joint Military Deception, June 1, 1993.
JCS Pub 3-58, Joint Doctrine for Operational Deception, June 6, 1994.
AR 525-21, Battlefield Deception Policy, Oct. 30, 1989.
FM 90-2, Battlefield Deception [Tactical Cover and Deception], Oct. 3, 1988.
FM 90-2A (C), Electronic Deception, June 12, 1989.

Information Attack

FM 34-1, Intelligence and Electronic Warfare Operations, Sept. 27, 1994.

FM 34-37, Echelon Above Corps Intelligence and Electronic Warfare Operations, Jan. 15, 1991.

FM 34-36, Special Intelligence Forces Intelligence and Electronic Warfare Operations, Sept. 30, 1991.

Operational Security (OPSEC)

DOD Directive 5205.2, Operations Security Program, July 7, 1983.
Joint Pub 3-54, Joint Doctrine for Operations Security.

AFI 10-1101, (Air Force), Operational Security Instruction.

AR 530-1, (Army) Operations Security, Mar. 3, 1995.

Information Security (INFOSEC)

DoD 5200.1-R, Information Security Program Regulation.
AFPD 31-4, (Air Force) Information Security, Aug. 1997.
AR 380-19, (Army) Information System Security, Aug. 1, 1990.

8
Offensive Information Operations

This chapter introduces the functions, tactics, and techniques of malevolence against information systems. Offensive information operations target human perception, information that influences perception, and the physical world that is perceived. The avenues of these operations are via perceptual, information, and physical means.

Offensive information operations are malevolent acts conducted to meet the strategic, operational, or tactical objectives of authorized government bodies; legal, criminal, or terrorist organizations; corporations; or individuals. The operations may be legal or illegal, ethical or unethical, and may be conducted by authorized or unauthorized individuals. The operations may be performed covertly, without notice by the target, or they may be intrusive, disruptive, and even destructive. The effects on information may bring physical results that are lethal to humans.

Offensive operations are uninvited, unwelcome, unauthorized, and detrimental to the target; therefore, we use the term attack to refer to all of these operations.

Security design must be preceded by an understanding of the attacks it must face.

Offensive information attacks have two basic functions: to capture or to affect information. (Recall that information may refer to processes or to data/information/knowledge content.) These functions are performed together to achieve the higher level operational and perceptual objectives. In this chapter, we introduce the functions, measures, tactics, and techniques of offensive operations.

  • Functions—The fundamental functions (capture and affect) are used to effectively gain a desired degree of control of the target’s information resources. Capturing information is an act of theft of a resource if captured illegally, or technical exploitation if the means is not illicit. The object of capture may be, for example, a competitor’s data, an adversary’s processed information, another’s electronic cash (a knowledge-level resource with general liquidity), or conversations that provide insight into a target’s perception. Affecting information is an act of intrusion with intent to cause unauthorized effects, usually harmful to the information owner. The functional processes that capture and affect information are called offensive measures, designed to penetrate operational and defensive security measures of the targeted information system.
  • Tactics—The operational processes employed to plan, sequence, and control the countermeasures of an attack are the attack tactics. These tactics consider tactical factors, such as attack objectives; desired effects (e.g., covertness; denial or disruption of service; destruction, modification, or theft of information); degree of effects; and target vulnerabilities.
  • Techniques—The technical means of capturing and affecting the information of humans and of their computers, communications, and supporting infrastructures are described as techniques. In addition to these three dimensions, other aspects, depending upon the application, may characterize information attacks.
  • Motive—The attacker’s motive may be varied (e.g., ideological, revenge, greed, hatred, malice, challenge, theft). Though not a technical characteristic, motive is an essential dimension to consider in forensic analysis of attacks.
  • Invasiveness—Attacks may be passive or active. Active attacks invade and penetrate the information target, while passive attacks are noninvasive, often observing behaviors, information flows, timing, or other characteristics. Most cryptographic attacks may be considered passive relative to the sender and receiver processes, but active and invasive to the information message itself.
  • Effects—The effects of attacks may vary from harassment to theft, from narrow, surgical modification of information to large-scale cascading of destructive information that brings down critical societal infrastructure.
  • Ethics and legality—The means and the effects may be legal or illegal, depending upon current laws. The emerging opportunities opened by information technology have outpaced international and U.S. federal laws to define and characterize legal attacks. Current U.S. laws, for example, limit DoD activities in peacetime. Traditional intelligence activities are allowed in peacetime (capture information), but information attacks (affect information) form a new activity (not necessarily lethal, but quite intrusive) not covered by law. Offensive information operations that affect information enable a new range of nonlethal attacks that must be described by new laws and means of authorization, even as blockades, embargoes, and special operations are treated today. These laws must define and regulate the authority for transitional conflict operations between peace and war and must cover the degree to which “affect” operations may access nonmilitary infrastructure (e.g., commercial, civilian information). The laws must also regulate the scope of approved actions, the objective, and the degree to which those actions may escalate to achieve objectives. The ethics of these attacks must also be considered, understanding how the concepts of privacy and ownership of real property may be applied to the information resource. Unlike real property, information is a property that may be shared, abused, or stolen without evidence or the knowledge of the legitimate owner.

8.1 Fundamental Elements of Information Attack

Before introducing tactics and weapons, we begin the study of offense with a complete taxonomy of the most basic information-malevolent acts at the functional level. This taxonomy of attack countermeasures may be readily viewed in an attack matrix formed by the two dimensions:

  • Target level of the IW model: perceptual, information, or physical;
  • Attack category: capture or affect.

The attack matrix (Figure 8.1) is further divided into the two avenues of approach available to the attacker:

Direct, or internal, penetration attacks—Where the attacker penetrates [1] a communication link, computer, or database to capture and exploit internal information, or to modify information (add, insert, delete) or install a malicious process;

Indirect, or external, sensor attacks—Where the attacker presents open phenomena to the system’s sensors or information to sources (e.g., media, Internet, third parties) to achieve counterinformation objectives. These attacks include insertion of information into sensors or observation of the behavior of sensors or links interconnecting fusion nodes.
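The attack matrix itself can be encoded as a small data structure keyed by the two dimensions, with the avenue of approach recorded per entry. The sketch below is a hypothetical encoding; its example entries are placeholders and do not reproduce Figure 8.1.

```python
# Hypothetical encoding of the attack matrix: (target level, attack category) -> example
# actions, each tagged with its avenue of approach (direct/internal vs. indirect/external).

LEVELS = ("perceptual", "information", "physical")
CATEGORIES = ("capture", "affect")

attack_matrix = {
    ("perceptual", "capture"):  [("elicit beliefs via open sources", "indirect")],
    ("perceptual", "affect"):   [("PSYOPS message placement", "indirect")],
    ("information", "capture"): [("database exfiltration", "direct")],
    ("information", "affect"):  [("malicious process installation", "direct"),
                                 ("false data presented to sensors", "indirect")],
    ("physical", "capture"):    [("theft of storage media", "direct")],
    ("physical", "affect"):     [("destruction of a network node", "direct")],
}

def cell(level, category):
    """Return the example actions recorded for one cell of the matrix."""
    assert level in LEVELS and category in CATEGORIES
    return attack_matrix.get((level, category), [])

if __name__ == "__main__":
    for action, avenue in cell("information", "affect"):
        print(f"{action} [{avenue}]")
```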

In C2W, indirect attacks target the observation stage of the OODA loop, while direct attacks target the orient stage of the loop [2]. The attacker may, of course, orchestrate both of these means in a hybrid attack in which the two actions are mutually supportive.

Two categories of attacks that affect information are defined by the object of attack.

Content attacks—The content of the information in the system may be attacked to disrupt, deny, or deceive the user (a decision maker or process). In C2W information operations, attacks may be centered on changing or degrading the intelligence preparation of the battlefield (IPB) databases, for example, to degrade their use in a future conflict.

Temporal attacks—The information process may be affected in such a way that the timeliness of information is attacked. Either a delay in receipt of data (to delay decision making or desynchronize processes) or deceptive acceleration by insertion of false data characterizes these attacks.

8.2 The Weapons of Information Warfare

8.3.1 Network Attack Vulnerabilities and Categories

Howard has developed a basic taxonomy of computer and network attacks for use in analyzing security incidents on the Internet [5]. The taxonomy is based on characterizing the attack process (Figure 8.2) by five basic components.

  1. Attackers—Six categories of attackers are identified (and motivations are identified separately, under objectives): hackers, spies, terrorists, corporate, professional criminals, and vandals.
  2. Tools—The levels of sophistication of use of tools to conduct the attack are identified.
  3. Access—The access to the system is further categorized by four branches.

Vulnerability exploited—Design, configuration (of the system), and implementation (e.g., software errors or bugs [7]) are all means of access that may be used.

Level of intrusion—The intruder may obtain unauthorized access, but may also proceed to unauthorized use, which has two possible subcategories of use.

 

Use of processes—The specific process or service used by the unauthorized user is identified as this branch of the taxonomy (e.g., SendMail, TCP/IP).

Use of information—Static files in storage or data in transit may be the targets of unauthorized use.

  4. Results—Four results are considered: denial or theft of service, or corruption or theft (disclosure) of information.
  5. Objectives—Finally, the objective of the attack (often closely correlated to the attacker type) is the last classifying property.

(This taxonomy is limited to network attacks using primarily information layer means, and can be considered a more refined categorization of the attacks listed in the information-layer row of the attack matrix presented earlier in Section 8.1.)
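For incident analysis, the five components of this taxonomy map naturally onto a simple record type. The sketch below is one possible representation; the field values in the example are hypothetical, and only the component and category names come from the taxonomy.

```python
# Sketch of an incident record following the five-component taxonomy
# (attacker, tools, access, results, objectives). Example values are hypothetical.

from dataclasses import dataclass

ATTACKERS = {"hacker", "spy", "terrorist", "corporate", "professional criminal", "vandal"}
RESULTS = {"denial of service", "theft of service", "corruption of information",
           "theft (disclosure) of information"}

@dataclass
class Incident:
    attacker: str     # one of ATTACKERS
    tools: str        # level/type of tool used
    access: str       # vulnerability exploited, level of intrusion, process/information used
    result: str       # one of RESULTS
    objective: str    # attacker's objective (often correlated with attacker type)

    def validate(self):
        if self.attacker not in ATTACKERS:
            raise ValueError(f"unknown attacker category: {self.attacker}")
        if self.result not in RESULTS:
            raise ValueError(f"unknown result category: {self.result}")

if __name__ == "__main__":
    incident = Incident(
        attacker="hacker",
        tools="script exploiting an implementation bug",
        access="unauthorized use of SendMail against stored files",
        result="theft (disclosure) of information",
        objective="challenge/status",
    )
    incident.validate()
    print(incident)
```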


8.4 Command and Control Warfare Attack Tactics

In military C2W, the desired attack effects are degradation of the opponent’s OODA loop operations (ineffective or untimely response), disruption of decision-making processes, discovery of vulnerabilities, damage to morale, and, ultimately, devastation of the enemy’s will to fight.

Command and control warfare has often been characterized as a war of OODA loops where the fastest, most accurate loop will issue the most effective actions [20]. The information-based warfare concepts introduced in Chapter 3 (advanced sensors, networks, and fusion systems) speed up the loop, improving information accuracy (content), visualization and dissemination (delivery), and update rates (timeliness).

Offensive information operations exploit the vulnerabilities described here and in Section 8.3.1.

Attacks exploit vulnerabilities in complex C4I systems to counter security and protection measures, as well as common human perceptual, design, or configuration vulnerabilities that include the following:

  • Presumption of the integrity of observations and networked reports;
  • Presumption that observation conflicts are attributable only to measurement error;
  • Presumption that lack of observation is equivalent to nondetection rather than denial;
  • Absence of measures to attribute conflict or confusion to potential multisource denial and spoofing.

Four fusion-specific threat mechanisms can be defined to focus information attacks on the fusion process:

  • Exploitation threats seek to utilize the information obtained by fusion systems or the fusion process itself to benefit the adversary. Information that can be captured from the system covertly can be used to attack the system, to monitor success of IW attacks, or to support other intelligence needs.
  • Deception threats to fusion systems require the orchestration of multiple stimuli and knowledge of fusion processes to create false data and false fusion decisions, with the ultimate goal of causing improper decisions by fusion system users. Deception of a fusion system may be synchronized with other deception plots, including PSYOPS and military deceptions to increase confidence in the perceived plot.
  • Disruption of sensor fusion systems denies the fusion process the necessary information availability or accuracy to provide useful decisions. Jamming of sensors, broadcast floods to networks, overloads, and soft or temporary disturbance of selected links or fusion nodes are among the techniques employed for such disruption.
  • Finally, soft- and hard-kill destruction threats include a wide range of physical weapons, all of which require accurate location and precision targeting of the fusion node.

The matrix provides a tool to consider each individual category of attack against each element of the system.

8.5 IW Targeting and Weaponeering Considerations

Structured information strikes (netwar or C2W) require functional planning before coordinating tactics and weapons for all sorties at the perceptual, information, and physical levels. The desired effects, whether a surgical strike on a specific target or cascading effects on an infrastructure, must be defined and the uncertainty in the outcome must also be determined. Munitions effects, collateral damage, and means of verifying the functional effects achieved must be considered, as in physical military attacks.

8.8 Offensive Operations Analysis, Simulation, and War Gaming

The complexity of structured offensive information operations, and the utility of their actions on decision makers, are not fully understood or completely modeled. Analytic models, simulations, and war games will provide increasing insight into the effectiveness of these unproven means of attack. Simulations and war games must ultimately evaluate the utility of complex, coordinated, offensive information operations using closed loop models (Figure 8.11) that follow the OODA loop structure presented in earlier chapters to assess the influence of attacks on networks, information systems, and decision makers.

Measures of performance and effectiveness are used to assess the quantitative effectiveness of IW attacks (or the effectiveness of protection measures to defend against them). The measures are categorized into two areas.

• Performance metrics quantify specific technical values that measure the degree to which attack mechanisms affect the targeted information source, storage, or channel.

• Effectiveness metrics characterize the degree to which IW objectives impact the mission functions of the targeted system.

8.9 Summary

The wide range of offensive operations, tactics, and weapons that threaten information systems demands serious attention to security and defense. The measures described in this chapter are considered serious military weapons. The U.S. director of central intelligence (DCI) has testified that these weapons must be considered with other physical weapons of mass destruction, and that the electron should be considered the ultimate precision guided weapon [65].

9
Defensive Information Operations

This chapter provides an overview of the defensive means to protect the information infrastructure against the attacks enumerated in the last chapter. Defensive IO measures are referred to as information assurance.

Information assurance comprises information operations that protect and defend information and information systems by ensuring their availability, integrity, authentication, confidentiality, and nonrepudiation. This includes providing for the restoration of information systems by incorporating protection, detection, and reaction capabilities.

Information assurance includes the following component properties and capabilities (a minimal sketch of the integrity and authentication properties follows this list):

  • Availability provides assurance that information, services, and resources will be accessible and usable when needed by the user.
  • Integrity assures that information and processes are secure from unauthorized tampering (e.g., insertion, deletion, destruction, or replay of data) via methods such as encryption, digital signatures, and intrusion detection.
  • Authentication assures that only authorized users have access to information and services on the basis of controls: (1) authorization (granting and revoking access rights), (2) delegation (extending a portion of one entity’s rights to another), and (3) user authentication (reliable corroboration of a user and of data origin; this is a mutual property when each of two parties authenticates the other).
  • Confidentiality protects the existence of a connection, traffic flow, and information content from disclosure to unauthorized parties.
  • Nonrepudiation assures that transactions are immune from false denial of sending or receiving information by providing reliable evidence that can be independently verified to establish proof of origin and delivery.
  • Restoration assures information and systems can survive an attack and that availability can be resumed after the impact of an attack.
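As noted above, a minimal sketch of the integrity and authentication properties is given here. It uses Python's standard hmac module under an assumed shared secret; a shared-key scheme of this kind cannot by itself provide nonrepudiation, which requires asymmetric signatures and independently verifiable evidence.

```python
# Minimal integrity / data-origin authentication check with a keyed hash (HMAC-SHA256).
# A shared key is assumed; this does NOT provide nonrepudiation (no asymmetric signature).

import hashlib
import hmac

SHARED_KEY = b"assumed-shared-secret"   # hypothetical; key distribution is out of scope

def protect(message: bytes) -> bytes:
    """Sender side: compute an authentication tag over the message."""
    return hmac.new(SHARED_KEY, message, hashlib.sha256).digest()

def verify(message: bytes, tag: bytes) -> bool:
    """Receiver side: recompute the tag and compare in constant time."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    msg = b"OPORD 17: reroute traffic at 0400Z"   # hypothetical message
    tag = protect(msg)
    print(verify(msg, tag))                    # True: intact and from a key holder
    print(verify(msg + b" (modified)", tag))   # False: integrity violation detected
```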

While these asymmetric threats (e.g., a lone teenager versus a large corporation or DoD) have captured significant attention, the more significant threats come in two areas.

• Internal threats (structured or unstructured)—Any insider with access to the targeted system poses a serious threat. Perverse insiders, be they disgruntled employees, suborned workers, or inserted agents, pose an extremely difficult and lethal threat. Those who have received credentials for system access (usually by a process of background and other assessments) are deemed trustworthy. Protection from malicious acts by these insiders requires high-visibility monitoring of activities (a deterrent measure), frequent activity audits, and high-level physical security and internal OPSEC procedures (defensive measures). While continuous or periodic malicious actions may be detected by network behavior monitoring, the insider inserted to perform a single (large) destructive act is extremely difficult to detect before that act. OPSEC activities provide critical protection in these cases, due to the human nature of the threat. This threat is the most difficult, and its risk should not be understated because of the greater attention often paid to technical threats.

• Structured external threats—Attackers with deep technical knowledge of the target, strong motivation, and the capability to mount combination attacks using multiple complex tactics and techniques also pose a serious threat. These threats may exploit subtle, even transitory, network vulnerabilities (e.g., configuration holes) and apply exhaustive probing and attack paths to achieve their objectives. While most computer vulnerabilities can be readily corrected, the likelihood that every computer in a network is free of exposed vulnerabilities at any given time is low. Structured attackers have the potential to locate even transient vulnerabilities, to exploit the momentary opportunity to gain access, and then to expand the penetration to achieve the desired malevolent objective of the attack.

9.1 Fundamental Elements of Information Assurance

The definition of information assurance includes six properties, of which three are considered to be the fundamental properties from which all others derive [20].

• Confidentiality (privacy)—Assuring that information (internals) and the existence of communication traffic (externals) will be kept secret, with access limited to appropriate parties;

  • Integrity—Assuring that information will not be accidentally or maliciously altered or destroyed, that only authenticated users will have access to services, and that transactions will be certified and unable to be subsequently repudiated (the property of nonrepudiation);
  • Availability—Assuring that information and communications services will be ready for use when expected (includes reliability, the assurance that systems will perform consistently and at an acceptable level of quality; survivability, the assurance that service will exist at some defined level throughout an attack; and restoration to full service following an attack).

These fundamentals meet the requirements established for the U.S. NII [21] and the international community for the GII.

9.2 Principles of Trusted Computing and Networking

Traditional INFOSEC measures applied to computing provided protection from the internal category of attacks.

For over a decade, the TCSEC standard has defined the criteria for four divisions (or levels) of trust, each successively more stringent than the level preceding it.

  • D: Minimal protection—Security is based on physical and procedural controls only; no security is defined for the information system.
  • C: Discretionary protection—Users (subjects), their actions, and data (objects) are controlled and audited. Access to objects is restricted based upon the identity of subjects.
  • B: Mandatory protection—Subjects and objects are assigned sensitivity labels (that identify security levels) that are used to control access by an independent reference monitor that mediates all actions by subjects.
  • A: Verified protection—Highest level of trust, which includes formal design specifications and verification against the formal security model.

The TCSEC defines requirements in four areas: security policy, accountability, assurance, and documentation.

Most commercial computer systems achieve C1 or C2 ratings, while A and B ratings are achieved only by dedicated security design and testing with those ratings as a design objective.
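The mandatory-protection concept (division B above) is often illustrated as a reference monitor that compares sensitivity labels before mediating each access. The sketch below assumes hypothetical levels, subjects, and objects and uses the simple "no read up, no write down" dominance rule; TCSEC itself states requirements, not code.

```python
# Sketch of a label-based reference monitor ("no read up, no write down").
# Levels, subjects, and objects are hypothetical.

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

subjects = {"analyst": "SECRET", "clerk": "CONFIDENTIAL"}
objects  = {"ops_plan": "TOP SECRET", "phone_list": "UNCLASSIFIED", "intel_summary": "SECRET"}

def mediate(subject: str, obj: str, action: str) -> bool:
    """Reference monitor: mediate every access by comparing sensitivity labels."""
    s, o = LEVELS[subjects[subject]], LEVELS[objects[obj]]
    if action == "read":    # no read up: subject label must dominate object label
        return s >= o
    if action == "write":   # no write down: object label must dominate subject label
        return o >= s
    return False

if __name__ == "__main__":
    print(mediate("analyst", "intel_summary", "read"))   # True
    print(mediate("analyst", "ops_plan", "read"))        # False (read up denied)
    print(mediate("analyst", "phone_list", "write"))     # False (write down denied)
```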

Networks pose significant challenges to security.

  • Heterogeneous systems—The variety of types and configurations of systems (e.g., hardware platforms, operating systems), security labeling, access controls, and protocols make security analysis and certification formidable.
  • Path security—Lack of control over communication paths through the network may expose data packets to hostile processes.
  • Complexity—The complexity of the network alone provides many opportunities for design, configuration, and implementation vulnerabilities (e.g., covert channels) while making comprehensive analysis formidable.

Trusted networks require the properties already identified, plus three additional property areas identified in the TNI.

  1. Communications integrity—Network users must be authenticated by secure means that prevent spoofing (imitating a valid user or replaying a previously sent valid message). The integrity of message contents must be protected (confidentiality), and a means must be provided to prove that a message has been sent and received (nonrepudiation).
  2. Protection from service denial—Networks must sustain attacks to deny service by providing a means of network management and monitoring to assure continuity of service.
  3. Compromise protection services—Networks must also have physical and information structure protections to maintain confidentiality of the traffic flow (externals) and message contents (internals). This requirement also imposes selective routing capabilities, which permit control of the physical and topological paths that network traffic traverses.

The concept of “layers” of trust or security is applied to networks, in which the security of each layer is defined and measures are taken to control access between layers and to protect information transferred across the layers.

9.3 Authentication and Access Control

The fundamental security mechanism of single or networked systems is the control of access, limiting use to authentic users. The process of authentication requires the user to verify identity to establish access, and access controls restrict the processes that may be performed by the authenticated user or by users attempting to gain authentication.

9.3.1 Secure Authentication and Access Control Functions

Authentication of a user in a secure manner requires a mechanism that verifies the identity of the requesting user to a stated degree of assurance.

Remote authentication and the granting of network access are similar to the functions performed by military identification friend or foe (IFF) systems, which also require very high authentication rates. In network systems, as in IFF, cryptographic means combined with other properties provide high confidence and practical authentication. A variety of methods, combining several mechanisms into an integrated system, is usually required to achieve the needed level of security for network applications.
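One common way to combine cryptographic means with other factors, as described above, is a nonce-based challenge-response exchange. The sketch below uses Python's standard hmac and secrets modules; the user name, shared key, and protocol framing are assumptions and do not describe any fielded IFF or network authentication system.

```python
# Minimal nonce-based challenge-response authentication sketch (HMAC-SHA256).
# Shared keys, names, and framing are hypothetical.

import hashlib
import hmac
import secrets

USER_KEYS = {"operator7": b"assumed-enrolled-secret"}   # established at enrollment (assumed)

def issue_challenge() -> bytes:
    """Verifier: send a fresh random nonce so responses cannot be replayed."""
    return secrets.token_bytes(16)

def respond(user: str, challenge: bytes) -> bytes:
    """Claimant: prove possession of the key without revealing it."""
    return hmac.new(USER_KEYS[user], challenge, hashlib.sha256).digest()

def authenticate(user: str, challenge: bytes, response: bytes) -> bool:
    """Verifier: recompute the expected response and compare in constant time."""
    key = USER_KEYS.get(user)
    if key is None:
        return False
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

if __name__ == "__main__":
    nonce = issue_challenge()
    print(authenticate("operator7", nonce, respond("operator7", nonce)))  # True
    print(authenticate("operator7", nonce, b"\x00" * 32))                 # False
```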

 

 

9.5 Incident Detection and Response

Extensive studies of network intrusion detection have documented the technical challenge of achieving comprehensive detection on complex networks. There are several technical approaches to implementing detection mechanisms including the following:

  1. Known pattern templates—Activities that follow specific sequences (e.g., attempts to exploit a known vulnerability, repeated password attacks, virus code signatures) of identified threats may be used to detect incidents (a minimal sketch of this signature approach follows this list). For example, Courtney, a detection program developed by the Lawrence Livermore National Laboratory, specifically detects the distinctive scan pattern of the SATAN vulnerability scanning tool.
  2. Threatening behavior templates—Activities that follow general patterns that may jeopardize security are modeled and applied as detection criteria. Statistical, neural network, and heuristic detection mechanisms can detect such general patterns, but the challenge is to maintain an acceptably low false alarm rate with such general templates.
  3. Traffic analysis—Network packets are inspected within a network to analyze source and destination as an initial filter for suspicious access activities. If packets are addressed to cross security boundaries, the internal packet contents are inspected for further evidence of intrusive or unauthorized activity (e.g., outgoing packets may be inspected for keywords and data contents; inbound packets for executable content).
  4. State-based detection—Changes in system states (i.e., safe or “trusted” to unsafe transitions, as described in Section 9.2) provide a means of detecting vulnerable actions.
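The signature approach referenced in item 1 can be reduced to matching an event stream against known-pattern templates. The sketch below is a minimal illustration; the signatures, log format, and alert threshold are hypothetical, and real detectors such as the Courtney example combine far more indicators.

```python
# Minimal known-pattern (signature) intrusion detection sketch.
# Signatures, log format, and thresholds are hypothetical.

import re
from collections import Counter

SIGNATURES = {
    "repeated_password_failure": re.compile(r"authentication failure for user \S+"),
    "vulnerability_scan":        re.compile(r"connection attempt to port (?:23|111|513|514)\b"),
}
FAILURE_THRESHOLD = 3   # assumed: three matching events raise an incident

def detect(log_lines):
    """Return the signature names whose match count meets the threshold."""
    counts = Counter()
    for line in log_lines:
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                counts[name] += 1
    return [name for name, n in counts.items() if n >= FAILURE_THRESHOLD]

if __name__ == "__main__":
    sample = [
        "Jan 12 04:01:11 host sshd: authentication failure for user root",
        "Jan 12 04:01:14 host sshd: authentication failure for user root",
        "Jan 12 04:01:18 host sshd: authentication failure for user admin",
        "Jan 12 04:02:02 host portmon: connection attempt to port 111",
    ]
    print(detect(sample))   # ['repeated_password_failure']
```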

 

Responses to incident detections can range from self-protective measures (terminating the offending session and modifying security policy) to offensive reactions, if the source of attack can be identified.

To identify attackers, entrapment measures may be used, including the deliberate insertion of an apparent security hole into a system. The intruder is lured (through the entrapment hole) into a virtual system (often called the “honey pot”) that appears to be real and allows the intruder to carry out an apparent attack while the target system “observes” the attack. During this period, the intruder’s actions are audited and telecommunication tracing activities can be initiated to identify the source of the attack. Some firewall products include such entrapment mechanisms, presenting common or subtle security holes to attackers’ scanners to focus the intruder’s attention on the virtual system.

In addition to technical detection and response for protection, conventional investigative responses to identify and locate network or electronic attack intruders are required for deterrence (e.g., to respond with criminal prosecution or military reprisal). Insight into the general methodology for investigating ongoing unstructured attacks on networks is provided by a representative response performed in 1994 by the Air Force Computer Emergency Response Team (AFCERT) from the U.S. Information Warfare Center [47]; a minimal sketch of the audit-correlation step appears after the list below.

  1. Auditing—Analyze audit records of attack activities and determine extent of compromise. The audit records of computer actions and telecommunication transmissions must be time-synchronized to follow the time sequence of data transactions from the target, through intermediate network systems, to the attacker. (Audit tracking is greatly aided by synchronization of all telecommunication and network logging to common national or international time standards.)
  2. Content monitoring—Covertly monitor the content of ongoing intrusion actions to capture detailed keystrokes or packets sent by the attacker in these attacks.
  3. Context monitoring—Remotely monitor Internet traffic along the connection path to determine probable telecommunication paths from source to target. This monitoring may require court-ordered “trap and trace” techniques applied to conventional telecommunication lines.
  4. End-game search—Using evidence about likely physical or cyber location and characteristics of the attacker, other sources (HUMINT informants, OSINT, other standard investigative methods) are applied to search the reduced set of candidates to locate the attacker.
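The audit-correlation step referenced above depends on time-synchronized records so that one session can be followed across systems. The sketch below merges per-system audit logs by timestamp and extracts the entries tied to a single session identifier; the log fields, hosts, and session values are hypothetical.

```python
# Sketch of time-synchronized audit correlation across systems.
# Log fields, hosts, and session identifiers are hypothetical.

from datetime import datetime

def parse(record):
    """Each record: (ISO-8601 UTC timestamp, host, session id, event text)."""
    ts, host, session, event = record
    return (datetime.fromisoformat(ts), host, session, event)

def correlate(logs, session_id):
    """Merge all logs, sort by synchronized time, and keep one session's trail."""
    merged = sorted((parse(r) for log in logs for r in log), key=lambda r: r[0])
    return [r for r in merged if r[2] == session_id]

if __name__ == "__main__":
    target_log = [("1994-03-28T02:11:05+00:00", "target", "S42", "login as guest")]
    relay_log = [("1994-03-28T02:10:58+00:00", "relay", "S42", "inbound telnet accepted"),
                 ("1994-03-28T02:12:40+00:00", "relay", "S42", "outbound file transfer")]
    for ts, host, _, event in correlate([target_log, relay_log], "S42"):
        print(ts.isoformat(), host, event)
```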

 

9.6 Survivable Information Structures

Beyond the capabilities to detect and respond to attacks is the overall desired property of information system survivability to provide the following characteristics:

• Fault tolerance—Ability to withstand attacks, gracefully degrade (rather than “crash”), and allocate resources to respond;

• Robust, adaptive response—Ability to detect the presence of a wide range of complex and subtle anomalous events (including events never before observed), to allocate critical tasks to surviving components, to isolate the failed nodes, and to develop appropriate responses in near real time;

• Distribution and variability—Distributed defenses with no single-point vulnerability, and with sufficient diversity in implementations to avoid common design vulnerabilities that allow single attack mechanisms to cascade to all components;

• Recovery and restoration—Ability to assess damage, plan recovery, and achieve full restoration of services and information.

Survivable systems are also defined by structure rather than by properties (as above), characterizing such a system as one composed of many individual survivable clusters that “self-extend,” transferring threat and service data to less capable nodes to improve the overall health of the system.

The U.S. Defense Advanced Research Projects Agency (DARPA) survivability program applies a “public health system” model involving (1) distributed immune system detection, (2) active probing to diagnose an attack and report to the general network population, (3) reassignment of critical tasks to trusted components, (4) quarantine processes to segregate untrusted components, and (5) immunization of the general network population [52]. The DARPA program is developing the technology to provide automated survivability tools for large-scale systems.
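The quarantine and task-reassignment steps of this "public health" model can be sketched as a small state update over a cluster description. The node names, tasks, and infection report below are hypothetical; this is not a description of the DARPA program's tools.

```python
# Minimal sketch of quarantine and critical-task reassignment in a survivable cluster.
# Node names, tasks, and the infection report are hypothetical.

cluster = {
    "node_a": {"trusted": True, "tasks": ["sensor_fusion"]},
    "node_b": {"trusted": True, "tasks": ["track_database"]},
    "node_c": {"trusted": True, "tasks": ["dissemination"]},
}

def quarantine_and_reassign(cluster, infected):
    """Segregate untrusted nodes and move their critical tasks to trusted survivors."""
    for node in infected:
        cluster[node]["trusted"] = False           # quarantine: mark untrusted
        orphaned = cluster[node]["tasks"]
        cluster[node]["tasks"] = []
        survivors = [n for n, v in cluster.items() if v["trusted"]]
        if not survivors:                          # no trusted nodes left to accept tasks
            break
        for i, task in enumerate(orphaned):        # reassign round-robin to survivors
            cluster[survivors[i % len(survivors)]]["tasks"].append(task)
    return cluster

if __name__ == "__main__":
    for name, info in quarantine_and_reassign(cluster, infected=["node_b"]).items():
        print(name, info)
```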

9.7 Defense Tools and Services

System and network administrators require a variety of tools to perform security assessments (evaluation of the security of a system against a policy or standard) and audits (tracing the sequence of actions related to a specific security-relevant event).

9.9 Security Analysis and Simulation for Defensive Operations

Security analysis and simulation processes must be applied to determine the degree of risk to the system, to identify design, configuration, or other faults and vulnerabilities, and to verify compliance with the requirements of the security policy and model. Depending on the system and its application, the analysis can range from an informal evaluation to a comprehensive and exhaustive analysis.

The result of the threat and vulnerability assessment is a threat matrix that categorizes threats (by attack category) and vulnerabilities (by functions). The matrix provides a relative ranking of the likelihood of threats and the potential adverse impact of attacks to each area of vulnerability. These data form the basis for the risk assessment.

The risk management process begins by assessing the risks to the system identified in the threat matrix. Risks are quantified in terms of likelihood of occurrence and degree of adverse impact if they occur. On the basis of this ranking of risks, a risk management approach that meets the security requirements of the system is developed. This process may require modeling to determine the effects of various threats, measured in terms of IW MOPs or MOEs, and the statistical probability of successful access to influence the system.

Security performance is quantified in terms of risk, including four components: (1) percent of attacks detected; (2) percent detected and contained; (3) percent detected, contained, and recovered; and (4) percent of residual risk.

This phase introduces three risk management alternatives.

  • Accept risk—If the threat is unlikely and the adverse impact is marginal, the risk may be accepted and no further security requirements imposed.
  • Mitigate (or manage) risk—If the risk is moderate, measures may be taken to minimize the likelihood of occurrence or the adverse impact, or both. These measures may include a combination of OPSEC, TCSEC, INFOSEC, or internal design requirements, but the combined effect must be analyzed to achieve the desired reduction in risk to meet the top-level system requirements.
  • Avoid risk—For the most severe risks, characterized by high attack likelihood or severe adverse impact, or both, a risk avoidance approach may be chosen. Here, the highest level of mitigation processes are applied (high level of security measures) to achieve a sufficiently low probability that the risk will occur in operation of the system.

When the threats and vulnerabilities are understood, the risks are quantified and measures are applied to control the balance of risk to utility to meet top-level security requirements, and overall system risk is managed.
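The risk ranking and the choice among the three management alternatives can be sketched as a likelihood-times-impact calculation with decision thresholds. The threat entries, the 0-to-1 scales, and the cutoff values below are assumptions; an actual assessment is driven by the threat matrix and the system's security requirements.

```python
# Sketch of risk quantification (likelihood x impact) and selection of a management alternative.
# Threats, 0..1 scales, and thresholds are hypothetical.

ACCEPT_BELOW = 0.10   # accept risk below this score
AVOID_ABOVE = 0.45    # apply maximum mitigation (risk avoidance) above this score

threats = [
    {"name": "insider data modification",       "likelihood": 0.2, "impact": 0.9},
    {"name": "structured external penetration", "likelihood": 0.7, "impact": 0.8},
    {"name": "external password attack",        "likelihood": 0.6, "impact": 0.4},
    {"name": "casual web defacement",           "likelihood": 0.5, "impact": 0.1},
]

def assess(threats):
    """Rank threats by risk score and assign an accept / mitigate / avoid decision."""
    ranked = []
    for t in threats:
        score = t["likelihood"] * t["impact"]
        if score < ACCEPT_BELOW:
            decision = "accept"
        elif score > AVOID_ABOVE:
            decision = "avoid"
        else:
            decision = "mitigate"
        ranked.append((round(score, 2), t["name"], decision))
    return sorted(ranked, reverse=True)

if __name__ == "__main__":
    for score, name, decision in assess(threats):
        print(f"{score:<5} {name:<32} -> {decision}")
```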

10
The Technologies of Information Warfare

The current state of the art in information operations is based on core technologies whose performance is rapidly changing, even as information technologies (sensing, processing, storage, and communication) rapidly advance. As new technologies enable more advanced offenses and defenses, emerging technologies farther on the horizon will introduce radically new implications for information warfare.

10.1 A Technology Assessment

Information warfare–related technologies are categorized both by their information operations role and by three distinct levels of technology maturity.

  • Core technologies are the current state-of-the-art, essential technologies necessary to sustain the present level of information operations.
  • Enabling technologies form the technology base for the next generation of information warfare capabilities; more than incremental improvements, they will enable the next quantum enhancement in operations.
  • Emerging technologies on the far horizon have conceptual applications when feasibility is demonstrated; they offer a significant departure from current core technologies and hold the promise of radical improvements in capability, and changes in the approach to information operations.

Developers, strategists, and decision makers who create and conduct information operations must remain abreast of a wide range of technologies to conceive the possibilities, predict performance impacts, and strategically manage development to retain leadership in this technology-paced form of warfare.

U.S. panels commissioned by the federal government and independent organizations have considered the global environment, as well as information technology impacts, in studies of the intelligence organizational aspects of information-based warfare.

  • Preparing for the 21st Century: An Appraisal of U.S. Intelligence—An appraisal commissioned by the U.S. White House and Congress.
  • IC21—The Intelligence Community in the 21st Century—A “bottom-up” review of intelligence and future organization options by the U.S. Congress.
  • Making Intelligence Smarter: The Future of U.S. Intelligence—Report of an independent task force sponsored by the Council on Foreign Relations, February 1996.

 

10.2 Information Dominance Technologies

Three general areas characterize the information dominance technologies: collection of data, processing of the data to produce knowledge, and dissemination of the knowledge to humans.

• Collection—The first area includes the technical methods of sensing physical phenomena and the platforms that carry the sensors to accomplish their missions. Both direct and remote sensing categories of sensors are included, along with the means of relaying the sensed data to users.

• Processing—The degree and complexity of automation in information systems will continue to benefit from increases in processing power (measured in operations per second), information storage capacity (in bits), and dissemination volumes (bandwidth). Processing “extensibility” technologies will allow heterogeneous nets and homogeneous clusters of hardware along with operating systems to be scaled upwards to ever-increasing levels of power. These paramount technology drivers are, of course, essential. Subtler, however, are the intelligent system technologies that contribute to system autonomy, machine understanding, and comprehension of the information we handle. Software technologies that automate reasoning at ever more complex levels will enable humans to be elevated from data-control roles to information-supervision roles and, ultimately, to knowledge-management roles over complex systems.

• Dissemination—Communication technologies that increase bandwidth and improve the effective use of bandwidth (e.g., data, information and knowledge compression) will enhance the ability to disseminate knowledge. (Enhancements are required in terms of capacity and latency.) Presentation technologies that enhance human understanding of information (“visualization” for the human visual sense, virtual reality for all senses) by delivering knowledge to human minds will enhance the effectiveness of the humans in the dominance loop.

10.2.1 Collection Technologies

Collection technologies include advanced platforms and sensing means to acquire a greater breadth and depth of data. The collection technologies address all three domains of the information warfare model: physical, information, and perceptual variables.

10.2.2 Processing Technologies

Processing technologies address the increased volume of data collected, the increased complexity of information being processed, and the fundamental need for automated reasoning to transform data to reliable knowledge.

Integrated and Intelligent Inductive (Learning) and Deductive Decision Aids

Reasoning aids for humans applying increasingly complex reasoning (integrating symbolic and neural or genetic algorithms) will enhance the effectiveness of humans. These tools will allow individuals to reason and to make decisions on the basis of projected complex outcomes across many disciplines (e.g., social, political, military, and environmental impacts). Advances in semiotic science will contribute to practical representations of knowledge and reasoning processes for learning, deductive reasoning, and self-organization.

Computing Networks (Distributed Operating Systems) With Mediated Heterogeneous Databases

Open system computing, enabled by common object brokering protocols, will perform network computing with autonomous adaptation to allocate resources to meet user demands. Mediation agents will allow distributed heterogeneous databases across networks to provide virtual object-level database functions across multiple types of media.

Precision Geospatial Information Systems

Broad area (areas over 100,000 km²) geospatial information systems with continuous update capability will link precision (~1 m) maps, terrain, features, and other spatially linked technical data for analysis and prediction.

Autonomous Information Search Agents

Goal-seeking agent software, with mobile capabilities to move across networks, will perform information search functions for human users. These agents will predict users’ probable needs (e.g., a military commander’s information needs) and will prepare knowledge sets in expectation of user queries.

Multimedia Databases (Text, Audio, Imagery, Video) Index and Retrieval

Information indexing, discovery, and retrieval (IIDR) functions will expand from text-based to true multimedia capabilities as object linking and portable ontology techniques integrate heterogeneous databases and data descriptions. IIDR functions will permit searches and analysis by high-level conceptual queries.

Digital Organisms

Advanced information agents, capable of adaptation, travel, and reproduction, will perform a wide range of intelligent support functions for human users, including search, retrieval, analysis, knowledge creation, and conjecture.

Hypermedia Object Information Bases

Object-oriented databases with hyperlinks across all-media sources will permit rapid manipulation of large collections of media across networks.

10.2.3 Dissemination and Presentation Technologies

Dissemination technologies increase the speed with which created knowledge can be delivered, while expanding the breadth of delivery to all appropriate users. Presentation technologies address the critical problems of communicating high-dimensionality knowledge to human users efficiently and effectively, even while the human is under duress.

10.3 Offensive Technologies

Current offensive technologies (Table 10.4) are essentially manual weapons requiring human planning, targeting, control, and delivery. Enabling technologies will improve the understanding of weapon effects on large-scale networks, enabling the introduction of semiautomated controls to conduct structured attacks on networks. Integrated tools (as discussed in Chapter 7) will simulate, plan, and conduct these semiautomated attacks. Emerging technologies will expand the scope and complexity of attacks to provide large-scale network control with synchronized perception management of large populations.

Computational Sociology (Cyber PSYOPS)

Complex models of the behavior of populations and the influencing factors (e.g., perceptions of economy, environment, security) will permit effective simulation of societal behavior as a function of group perception. This capability will permit precise analysis of the influence of perception management plans and the generation of complex multiple-message PSYOPS campaigns. These tools could support the concepts of “neocortical warfare,” in which national objectives are achieved without force [29,30].

10.4 Defensive Technologies

Core defensive technologies (Table 10.5), now being deployed in both the military and commercial domains, provide layers of security that bridge the gap between two approaches:

  • First generation (and expensive) military “trusted” computers based on formal analysis/testing, and dedicated secure nets with strong cryptography;
  • Commercial information technologies (computers, UNIX or Windows NT operating systems, and networks) with augmenting components (e.g., firewalls, software wrappers, smart card authentication) to manage risk and achieve a specified degree of security for operation over the nonsecure GII.

Enabling technologies will provide affordable security to complex heterogeneous networks with open system augmentations that provide layers of protection for secure “enclaves” and the networks over which they communicate.