Notes from Knowledge Management in the Intelligence Enterprise
Knowledge Management in the Intelligence Enterprise
This book is about the application of knowledge management (KM) principles to the practice of intelligence to fulfill intelligence consumers’ expectations.
Unfortunately, too many have reduced intelligence to a simple metaphor of “connecting the dots.” This process appears all too simple after the fact—once you have seen the picture and can ignore irrelevant, contradictory, and missing dots. Real-world intelligence is not a puzzle of connecting dots; it is the hard daily work of planning operations, focusing the collection of data, and then processing the collected data for deep analysis to produce a flow of knowledge for dissemination to a wide range of consumers.
this book… is an outgrowth of a 2-day military KM seminar that I teach in the United States to describe the methods to integrate people, processes, and technologies into knowledge-creating enterprises.
The book progresses from an introduction to KM applied to intelligence (Chapters 1 and 2) to the principles and processes of KM (Chapter 3). The characteristics of collaborative knowledge-based intelligence organizations are described (Chapter 4) before detailing their principal craft of analysis and synthesis (Chapter 5 introduces the principles and Chapter 6 illustrates the practice). The wide range of technology tools to support analytic thinking and allow analysts to interact with information is explained (Chapter 7) before describing the automated tools that perform all-source fusion and mining (Chapter 8). The organizational, systems, and technology concepts throughout the book are brought together in a representative intelligence enterprise (Chapter 9) to illustrate the process of architecture design for a small intelligence cell. An overview of core, enabling, and emerging KM technologies in this area is provided in conclusion (Chapter 10).
Knowledge Management and Intelligence
This is a book about the management of knowledge to produce and deliver a special kind of knowledge: intelligence—that knowledge that is deemed most critical for decision making both in the nation-state and in business.
- Knowledge management refers to the organizational disciplines, processes, and information technologies used to acquire, create, reveal, and deliver knowledge that allows an enterprise to accomplish its mission (achieve its strategic or business objectives). The components of knowledge management are the people, their operations (practices and processes), and the information technology (IT) that moves and transforms data, information, and knowledge. All three of these components make up the entity we call the enterprise.
- Intelligence refers to a special kind of knowledge necessary to accomplish a mission—the kind of strategic knowledge that reveals critical threats and opportunities that may jeopardize or assure mission accomplishment. Intelligence often reveals hidden secrets or conveys a deep understanding that is covered by complexity, deliberate denial, or outright deception. The intelligence process has been described as the process of the discovery of secrets by secret means. In business and in national security, secrecy is a process of protection for one party; discovery of the secret is the object of competition or security for the competitor or adversary… While a range of definitions of intelligence exist, perhaps the most succinct is that offered by the U.S. Central Intelligence Agency (CIA): “Reduced to its simplest terms, intelligence is knowledge and foreknowledge of the world around us—the prelude to decision and action by U.S. policymakers.”
- The intelligence enterprise encompasses the integrated entity of people, processes, and technologies that collects and analyzes intelligence data to synthesize intelligence products for decision-making consumers.
intelligence (whether national or business) has always involved the management (acquisition, analysis, synthesis, and delivery) of knowledge.
At least three driving factors continue to increase the need for automation. These factors include:
- Breadth of data to be considered.
- Depth of knowledge to be understood.
- Speed required for decision making.
Throughout this book, we distinguish three levels of abstraction of knowledge, each of which may be referred to as intelligence in forms that range from unprocessed reporting to finished intelligence products:
- Data. Individual observations, measurements, and primitive messages form the lowest level. Human communication, text messages, electronic queries, or scientific instruments that sense phenomena are the major sources of data. The terms raw intelligence and evidence (data that is determined to be relevant) are frequently used to refer to elements of data.
- Information. Organized sets of data are referred to as information. The organization process may include sorting, classifying, or indexing and linking data to place data elements in relational context for subsequent searching and analysis.
- Knowledge. Information once analyzed, understood, and explained is knowledge or foreknowledge (predictions or forecasts). In the context of this book, this level of understanding is referred to as the intelligence product. Understanding of information provides a degree of comprehension of both the static and dynamic relationships of the objects of data and the ability to model structure and past (and future) behavior of those objects. Knowledge includes both static content and dynamic processes.
These abstractions are often organized in a cognitive hierarchy, which includes a level above knowledge: human wisdom.
In this text, we consider wisdom to be a uniquely human cognitive capability—the ability to correctly apply knowledge to achieve an objective. This book describes the use of IT to support the creation of knowledge but considers wisdom to be a human capacity out of the realm of automation and computation.
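To make the three levels concrete, here is a minimal Python sketch of the data, information, and knowledge abstractions; the class names, fields, and the toy example are illustrative assumptions, not the book's notation. Wisdom, the correct application of such a product to a decision, is deliberately left out of the sketch, consistent with the text's view that it lies outside automation.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Datum:
    """Lowest level: an individual observation, measurement, or message."""
    source: str   # e.g., a sensor, a human report, an intercepted message
    content: str  # the raw, unprocessed value or text

@dataclass
class Information:
    """Data organized, indexed, and placed in relational context."""
    topic: str
    items: List[Datum] = field(default_factory=list)

@dataclass
class Knowledge:
    """Information analyzed, understood, and explained (the intelligence product)."""
    subject: str
    explanation: str                # static content: what is known and why
    forecast: Optional[str] = None  # dynamic content: foreknowledge, if any

# A toy flow from raw report to finished product.
raw = Datum(source="field report", content="convoy observed at river crossing")
info = Information(topic="convoy activity", items=[raw])
product = Knowledge(
    subject="convoy activity",
    explanation="resupply route appears to have shifted to the river crossing",
    forecast="crossing likely to be used again within 48 hours",
)
```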
1.1 Knowledge in a Changing World
This strategic knowledge we call intelligence has long been recognized as a precious and critical commodity for national leaders.
the Hebrew leader Moses commissioned and documented an intelligence operation to explore the foreign land of Canaan. That classic account clearly describes the phases of the intelligence cycle, which proceeds from definition of the requirement for knowledge through planning, tasking, collection, and analysis to the dissemination of that knowledge. He first detailed the intelligence requirements by describing the eight essential elements of information to be collected, and he described the plan to covertly enter and reconnoiter the denied area.
The cycle comprises five phases: requirements articulation, planning, collection, analysis-synthesis, and dissemination.
The U.S. defense community has developed a network-centric approach to intelligence and warfare that utilizes the power of networked information to enhance the speed of command and the efficiency of operations. Sensors are linked to shooters, commanders efficiently coordinate agile forces, and engagements are based on prediction and preemption. The keys to achieving information superiority in this network-centric model are network breadth (or connectivity) and bandwidth; the key technology is information networking.
The ability to win will depend upon the ability to select and convert raw data into accurate decision-making knowledge. Intelligence superiority will be defined by the ability to make decisions most quickly and effectively—with the same information available to virtually all parties. The key enabling technology in the next century will be processing and cognitive power to rapidly and accurately convert data into comprehensive explanations of reality—sufficient to make rapid and complex decisions.
Consider several of the key premises about the significance of knowledge in this information age that are bringing the importance of intelligence to the forefront. First, knowledge has become the central resource for competitive advantage, displacing raw materials, natural resources, capital, and labor. This resource is central to both wealth creation and warfare waging. Second, the management of this abstract resource is quite complex; it is more difficult (than material resources) to value and audit, more difficult to create and exchange, and much more difficult to protect. Third, the processes for producing knowledge from raw data are as diverse as the manufacturing processes for physical materials, yet are implemented in the same virtual manufacturing plant—the computer. Because of these factors, the management of knowledge to produce strategic intelligence has become a necessary and critical function within nation-states and business enterprises—requiring changes in culture, processes, and infrastructure to compete.
with rapidly emerging information technologies, the complexities of globalization, and diverse national interests (and threats), businesses and militaries must both adopt radically new and innovative agendas to enable continuous change in their entire operating concept. Innovation and agility are the watchwords for organizations that will remain competitive in Hamel’s age of nonlinear revolution.
Business concept innovation will be the defining competitive advantage in the age of revolution. Business concept innovation is the capacity to reconceive existing business models in ways that create new value for customers, rude surprises for competitors, and new wealth for investors. Business concept innovation is the only way for newcomers to succeed in the face of enormous resource disadvantages, and the only way for incumbents to renew their lease on success.
A functional taxonomy based on the type of analysis and the temporal distinction of knowledge and foreknowledge (warning, prediction, and forecast) distinguishes two primary categories of analysis and five subcategories of intelligence products:
Descriptive analyses provide little or no evaluation or interpretation of collected data; rather, they enumerate collected data in a fashion that organizes and structures the data so the consumer can perform subsequent interpretation.
Inferential analyses require the analysis of collected relevant data sets (evidence) to infer and synthesize explanations that describe the meaning of the underlying data. We can distinguish four different focuses of inferential analysis (a compact sketch of this taxonomy follows the list):
- Analyses that explain past events (How did this happen? Who did it?);
- Analyses that explain the current structure (What is the organization? What is the order of battle?);
- Analyses that explain current behaviors and states (What is the competitor’s research and development process? What is the status of development?);
- Foreknowledge analyses that forecast future attributes and states (What is the expected population and gross national product growth over the next 5 years? When will force strength exceed that of a country’s neighbors? When will a competitor release a new product?).
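A compact sketch of this two-category taxonomy, assuming invented enum names and paraphrased descriptions rather than the book's exact labels:

```python
from enum import Enum

class AnalysisCategory(Enum):
    DESCRIPTIVE = "enumerates and organizes collected data; interpretation is left to the consumer"
    INFERENTIAL = "infers and synthesizes explanations of the meaning of the underlying data"

class InferentialFocus(Enum):
    PAST_EVENTS = "explain past events (how did this happen? who did it?)"
    CURRENT_STRUCTURE = "describe current structure (organization, order of battle)"
    CURRENT_BEHAVIOR = "explain current behaviors and states (processes, status)"
    FOREKNOWLEDGE = "forecast future attributes and states (warning, prediction, forecast)"

# One descriptive subcategory plus the four inferential focuses give the
# five subcategories of intelligence products noted in the text.
for focus in InferentialFocus:
    print(f"{AnalysisCategory.INFERENTIAL.name}: {focus.name.lower()} - {focus.value}")
```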
1.3 The Intelligence Disciplines and Applications
While the taxonomy of intelligence products by analytic methods is fundamental, the more common distinctions of intelligence are by discipline or consumer.
The KM processes and information technologies used in all cases are identical (some say, “bits are bits,” implying that all digital data at the bit level is identical), but the content and mission objectives of these four intelligence disciplines are unique and distinct.
Nation-state security interests deal with sovereignty; ideological, political, and economic stability; and threats to those areas of national interest. Intelligence serves national leadership and military needs by providing strategic policymaking knowledge, warnings of foreign threats to national security interests (economic, military, or political), and tactical knowledge to support day-to-day operations and crisis responses. Nation-state intelligence also serves a public function by collecting and consolidating open sources of foreign information for analysis and publication by the government on topics of foreign relations, trade, treaties, economies, humanitarian efforts, environmental concerns, and other foreign and global interests to the public and businesses at large.
Similar to the threat-warning intelligence function to the nation-state, business intelligence is chartered with the critical task of foreseeing and alerting management of marketplace discontinuities. The consumers of business intelligence range from corporate leadership to employees who access supply-chain data, and even to customers who access information to support purchase decisions.
A European Parliament study has enumerated concern over the potential for national intelligence sources to be used for nation-state economic advantages by providing competitive intelligence directly to national business interests. The United States has acknowledged a policy of applying national intelligence to protect U.S. business interests from fraud and illegal activities, but not for the purposes of providing competitive advantage.
1.3.1 National and Military Intelligence
National intelligence refers to the strategic knowledge obtained for the leadership of nation-states to maintain national security. National intelligence is focused on national security—providing strategic warning of imminent threats, knowledge on the broad spectrum of threats to national interests, and foreknowledge regarding future threats that may emerge as technologies, economies, and the global environment change.
The term intelligence refers to both a process and its product.
The U.S. Department of Defense (DoD) provides the following product definitions that are rich in description of the processes involved in producing the product:
- The product resulting from the collection, processing, integration, analysis, evaluation, and interpretation of available information concerning foreign countries or areas;
- Information and knowledge about an adversary obtained through observation, investigation, analysis, or understanding.
Michael Herman accurately emphasizes the essential components of the intelligence process: “The Western intelligence system is two things. It is partly the collection of information by special means; and partly the subsequent study of particular subjects, using all available information from all sources. The two activities form a sequential process.”
Martin Libicki has provided a practical definition of information dominance, and the role of intelligence coupled with command and control and information warfare:
Information dominance may be defined as superiority in the generation, manipulation, and use of information sufficient to afford its possessors military dominance. It has three sources:
- Command and control that permits everyone to know where they (and their cohorts) are in the battlespace, and enables them to execute operations when and as quickly as necessary.
- Intelligence that ranges from knowing the enemy’s dispositions to knowing the location of enemy assets in real-time with sufficient precision for a one-shot kill.
- Information warfare that confounds enemy information systems at various points (sensors, communications, processing, and command), while protecting one’s own.
The superiority is achieved by gaining superior intelligence and protecting information assets while fiercely degrading the enemy’s information assets. The goal of such superiority is not the attrition of physical military assets or troops—it is the attrition of the quality, speed, and utility of the adversary’s decision-making ability.
“A knowledge environment is an organization’s (business) environment that enhances its capability to deliver on its mission (competitive advantage) by enabling it to build and leverage its intellectual capital.”
1.3.2 Business and Competitive Intelligence
The focus of business intelligence is on understanding all aspects of a business enterprise: internal operations and the external environment, which includes customers and competitors (the marketplace), partners, and suppliers. The external environment also includes independent variables that can impact the business, depending on the business (e.g., technology, the weather, government policy actions, financial markets). All of these are the objects of business intelligence in the broadest definition. But the term business intelligence is also used in a narrower sense to focus on only the internals of the business, while the term competitor intelligence refers to those aspects of intelligence that focus on the externals that influence competitiveness: competitors.
Each of the components of business intelligence has distinct areas of focus and uses in maintaining the efficiency, agility, and security of the business; all are required to provide active strategic direction to the business. In large companies with active business intelligence operations, all three components are essential parts of the strategic planning process, and all contribute to strategic decision making.
1.4 The Intelligence Enterprise
The intelligence enterprise includes the collection of people, knowledge (both internal tacit and explicitly codified), infrastructure, and information processes that deliver critical knowledge (intelligence) to the consumers. This enables them to make accurate, timely, and wise decisions to accomplish the mission of the enterprise.
This definition describes the enterprise as a process—devoted to achieving an objective for its stakeholders and users. The enterprise process includes the production, buying, selling, exchange, and promotion of an item, substance, service, or system.
the DoD three-view architecture description, which defines three interrelated perspectives or architectural descriptions that define the operational, system, and technical aspects of an enterprise [29]. The operational architecture is a people- or organization-oriented description of the operational elements, intelligence business processes, assigned tasks, and information and work flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and the tasks that are supported by these information exchanges. The systems architecture is a description of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users, and specifies system and component performance parameters. The technical architecture is the minimal set of rules (i.e., standards, protocols, interfaces, and services) governing the arrangement, interaction, and interdependence of the elements of the system.
These three views of the enterprise (Figure 1.4) describe three layers of people-oriented operations, system structure, and procedures (protocols) that must be defined in order to implement an intelligence enterprise.
The operational layer is the highest (most abstract) description of the concept of operations (CONOPS), human collaboration, and disciplines of the knowledge organization. The technical architecture layer describes the most detailed perspective, noting specific technical components and their operations, protocols, and technologies.
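As a rough illustration only, the three architectural views might be captured as simple record types; the field names below are assumptions chosen for the sketch, not the DoD framework's official vocabulary.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class OperationalView:
    """People/organization view: processes, tasks, and information flows."""
    business_processes: List[str]
    information_exchanges: List[str]  # type and frequency of exchange
    tasks_supported: List[str]

@dataclass
class SystemsView:
    """Physical view: nodes, circuits, networks, users, and performance."""
    nodes: List[str]
    networks: List[str]
    performance_parameters: Dict[str, float]

@dataclass
class TechnicalView:
    """Minimal rule set: standards, protocols, interfaces, and services."""
    standards: List[str]
    protocols: List[str]

@dataclass
class EnterpriseArchitecture:
    """The three interrelated views that together describe one enterprise."""
    operational: OperationalView
    systems: SystemsView
    technical: TechnicalView
```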
The intelligence supply chain, which describes the flow of data into knowledge, is measured by the value it provides to intelligence consumers. Measures of human intellectual capital and organizational knowledge describe the intrinsic value of the organization.
1.5 The State of the Art and the State of the Intelligence Tradecraft
The subject of intelligence analysis remained largely classified through the 1980s, but the 1990s brought the end of the Cold War and, thus, open publication of the fundamental operations of intelligence and the analytic methods employed by businesses and nation-states. In that same period, the rise of commercial information sources and systems produced the new disciplines of open source intelligence (OSINT) and business/competitor intelligence. In each of these areas, a wealth of resources is available for tracking the rapidly changing technology state of the art as well as the state of the intelligence tradecraft.
1.5.1 National and Military Intelligence
Numerous sources of information provide management, legal, and technical insight for national and military intelligence professionals with interests in analysis and KM.
These sources include:
- Studies in Intelligence—Published by the U.S. CIA Center for the Study of Intelligence and the Sherman Kent School of Intelligence, unclassified versions are published on the school’s Web site (http://odci.gov/csi), along with periodically issued monographs on technical topics related to intelligence analysis and tradecraft.
- International Journal of Intelligence and Counterintelligence—This quarterly journal covers the breadth of intelligence interests within law enforcement, business, nation-state policymaking, and foreign affairs.
- Intelligence and National Security—A quarterly international journal published by Frank Cass & Co. Ltd., London, this journal covers broad intelligence topics ranging from policy, operations, users, analysis, and products to historical accounts and analyses.
- Defense Intelligence Journal—This is a quarterly journal published by the U.S. Defense Intelligence Agency’s Joint Military Intelligence College.
- American Intelligence Journal—Published by the National Military Intelligence Association (NMIA), this journal covers operational, organizational, and technical topics of interest to national and military intelligence officers.
- Military Intelligence Professional Bulletin—This is a quarterly bulletin of the U.S. Army Intelligence Center (Ft. Huachuca) that is available on-line and provides information to military intelligence officers on studies of past events, operations, processes, military systems, and emerging research and development.
- Jane’s Intelligence Review—This monthly magazine provides open source analyses of international military organizations, NGOs that threaten or wage war, conflicts, and security issues.
1.5.2 Business and Competitive Intelligence
Several sources focus on the specific areas of business and competitive intelligence with attention to the management, ethical, and technical aspects of collection, analysis, and valuation of products.
- Competitive Intelligence Magazine—This is a CI source for general applications-related articles on CI, published bimonthly by John Wiley & Sons with the Society of Competitive Intelligence Professionals (SCIP).
- Competitive Intelligence Review—This quarterly journal, also published by John Wiley with the SCIP, contains best-practice case studies as well as technical and research articles.
- Management International Review—This is a quarterly refereed journal that covers the advancement and dissemination of international applied research in the fields of management and business. It is published by Gabler Verlag, Germany, and is available on-line.
- Journal of Strategy and Business—This quarterly journal, published by Booz Allen and Hamilton, focuses on strategic business issues, including regular emphasis on both CI and KM topics in business articles.
1.5.3 KM
The developments in the field of KM are covered by a wide range of business, information science, organizational theory, and dedicated KM sources that provide information on this diverse and fast-growing area.
- CIO Magazine—This monthly trade magazine for chief information officers and staff includes articles on KM, best practices, and related leadership topics.
- Harvard Business Review, Sloan Management Review—These management journals cover organizational leadership, strategy, learning and change, and the application of supporting ITs.
- Journal of Knowledge Management—This is a quarterly academic journal of strategies, tools, techniques, and technologies published by Emerald (UK). In addition, Emerald also publishes quarterly The Learning Organization—An International Journal.
- IEEE Transactions on Knowledge and Data Engineering—This is an archival journal published bimonthly to inform researchers, developers, managers, strategic planners, users, and others interested in state-of-the-art and state-of-the-practice activities in the knowledge and data engineering area.
- Knowledge and Process Management—A John Wiley (UK) journal for executives responsible for leading performance improvement and contributing thought leadership in business. Emphasis areas include KM, organizational learning, core competencies, and process management.
- American Productivity and Quality Center (APQC)—The APQC is a nonprofit organization that provides the tools, information, expertise, and support needed to discover and implement best practices in KM. Its mission is to discover, research, and understand emerging and effective methods of both individual and organizational improvement, to broadly disseminate these findings, and to connect individuals with one another and with the knowledge, resources, and tools they need to successfully manage improvement and change. It maintains an on-line site at www.apqc.org.
- Data Mining and Knowledge Discovery—This Kluwer (Netherlands) journal provides technical articles on the theory, techniques, and practice of knowledge extraction from large databases.
1.6 The Organization of This Book
This book is structured to introduce the unique role, requirements, and stakeholders of intelligence (the applications) before introducing the KM processes, technologies, and implementations.
2
The Intelligence Enterprise
Intelligence, the strategic information and knowledge about an adversary and an operational environment obtained through observation, investigation, analysis, or understanding, is the product of an enterprise operation that integrates people and processes in an organizational and networked computing environment.
The intelligence enterprise exists to produce intelligence goods and services—knowledge and foreknowledge to decision- and policy-making customers. This enterprise is a production organization whose prominent infrastructure is an information supply chain. As in any business, it has a “front office” to manage its relations with customers, with the information supply chain in the “back office.”
The intellectual capital of this enterprise includes sources, methods, workforce competencies, and the intelligence goods and services produced. As in virtually no other business, the protection of this capital is paramount, and therefore security is integrated into every aspect of the enterprise.
2.1 The Stakeholders of Nation-State Intelligence
The intelligence enterprise, like any other enterprise providing goods and services, includes a diverse set of stakeholders in the enterprise operation. The business model for any intelligence enterprise, as for any business, must clearly identify the stakeholders who own the business and those who produce and consume its goods and services.
- The owners of the process include the U.S. public and its elected officials, who measure intelligence value in terms of the degree to which national security is maintained. These owners seek awareness and warning of threats to prescribed national interests.
- Intelligence consumers (customers or users) include national, military, and civilian user agencies that measure value in terms of intelligence’s contribution to each organization’s mission and its impact on mission effectiveness.
- Intelligence producers, the most direct users of raw intelligence, include the collectors (HUMINT and technical), processor agencies, and analysts. The principal value metrics of these users are performance based: information accuracy, coverage breadth and depth, confidence, and timeliness.
The purpose and value chains for intelligence (Figure 2.2) are defined by the stakeholders to provide a foundation for the development of specific value measures that assess the contribution of business components to the overall enterprise. The corresponding chains in the U.S. IC include:
- Source—the source or basis for defining the purpose of intelligence is found in the U.S. Constitution, derivative laws (i.e., the National Security Act of 1947, Central Intelligence Agency Act of 1949, National Security Agency Act of 1959, Foreign Intelligence Surveillance Act of 1978, and Intelligence Organization Act of 1992), and orders of the executive branch [2]. Derived from this are organizational mission documents, such as the Director of Central Intelligence (DCI) Strategic Intent [3], which documents communitywide purpose and vision, as well as derivative guidance documents prepared by intelligence providers.
- Purpose chain—the causal chain of purposes (objectives) for which the intelligence enterprise exists. The ultimate purpose is national security, enabled by information (intelligence) superiority that, in turn, is enabled by specific purposes of intelligence providers that will result in information superiority.
- Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
- Measures—specific metrics by which values are quantified and articulated by stakeholders and by which the value of the intelligence enterprise is evaluated.
In a similar fashion, business and competitive intelligence have stakeholders that include customers, shareholders, corporate officers, and employees… there must exist a purpose and value chain that guides the KM operations. These typically include:
- Source—the business charter and mission statement of a business elaborate the market served and the vision for the business’s role in that market.
- Purpose chain—the objectives of the business require knowledge about internal operations and the market (BI objectives) as well as competitors (CI).
- Value chain—the chain of values (goals) by which achievement of the enterprise purpose is measured.
- Measures—specific metrics by which values are quantified. A balanced set of measures includes vision and strategy, customer, internal, financial, and learning-growth metrics (a minimal sketch follows this list).
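The sketch below shows one hypothetical way to record a source, purpose chain, value chain, and balanced measures for a business intelligence function; the example values are invented for illustration.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class PurposeValueChain:
    source: str               # charter or mission statement
    purpose_chain: List[str]  # causal chain of objectives
    value_chain: List[str]    # goals by which achievement of purpose is measured
    measures: Dict[str, str]  # metric name -> how it is quantified

business_chain = PurposeValueChain(
    source="corporate mission statement",
    purpose_chain=[
        "serve the chosen market",
        "maintain knowledge of internal operations, the market, and competitors",
    ],
    value_chain=["market share growth", "decision speed", "warning lead time"],
    measures={
        "vision and strategy": "progress against the strategic plan",
        "customer": "satisfaction and retention",
        "internal": "process cycle time",
        "financial": "return on intellectual capital",
        "learning and growth": "analyst training and knowledge-base reuse",
    },
)
```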
2.2 Intelligence Processes and Products
The process that delivers strategic and operational intelligence products is generally depicted in cyclic form (Figure 2.3), with five distinct phases.
In every case, the need is the basis for a logical process to deliver the knowledge to the requestor.
- Planning and direction. The process begins as policy and decision makers define, at a high level of abstraction, the knowledge that is required to make policy, strategic, or operational decisions. The requests are parsed into information required, then to data that must be collected to estimate or infer the required answers. Data requirements are used to establish a plan of collection, which details the elements of data needed and the targets (people, places, and things) from which the data may be obtained.
- Collection. Following the plan, human and technical sources of data are tasked to collect the required raw data. The next section introduces the major collection sources, which include both openly available and closed sources that are accessed by both human and technical methods.
These sources and methods are among the most fragile [5]—and most highly protected—elements of the process. Sensitive and specially compartmented collection capabilities that are particularly fragile exist across all of the collection disciplines.
- Processing. The collected data is processed (e.g., machine translation, foreign language translation, or decryption), indexed, and organized in an information base. Progress on meeting the requirements of the collection plan is monitored and the tasking may be refined on the basis of received data.
- All-source analysis-synthesis and production. The organized information base is processed using estimation and inferential (reasoning) techniques that combine all-source data in an attempt to answer the requestor’s questions. The data is analyzed (broken into components and studied) and solutions are synthesized (constructed from the accumulating evidence). The topics or subjects (intelligence targets) of study are modeled, and requests for additional collection and processing may be made to acquire sufficient data and achieve a sufficient level of understanding (or confidence to make a judgment) to answer the consumer’s questions.
- Dissemination. Finished intelligence is disseminated to consumers in a variety of formats, ranging from dynamic operating pictures of warfighters’ weapon systems to formal reports to policymakers. Three categories of formal strategic and tactical intelligence reports are distinguished by their past, present, and future focus: current intelligence reports are news-like reports that describe recent events or indications and warnings, basic intelligence reports provide complete descriptions of a specific situation (e.g., order of battle or political situation), and intelligence estimates attempt to predict feasible future outcomes as a result of current situation, constraints, and possible influences [6].
Though introduced here in the classic form of a cycle, in reality the process operates as a continuum of actions with many more feedback (and feedforward) paths that require collaboration between consumers, collectors, and analysts.
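A minimal sketch of the five-phase cycle as a sequence of functions over a shared working state; the function bodies, data, and control flow are invented placeholders, and a real operation would include the feedback and collaboration paths just described.

```python
from typing import Callable, Dict, List

# Each phase is modeled as a function that transforms a shared working state.
Phase = Callable[[Dict], Dict]

def planning(state: Dict) -> Dict:
    # Parse the consumer's need into a collection plan (placeholder).
    state["collection_plan"] = f"plan for: {state['requirement']}"
    return state

def collection(state: Dict) -> Dict:
    # Stand-in for tasking human and technical sources.
    state["raw_data"] = ["report A", "intercept B"]
    return state

def processing(state: Dict) -> Dict:
    # Translate, decrypt, index, and organize into an information base.
    state["information"] = sorted(state["raw_data"])
    return state

def analysis_synthesis(state: Dict) -> Dict:
    # Combine all-source information into a product (placeholder inference).
    state["product"] = f"assessment based on {len(state['information'])} items"
    return state

def dissemination(state: Dict) -> Dict:
    print("Disseminating:", state["product"])
    return state

CYCLE: List[Phase] = [planning, collection, processing, analysis_synthesis, dissemination]

state: Dict = {"requirement": "status of competitor product development"}
for phase in CYCLE:  # in practice, feedback paths re-enter earlier phases
    state = phase(state)
```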
2.3 Intelligence Collection Sources and Methods
A taxonomy of intelligence data sources includes sources that are openly accessible or closed (e.g., denied areas, secured communications, or clandestine activities). Due to the increasing access to electronic media (i.e., telecommunications, video, and computer networks) and the global expansion of democratic societies, OSINT is becoming an increasingly important source of global data. While OSINT must be screened and cross-validated to filter errors, duplications, and deliberate misinformation (as must all sources), it provides an economical source of public information and is a contributor to other sources for cueing, indications, and confirmation.
Measurements and signatures intelligence (MASINT) is technically derived knowledge from a wide variety of sensors, individual or fused, either to perform special measurements of objects or events of interest or to obtain signatures for use by the other intelligence sources. MASINT is used to characterize the observable phenomena (observables) of the environment and objects of surveillance.
U.S. intelligence studies have pointed out specific changes in the use of these sources as the world increases globalization of commerce and access to social, political, economic, and technical information [10–12]:
- The increase in unstructured and transnational threats requires the robust use of clandestine HUMINT sources to complement extensive technical verification means.
- Technical means of collection are required for both broad area coverage and detailed assessment of the remaining denied areas of the world.
2.3.1 HUMINT Collection
HUMINT refers to all information obtained directly from human sources.
HUMINT sources may be overt or covert (clandestine); the most common categories include:
- Clandestine intelligence case officers. These officers are own-country individuals who operate under a clandestine “cover” to collect intelligence and “control” foreign agents to coordinate collections.
- Agents. These are foreign individuals with access to targets of intelligence who conduct clandestine collection operations as representatives of their controlling intelligence officers. These agents may be recruited or “walk-in” volunteers who act for a variety of ideological, financial, or personal motives.
- Émigrés, refugees, escapees, and defectors. The open, overt (yet discreet) programs to interview these recently arrived foreign individuals provide background information on foreign activities as well as occasional information on high-value targets.
- Third party observers. Cooperating third parties (e.g., third-party countries and travelers) can also provide a source of access to information.
The HUMINT discipline follows a rigorous, seven-step process for acquiring, employing, and terminating the use of human assets. The sequence followed by case officers includes:
- Spotting—locating, identifying, and securing low-level contact with agent candidates;
- Evaluation—assessment of the potential (i.e., value or risk) of the spotted individual, based on a background investigation;
- Recruitment—securing the commitment from the individual;
- Testing—evaluation of the loyalty of the agent;
- Training—supporting the agent with technical experience and tools;
- Handling—supporting and reinforcing the agent’s commitment;
- Termination—completion of the agent assignment by ending the relationship.
HUMINT is dependent upon the reliability of the individual source and lacks the collection control of technical sensors. Furthermore, the level of security to protect human sources often limits the fusion of HUMINT reports with other sources and the dissemination to wider customer bases. Directed high-risk HUMINT collections are generally viewed as a precious resource to be used for high-value targets to obtain information unobtainable by technical means or to validate hypotheses created by technical collection analysis.
2.3.2 Technical Intelligence Collection
Technical collection is performed by a variety of electronic (e.g., electromechanical, electro-optical, or bioelectronic) sensors placed on platforms in space, in the atmosphere, on the ground, and at sea to measure physical phenomena (observables) related to the subjects of interest (intelligence targets).
The operational utility of these collectors for each intelligence application depends upon several critical factors (a small illustrative record follows the list):
- Timeliness—the time from collection of event data to delivery of a tactical targeting cue, operational warnings and alerts, or formal strategic report;
- Revisit—the frequency with which a target of interest can be revisited to understand or model (track) dynamic behavior;
- Accuracy—the spatial, identity, or kinematic accuracy of estimates and predictions;
- Stealth—the degree of secrecy with which the information is gathered and the measure of intrusion required.
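As an illustration, the four factors can be captured in a simple record for comparing collectors; the field names, units, and the satellite example are assumptions for the sketch, not operational figures.

```python
from dataclasses import dataclass

@dataclass
class CollectorUtility:
    """Illustrative figure-of-merit record for one technical collector."""
    timeliness_hours: float  # collection-to-report latency
    revisit_per_day: float   # how often the target can be revisited
    accuracy_m: float        # spatial accuracy of estimates, in meters
    stealth: str             # qualitative degree of secrecy/intrusion

# Invented numbers for a hypothetical imaging satellite.
imaging_satellite = CollectorUtility(
    timeliness_hours=6.0,
    revisit_per_day=2.0,
    accuracy_m=5.0,
    stealth="overt overflight",
)
```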
2.4 Collection and Process Planning
The technical collection process requires the development of a detailed collection plan, which begins with the decomposition of the subject target into activities, observables, and then collection requirements.
From this plan, technical collectors are tasked and data is collected and fused (a composition, or reconstruction that is the dual of the decomposition process) to derive the desired intelligence about the target.
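A minimal sketch of the decomposition from target to activities to observables to collection taskings, with fusion understood as the dual composition; all names and values are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Observable:
    phenomenon: str  # e.g., thermal signature, RF emission
    collector: str   # sensor or source able to measure it

@dataclass
class Activity:
    name: str
    observables: List[Observable] = field(default_factory=list)

@dataclass
class CollectionPlan:
    target: str
    activities: List[Activity] = field(default_factory=list)

    def requirements(self) -> List[str]:
        """Decomposition: flatten target -> activities -> observables into taskings."""
        return [
            f"task {o.collector} to measure {o.phenomenon} ({a.name})"
            for a in self.activities
            for o in a.observables
        ]

plan = CollectionPlan(
    target="suspect production facility",
    activities=[
        Activity("plant operation", [
            Observable("thermal signature", "IR imager"),
            Observable("shipping traffic", "imagery"),
        ]),
    ],
)
for tasking in plan.requirements():
    print(tasking)
# Fusion is the dual composition: combining the collected measurements to
# reconstruct an estimate of the target's activities.
```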
2.5 KM in the Intelligence Process
The intelligence process must deal with large volumes of source data, converting a wide range of text, imagery, video, and other media types into organized information, then performing the analysis-synthesis process to deliver knowledge in the form of intelligence products.
IT is providing increased automation of the information indexing, discovery, and retrieval (IIDR) functions for intelligence, especially for the exponentially increasing volumes of global open-source data.
The functional information flow in an automated or semiautomated facility (depicted in Figure 2.5) requires digital archiving and analysis to ingest continuous streams of data and manage large volumes of analyzed data. The flow can be broken into three phases:
1. Capture and compile;
2. Preanalysis;
3. Exploitation (analysis-synthesis).
The preanalysis phase indexes each data item (e.g., article, message, news segment, image, book or chapter) by assigning a reference for storage; generating an abstract that summarizes the content of the item and metadata with a description of the source, time, reliability-confidence, and relationship to other items (abstracting); and extracting critical descriptors of content that characterize the contents (e.g., keywords) or meaning (deep indexing) of the item for subsequent analysis. Spatial data (e.g., maps, static imagery, or video imagery) must be indexed by spatial context (spatial location) and content (imagery content).
The indexing process applies standard subjects and relationships, maintained in a lexicon and thesaurus that is extracted from the analysis information base. Following indexing, data items are clustered and linked before entry into the analysis base. As new items are entered, statistical analyses are performed to monitor trends or events against predefined templates that may alert analysts or cue their focus of attention in the next phase of processing.
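A compressed sketch of preanalysis ingest (reference assignment, crude abstracting, keyword extraction against a lexicon) and retrieval; a real IIDR system would use a maintained thesaurus, deep indexing, and much richer metadata, so treat this only as an illustration.

```python
import itertools
from dataclasses import dataclass, field
from typing import Dict, List

# Toy "lexicon" of standard subjects; a real system maintains this in a thesaurus.
LEXICON = {"convoy", "missile", "factory", "election"}

@dataclass
class IndexedItem:
    ref: int       # storage reference assigned at ingest
    source: str
    abstract: str  # summary of content (here: just the first sentence)
    keywords: List[str] = field(default_factory=list)

class AnalysisBase:
    def __init__(self) -> None:
        self._counter = itertools.count(1)
        self.items: Dict[int, IndexedItem] = {}

    def ingest(self, source: str, text: str) -> IndexedItem:
        ref = next(self._counter)
        abstract = text.split(".")[0]                         # crude abstracting
        keywords = [w for w in LEXICON if w in text.lower()]  # crude deep indexing
        item = IndexedItem(ref, source, abstract, keywords)
        self.items[ref] = item
        return item

    def search(self, keyword: str) -> List[IndexedItem]:
        return [i for i in self.items.values() if keyword in i.keywords]

base = AnalysisBase()
base.ingest("news wire", "A convoy was seen near the factory. Details follow.")
print([item.ref for item in base.search("convoy")])  # -> [1]
```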
The categories of automated tools applied to the analysis information base include the following:
- Interactive search and retrieval tools permit analysts to search by content, topic, or related topics using the lexicon and thesaurus subjects.
- Structured judgment analysis tools provide visual methods to link data, synthesize deductive logic structures, and visualize complex relationships between data sets (a toy link-analysis sketch follows this list). These tools enable the analyst to hypothesize, explore, and discover subtle patterns and relationships in large data volumes—knowledge that can be discerned only when all sources are viewed in a common context.
- Modeling and simulation tools model hypothetical activities, allowing modeled (expected) behavior to be compared to evidence for validation or projection of operations under scrutiny.
- Collaborative analysis tools permit multiple analysts in related subject areas, for example, to collaborate on the analysis of a common subject.
- Data visualization tools present synthetic views of data and information to the analyst to permit patterns to be examined and discovered.
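As a toy illustration of the link-analysis idea behind structured judgment tools, the following builds a small evidence graph and enumerates association chains between two entities; the entities and links are invented for the example.

```python
from collections import defaultdict
from typing import Iterator, Tuple

# Evidence graph: entity -> set of directly associated entities.
links = defaultdict(set)

def link(a: str, b: str) -> None:
    links[a].add(b)
    links[b].add(a)

# Invented evidence items for the sketch.
link("front company A", "bank account 17")
link("bank account 17", "shipping agent B")
link("shipping agent B", "suspect facility")

def chains(start: str, goal: str, path: Tuple[str, ...] = ()) -> Iterator[Tuple[str, ...]]:
    """Enumerate simple association chains between two entities."""
    path = path + (start,)
    if start == goal:
        yield path
        return
    for nxt in links[start]:
        if nxt not in path:
            yield from chains(nxt, goal, path)

for chain in chains("front company A", "suspect facility"):
    print(" -> ".join(chain))
```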
2.6 Intelligence Process Assessments and Reengineering
The U.S. IC has been assessed throughout and since the close of the Cold War to study the changes necessary to adapt to advanced collection capabilities, changing security threats, and the impact of global information connectivity and information availability. Published results of these studies provide insight into the areas of intelligence effectiveness that may be enhanced by organizing the community into a KM enterprise. We focus here on the technical aspects of the changes rather than the organizational aspects recommended in numerous studies.
2.6.1 Balancing Collection and Analysis
Intelligence assessments have evaluated the utility of intelligence products and the balance of investment between collection and analysis.
2.6.2 Focusing Analysis-Synthesis
An independent study [21] of U.S. intelligence recommended a need for intelligence to sharpen the focus of analysis-synthesis resources to deal with the increased demands by policymakers for knowledge on a wider range of topics, the growing breadth of secret and open sources, and the availability of commercial open-source analysis.
2.6.3 Balancing Analysis-Synthesis Processes
One assessment conducted by the U.S. Congress reviewed the role of analysis-synthesis and the changes necessary for the community to reengineer its processes from a Cold War to a global awareness focus. Emphasizing the crucial role of analysis, the commission noted:
The raison d’etre of the Intelligence Community is to provide accurate and meaningful information and insights to consumers in a form they can use at the time they need them. If intelligence fails to do that, it fails altogether. The expense and effort invested in collecting and processing the information have gone for naught.
The commission identified the KM challenges faced by large-scale intelligence analysis that encompasses global issues and serves a broad customer base.
The commission’s major observations provide insight into the emphasis on people-related (rather than technology-related) issues that must be addressed for intelligence to be valued by the policy and decision makers that consume intelligence:
- Build relationships. A concerted effort is required to build relationships between intelligence producers and the policymakers they serve. Producer-consumer relationships range from assignment of intelligence liaison officers with consumers (the closest relationship and greatest consumer satisfaction) to holding regular briefings, or simple producer-subscriber relationships for general broadcast intelligence. Across this range of relationships, four functions must be accomplished for intelligence to be useful:
- Analysts must understand the consumer’s level of knowledge and the issues they face.
- Intelligence producers must focus on issues of significance and make information available when needed, in a format appropriate to the unique consumer.
- Consumers must develop an understanding of what intelligence can and—equally important—cannot do.
- Both consumer and producer must be actively engaged in a dialogue with analysts to refine intelligence support to decision making.
- Increase and expand the scope of analytic expertise. The expertise of the individual analysts and the community of analysts must be maintained at the highest level possible. This expertise is in two areas: domain, or region of focus (e.g., nation, group, weapon systems, or economics), and analytic-synthetic tradecraft. Expertise development should include the use of outside experts, travel to countries of study, sponsorship of topical conferences, and other means (e.g., simulations and peer reviews).
- Enhance use of open sources. Open-source data (i.e., publicly available data in electronic and broadcast media, journals, periodicals, and commercial databases) should be used to complement (cue, provide context, and in some cases, validate) special, or closed, sources. The analyst must have command of all available information and the means to access and analyze both categories of data in complementary fashion.
- Make analysis available to users. Intelligence producers must increasingly apply dynamic, electronic distribution means to reach consumers for collaboration and distribution. The DoD Joint Deployable Intelligence Support System (JDISS) and IC Intelink were cited as early examples of networked intelligence collaboration and distribution systems.
- Enhance strategic estimates. The United States produces national intelligence estimates (NIEs) that provide authoritative statements and forecast judgments about the likely course of events in foreign countries and their implications for the United States. These estimates must be enhanced to provide timely, objective, and relevant data on a wider range of issues that threaten security.
- Broaden the analytic focus. As the national security threat envelope has broadened (beyond the narrower focus of the Cold War), a more open, collaborative environment is required to enable intelligence analysts to interact with policy departments, think tanks, and academia to analyze, debate, and assess these new world issues.
In the half decade since the commission recommendations were published, the United States has implemented many of the recommendations. Several examples of intelligence reengineering include:
- Producer-consumer relationships. The introduction of collaborative networks, tools, and soft-copy products has permitted less formal interaction and more frequent exchange between consumers and producers. This allows intelligence producers to better understand consumer needs and decision criteria. This has enabled the production of more focused, timely intelligence.
- Analytic expertise. Enhancements in analytic training and the increased use of computer-based analytic tools and even simulation are providing greater experience—and therefore expertise—to human analysts.
- Open source. Increased use of open-source information via commercial providers (e.g., Lexis-Nexis™ subscription clipping services to tailored topics) and the Internet has provided an effective source for obtaining background information. This enables special sources and methods to focus on validation of critical implications.
- Analysis availability. The use of networks continues to expand for both collaboration (between analysts and consumers as well as between analysts) and distribution. This collaboration was enabled by the introduction and expansion of the classified Internet (Intelink) that interconnects the IC [24].
- Broadened focus. The community has coordinated open panels to discuss, debate, and collaboratively analyze and openly publish strategic perspectives of future security issues. One example is the “Global Trends 2015” report that resulted from a long-term collaboration with academia, the private sector, and topic area experts [25].
2.7 The Future of Intelligence
The two primary dimensions of future threats to national (and global) security include the source (from nation-state actors to non-state actors) and the threat-generating mechanism (from continuous results of rational nation-state behaviors to discontinuities in complex world affairs). These threat changes and the contrast in intelligence are summarized in Table 2.4. Notice that these changes coincide with the transition from sensor-centric to network- and knowledge-centric approaches to intelligence introduced in Chapter 1.
intelligence must focus on knowledge creation in an enterprise environment that is prepared to rapidly reinvent itself to adapt to emergent threats.
3
Knowledge Management Processes
KM is the term adopted by the business community in the mid 1990s to describe a wide range of strategies, processes, and disciplines that formalize and integrate an enterprise’s approach to organizing and applying its knowledge assets. Some have wondered what is truly new about the concept of managing knowledge. Indeed, many pure knowledge-based organizations (insurance companies, consultancies, financial management firms, futures brokers, and of course, intelligence organizations) have long “managed” knowledge—and such management processes have been the core competency of the business.
The scope of knowledge required by intelligence organizations has increased in depth and breadth as commerce has networked global markets and world threats have diversified from a monolithic Cold War posture. The global reach of networked information, both open and closed sources, has produced a deluge of data—requiring computing support to help human analysts sort, locate, and combine specific data elements to provide rapid, accurate responses to complex problems. Finally, the formality of the KM field has grown significantly in the past decade—developing theories for valuing, auditing, and managing knowledge as an intellectual asset; strategies for creating, reusing, and leveraging the knowledge asset; processes for conducting collaborative transactions of knowledge among humans and machines; and network information technologies for enabling and accelerating these processes.
3.1 Knowledge and Its Management
In the first chapter, we introduced the growing importance of knowledge as the central resource for competition in both the nation-state and in business. Because of this, the importance of intelligence organizations providing strategic knowledge to public- and private-sector decision makers is paramount. We can summarize this importance of intelligence to the public or private enterprise in three assertions about knowledge.
First, knowledge has become the central asset or resource for competitive advantage. In the Tofflers’ third wave, knowledge displaces capital, labor, and natural resources as the principal reserve of the enterprise. This is true in wealth creation by businesses and in national security and the conduct of warfare for nation-states.
Second, it is asserted that the management of the knowledge resource is more complex than other resources. The valuation and auditing of knowledge is unlike physical labor or natural resources; knowledge is not measured by “head counts” or capital valuation of physical inventories, facilities, or raw materials (like stockpiles of iron ore, fields of cotton, or petroleum reserves). New methods of quantifying the abstract entity of knowledge—both in people and in explicit representations—are required. In order to accomplish this complex challenge, knowledge managers must develop means to capture, store, create, and exchange knowledge, while dealing with the sensitive security issues of knowing when to protect and when to share (the trade-off between the restrictive “need to know” and the collaborative “need to share”).
The third assertion about knowledge is that its management therefore requires a delicate coordination of people, processes, and supporting technologies to achieve the enterprise objectives of security, stability, and growth in a dynamic world:
- People. KM must deal with cultures and organizational structures that enable and reward the growth of knowledge through collaborative learning, reasoning, and problem solving.
- Processes. KM must also provide an environment for exchange, discovery, retention, use, and reuse of knowledge across the organization.
- Technologies. Finally, IT must be applied to enable the people and processes to leverage the intellectual asset of actionable knowledge.
Definitions of KM as a formal activity are as diverse as its practitioners (Table 3.1), but all have in common the following general characteristics:
KM is based on a strategy that accepts knowledge as the central resource to achieve business goals and that knowledge—in the minds of its people, embedded in processes, and in explicit representations in knowledge bases—must be regarded as an intellectual form of capital to be leveraged. Organizational values must be coupled with the growth of this capital.
KM involves a process that, like a supply chain, moves from raw materials (data) toward knowledge products. The process is involved in acquiring (data), sorting, filtering, indexing and organizing (information), reasoning (analyzing and synthesizing) to create knowledge, and finally disseminating that knowledge to users. But this supply chain is not a “stovepiped” process (a narrow, vertically integrated and compartmented chain); it horizontally integrates the organization, allowing collaboration across all areas of the enterprise where knowledge sharing provides benefits.
KM embraces a discipline and cultural values that accept the necessity for sharing purpose, values, and knowledge across the enterprise to leverage group diversity and perspectives to promote learning and intellectual problem solving. Collaboration, fully engaged communication and cognition, is required to network the full intellectual power of the enterprise.
The U.S. National Security Agency (NSA) has adopted the following “people-oriented” definition of KM to guide its own intelligence efforts:
Strategies and processes to create, identify, capture, organize and leverage vital skills, information and knowledge to enable people to best accomplish the organizational mission.
The DoD has further recognized that KM is the critical enabler for information superiority:
The ability to achieve and sustain information superiority depends, in large measure, upon the creation and maintenance of reusable knowledge bases; the ability to attract, train, and retain a highly skilled work force proficient in utilizing these knowledge bases; and the development of core business processes designed to capitalize upon these assets.
The processes by which abstract knowledge results in tangible effects can be examined as a net of influences that affect knowledge creation and decision making.
The flow of influences in the figure illustrates the essential contributions of shared knowledge.
- Dynamic knowledge. At the central core is a comprehensive and dynamic understanding of the complex (business or national security) situation that confronts the enterprise. This understanding accumulates over time to provide a breadth and depth of shared experience, or organizational memory.
- Critical and systems thinking. Situational understanding and accumulated experience enable dynamic modeling to provide forecasts from current situations—supporting the selection of adapting organizational goals. Comprehensive understanding (perception) and thorough evaluation of optional courses of action (judgment) enhance decision making. As experience accumulates and situational knowledge is refined, critical explicit thinking and tacit sensemaking about current situations and the consequences of future actions are enhanced.
- Shared operating picture. Shared pictures of the current situation (common operating picture), past situations and outcomes (experience), and forecasts of future outcomes enable the analytic workforce to collaborate and self-synchronize in problem solving.
- Focused knowledge creation. Underlying these functions is a focused data and experience acquisition process that tracks and adapts as the business or security situation changes.
While Figure 3.1 maps the general influences of knowledge on goal setting, judgment, and decision making in an enterprise, an understanding of how knowledge influences a particular enterprise in a particular environment is necessary to develop a KM strategy. Such a strategy seeks to enhance organizational knowledge in these four basic areas, as well as the information security needed to protect those intellectual assets.
3.2 Tacit and Explicit Knowledge
In the first chapter, we offered a brief introduction to the hierarchical taxonomy of data, information, and knowledge, but here we must refine our understanding of knowledge and its structure before we delve into the details of management processes.
In this chapter, we distinguish between the knowledge-creation processes within the knowledge-creating hierarchy (Figure 3.2). The hierarchy illustrates the distinctions we make, in common terminology, between explicit (represented and defined) processes and those that are implicit (or tacit; knowledge processes that are unconscious and not readily articulated).
3.2.1 Knowledge As Object
The most common understanding of knowledge is as an object—the accumulation of things perceived, discovered, or learned. From this perspective, data (raw measurements or observations), information (data organized, related, and placed in context), and knowledge (information explained and the underlying processes understood) are also objects. The KM field has adopted two basic distinctions in the categories of knowledge as object:
- Explicit knowledge. This is the better known form of knowledge that has been captured and codified in abstract human symbols (e.g., mathematics, logical propositions, and structured and natural language). It is tangible, external (to the human), and logical. This documented knowledge can be stored, repeated, and taught by books because it is impersonal and universal. It is the basis for logical reasoning and, most important of all, it enables knowledge to be communicated electronically and reasoning processes to be automated.
- Tacit knowledge. This is the intangible, internal, experiential, and intuitive knowledge that is undocumented and maintained in the human mind. It is a personal knowledge contained in human experience. Philosopher Michael Polanyi pioneered the description of such knowledge in the 1950s, considering the results of Gestalt psychology and the philosophic conflict between moral conscience and scientific skepticism. In The Tacit Dimension, he describes a kind of knowledge that we cannot tell. This tacit knowledge is characterized by intangible factors such as perception, belief, values, skill, “gut” feel, intuition, “know-how,” or instinct; this knowledge is unconsciously internalized and cannot be explicitly described (or captured) without effort.
An understanding of the relationship between knowledge and mind is of particular interest to the intelligence discipline, because this understanding serves two purposes:
- Mind as knowledge manager. Understanding of the processes of exchanging tacit and explicit knowledge will, of course, aid the KM process itself. This understanding will enhance the efficient exchange of knowledge between mind and computer—between internal and external representations.
- Mind as intelligence target. Understanding of the complete human processes of reasoning (explicit logical thought) and sensemaking (tacit, emotional insight) will enable more representative modeling of adversarial thought processes. This is required to understand the human mind as an intelligence target—representing perceptions, beliefs, motives, and intentions.
Previously, we have used the terms resource and asset to describe knowledge, but knowledge is not only an object or a commodity to be managed. It can also be viewed as dynamic, embedded in the processes that lead to action. In the next section, we explore this complementary perspective of knowledge.
3.2.2 Knowledge As Process
Knowledge can also be viewed as the action, or dynamic process of creation, that proceeds from unstructured content to structured understanding. This perspective considers knowledge as action—as knowing. Because knowledge explains the basis for information, it relates static information to a dynamic reality. Knowing is uniquely tied to the creation of meaning.
Karl Weick introduced the term sensemaking to describe the tacit knowing process of retrospective rationality—the method by which individuals and organizations seek to rationally account for things by going back in time to structure events and explanations holistically. We do this to “make sense” of reality as we perceive it and to create a base of experience, shared meaning, and understanding.
To model and manage the knowing process of an organization requires attention to both of these aspects of knowledge—one perspective emphasizing cognition, the other emphasizing culture and context. The general knowing process includes four basic phases that apply to both tacit and explicit knowledge (that is, in human and computer terms, respectively).
- Acquisition. This process acquires knowledge by accumulating data through human observation and experience or technical sensing and measurement. The capture of e-mail discussion threads, point-of-sale transactions, or other business data, as well as digital imaging or signals analysis, are but a few examples of the wide diversity of acquisition methods.
- Maintenance. Acquired explicit data is represented in a standard form, organized, and stored for subsequent analysis and application in digital databases. Tacit knowledge is stored by humans as experience, skill, or expertise, though it can be elicited and converted to explicit form in terms of accounts, stories (rich explanations), procedures, or explanations.
- Transformation. The conversion of data to knowledge and knowledge from one form to another is the creative stage of KM. This knowledge-creation stage involves more complex processes like internalization, intuition, and conceptualization (for internal tacit knowledge) and correlation and analytic-synthetic reasoning (for explicit knowledge). In the next subsection, this process is described in greater detail.
- Transfer. The distribution of acquired and created knowledge across the enterprise is the fourth phase. Tacit distribution includes the sharing of experiences, collaboration, stories, demonstrations, and hands-on training. Explicit knowledge is distributed by mathematical, graphical, and textual representations, from magazines and textbooks to electronic media.
These phases can be compared with the three phases of organizational knowing (focusing on culture) described by Davenport and Prusak in their text Working Knowledge [17]:
- Generation. Organizational networks generate knowledge by social processes of sharing, exploring, and creating tacit knowledge (stories, experiences, and concepts) and explicit knowledge (raw data, organized databases, and reports). But these networks must be properly organized for diversity of both experience and perspective and placed under appropriate stress (challenge) to perform. Dedicated cross-functional teams, appropriately supplemented by outside experts and provided a suitable challenge, are the incubators for organizational knowledge generation.
- Codification and coordination. Codification explicitly represents generated knowledge and the structure of that knowledge by a mapping process. The map (or ontology) of the organization’s knowledge allows individuals within the organization to locate experts (tacit knowledge holders), databases (of explicit knowledge), and tacit-explicit networks. The coordination process models the dynamic flow of knowledge within the organization and allows the creation of narratives (stories) to exchange tacit knowledge across the organization.
- Transfer. Knowledge is transferred within the organization as people interact; this occurs as they are mentored, temporarily exchanged, transferred, or placed in cross-functional teams to experience new perspectives, challenges, or problem-solving approaches.
3.2.3 Knowledge Creation Model
Nonaka and Takeuchi describe four modes of knowledge conversion, derived from the possible exchanges between the two knowledge types (Figure 3.5); a schematic sketch of the resulting cycle follows the list:
- Tacit to tacit—socialization. Through social interactions, individuals within the organization exchange experiences and mental models, transferring the know-how of skills and expertise. The primary form of transfer is narrative—storytelling—in which rich context is conveyed and subjective understanding is compared, “reexperienced,” and internalized. Classroom training, simulation, observation, mentoring, and on-the-job training (practice) build experience; moreover, these activities also build teams that develop shared experience, vision, and values. The socialization process also allows consumers and producers to share tacit knowledge about needs and capabilities, respectively.
- Tacit to explicit—externalization. The articulation and explicit codification of tacit knowledge moves it from the internal to external. This can be done by capturing narration in writing, and then moving to the construction of metaphors, analogies, and ultimately models. Externalization is the creative mode where experience and concept are expressed in explicit concepts—and the effort to express is in itself a creative act. (This mode is found in the creative phase of writing, invention, scientific discovery, and, for the intelligence analyst, hypothesis creation.)
- Explicit to explicit—combination. Once explicitly represented, different objects of knowledge can be characterized, indexed, correlated, and combined. This process can be performed by humans or computers and can take on many forms. Intelligence analysts compare multiple accounts, cable reports, and intelligence reports regarding a common subject to derive a combined analysis. Military surveillance systems combine (or fuse) observations from multiple sensors and HUMINT reports to derive aggregate force estimates. Market analysts search (mine) sales databases for patterns of behavior that indicate emerging purchasing trends. Business developers combine market analyses, research and development results, and cost analyses to create strategic plans. These examples illustrate the diversity of the combination processes that combine explicit knowledge.
- Explicit to tacit—internalization. Individuals and organizations internalize knowledge by hands-on experience in applying the results of combination. Combined knowledge is tested, evaluated, and results in new tacit experience. New skills and expertise are developed and integrated into the tacit knowledge of individuals and teams.
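The four modes above form a repeating cycle: socialization to externalization to combination to internalization, and around again. The minimal Python sketch below walks that cycle; the mode names and conversions come from the list, while the function and data names are illustrative assumptions rather than anything prescribed by Nonaka and Takeuchi.

```python
# Schematic sketch of the knowledge spiral as a repeating cycle of conversion modes.
# Mode names and conversions follow the text; everything else is illustrative.
from itertools import cycle

SECI_MODES = [
    ("socialization",   "tacit -> tacit",       "share experience through dialogue and stories"),
    ("externalization", "tacit -> explicit",    "articulate experience as metaphors, models, hypotheses"),
    ("combination",     "explicit -> explicit", "index, correlate, and fuse explicit knowledge objects"),
    ("internalization", "explicit -> tacit",    "apply combined results hands-on to build new experience"),
]


def run_spiral(turns: int) -> None:
    """Walk the four modes 'turns' times; each full pass is one turn of the spiral."""
    for step, (mode, conversion, activity) in enumerate(cycle(SECI_MODES)):
        if step >= turns * len(SECI_MODES):
            break
        turn = step // len(SECI_MODES) + 1
        print(f"turn {turn}: {mode:<15} ({conversion}): {activity}")


if __name__ == "__main__":
    # e.g., the crisis cell in Section 3.3 completes one turn, then begins another
    run_spiral(turns=2)
```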
Nonaka and Takeuchi further showed how these four modes of conversion operate in an unending spiral sequence to create and transfer knowledge throughout the organization.
Organizations that have redundancy of information (in people, processes, and databases) and diversity in their makeup (also in people, processes, and databases) will enhance the ability to move along the spiral. The modes of activity benefit from a diversity of people: socialization requires some who are stronger in dialogue to elicit tacit knowledge from the team; externalization requires others who are skilled in representing knowledge in explicit forms; and internalization benefits from those who experiment, test ideas, and learn from experience, with the new concepts or hypotheses arising from combination.
Organizations can also benefit from creative chaos—changes that punctuate states of organizational equilibrium. These states include static presumptions, entrenched mindsets, and established processes that may have lost validity in a changing environment. Rather than destabilizing the organization, the injection of appropriate chaos can bring new-perspective reflection, reassessment, and renewal of purpose. Such change can restart tacit-explicit knowledge exchange, where the equilibrium has brought it to a halt.
3.3 An Intelligence Use Case Spiral
To illustrate the knowledge spiral, we follow a distributed crisis intelligence cell, using networked collaboration tools, through one complete cycle. This case is deliberately chosen because it stresses the spiral (no face-to-face interaction by the necessarily distributed team, very short time to interact, the temporary nature of the team, and no common “organizational” membership), yet it clearly illustrates the phases of tacit-explicit exchange and the practical insight into actual intelligence-analysis activities provided by the model.
3.3.1 The Situation
The crisis in small but strategic Kryptania emerged rapidly. Vital national interests—the security of U.S. citizens, U.S. companies and facilities, and the stability of the fledgling democratic state—were at stake. Subtle but cascading effects in the environmental, economic, and political domains triggered the small Political Liberation Front (PLF) to initiate overt acts of terrorism against U.S. citizens, facilities, and embassies in the region while seeking to overthrow the fledgling democratic government.
3.3.2 Socialization
Within 10 hours of the team formation, all members participate in an on-line SBU kickoff meeting (same-time, different-place teleconference collaboration) that introduces all members, describes the group’s intelligence charter and procedures, explains security policy, and details the use of the portal/collaboration workspace created for the team. The team leader briefs the current situation and the issues: areas of uncertainty, gaps in knowledge or collection, needs for information, and possible courses of events that must be better understood. The group is allowed time to exchange views and form their own subgroups on areas of contribution that each individual can bring to the problem. Individuals express concepts for new sources for collection and methods of analysis. In this phase, the dialogue of the team, even though not face to face, is invaluable in rapidly establishing trust and a shared vision for the critical task over the ensuing weeks of the crisis.
3.3.3 Externalization
The initial discussions lead to the creation of explicit models of the threat, developed by various team members and posted on the portal for all to see.
The team collaboratively reviews and refines these models by updating new versions (annotated by contributors) and suggesting new submodels (or linking these models into supermodels). This externalization process codifies the team’s knowledge (beliefs) and speculations (to be evaluated) about the threat. Once externalized, the team can apply the analytic tools on the portal to search for data, link evidence, and construct hypothesis structures. The process also allows the team to draw on support from resources outside the team to conduct supporting collections and searches of databases for evidence to affirm, refine, or refute the models.
3.3.4 Combination
The codified models become archetypes that represent current thinking—current prototype hypotheses formed by the group about the threat (who—their makeup; why—their perceptions, beliefs, intents, and timescales; what—their resources, constraints and limitations, capacity, feasible plans, alternative courses of action, vulnerabilities). This prototype-building process requires the group to structure its arguments about the hypotheses and combine evidence to support its claims. The explicit evidence models are combined into higher level explicit explanations of threat composition, capacity, and behavioral patterns.
Initial (tentative) intelligence products form in this phase, and the team begins to articulate these prototype products—resulting in alternative hypotheses and even recommended courses of action.
3.3.5 Internalization
As the evidentiary and explanatory models are developed on the portal, the team members discuss (and argue over) the details, internally struggling with acceptance or rejection of the validity of the various hypotheses. Individual team members search for confirming or refuting evidence in their own areas of expertise and discuss the hypotheses with others on the team or with colleagues in their domain of expertise (often expressing them in the form of stories or metaphors) to gauge support or refutation. This process allows the members to further refine and develop internal belief and confidence in the predictive aspects of the models. As accumulating evidence over the ensuing days strengthens (or refutes) the hypotheses, the team internalizes those explanations that have proved most accurate; the members also internalize confidence in the sources and collaborative processes that were most productive during this ramp-up phase of the crisis.
3.3.6 Socialization
As the group periodically reconvenes, the discussion shifts away from “what we must do” to the evidentiary and explanatory models that have been produced. The dialogue turns from issues of startup processes to model-refinement processes. The group now socializes around a new level of the problem. Gaps in the models, new problems revealed by the models, and changes in the evolving crisis move the spiral toward new challenges: creating knowledge about vulnerabilities in the PLF and its supporting networks, specific locations of black propaganda creation and distribution, the finances of certain funding organizations, and the identity of specific operational cells within the Kryptanian government.
3.3.7 Summary
This example illustrates the emergent processes of knowledge creation over the several-day ramp-up period of a distributed crisis intelligence team.
The full spiral moved from team members socializing to exchange the tacit knowledge of the situation toward the development of explicit representations of their tacit knowledge. These explicit models allowed other supporting resources to be applied (analysts external to the group and online analytic tools) to link further evidence to the models and structure arguments for (or against) the models. As the models developed, team members discussed, challenged, and internalized their understanding of the abstractions, developing confidence and hands-on experience as they tested them against emerging reports and discussed them with team members and colleagues. The confidence and internalized understanding then led to a drive for further dialogue—initializing a second cycle of the spiral.
3.4 Taxonomy of KM
Using the fundamental tacit-explicit distinctions, and the conversion processes of socialization, externalization, internalization, and combination, we can establish a helpful taxonomy of the processes, disciplines, and technologies of the broad KM field applied to the intelligence enterprise. A basic taxonomy that categorizes the breadth of the KM field can be developed by distinguishing three areas of distinct (though very related) activities:
- People. The foremost area of KM emphasis is on the development of intellectual capital by people and the application of that knowledge by those people. The principal knowledge-conversion process in this area is socialization, and the focus of improvement is on human operations, training, and human collaborative processes. The basis of collaboration is human networks, known as communities of practice—sharing purpose, values, and knowledge toward a common mission. The barriers that challenge this area of KM are cultural in nature.
- Processes. The second KM area focuses on human-computer interaction (HCI) and the processes of externalization and internalization. Tacit-explicit knowledge conversions have required the development of tacit-explicit representation aids in the form of information visualization and analysis tools, thinking aids, and decision support systems. This area of KM focuses on the efficient networking of people and machine processes (such autonomous support processes are referred to as agents) to enable the shared reasoning between groups of people and their agents through computer networks. The barrier to achieving robustness in such KM processes is the difficulty of creating a shared context of knowledge among humans and machines.
- Processors. The third KM area is the technological development and implementation of computing networks and processes to enable explicit-explicit combination. Network infrastructures, components, and protocols for representing explicit knowledge are the subject of this fast-moving field. The focus of this technology area is networked computation, and the challenges to collaboration lie in the ability to sustain growth and interoperability of systems and protocols.
Because the KM field can also be described by the many domains of expertise (or disciplines of study and practice), we can also distinguish five distinct areas of focus that help describe the field. The first two disciplines view KM as a competence of people and emphasize making people knowledgeable:
- Knowledge strategists. Enterprise leaders, such as the chief knowledge officer (CKO), focus on the enterprise mission and values, defining value propositions that assign contributions of knowledge to value (i.e., financial or operational). These leaders develop business models to grow and sustain intellectual capital and to translate that capital into organizational values (e.g., financial growth or organizational performance). KM strategists develop, measure, and reengineer business processes to adapt to the external (business or world) environment.
- Knowledge culture developers. Knowledge culture development and sustainment is promoted by those who map organizational knowledge and then create training, learning, and sharing programs to enhance the socialization performance of the organization. This includes the cadre of people who make up the core competencies of the organization (e.g., intelligence analysis, intelligence operations, and collection management). In some organizations a chief learning officer (CLO) is assigned this role to oversee enterprise human capital, just as the chief financial officer (CFO) manages (tangible) financial capital.
The next three disciplines view KM as an enterprise capability and emphasize building the infrastructure to make knowledge manageable:
- KM applications. Those who apply KM principles and processes to specific business applications create both processes and products (e.g., software application packages) to provide component or end-to-end services in a wide variety of areas listed in Table 3.10. Some commercial KM applications have been sufficiently modularized to allow them to be outsourced to application service providers (ASPs) [20] that “package” and provide KM services on a per-operation (transaction) basis. This allows some enterprises to focus internal KM resources on organizational tacit knowledge while outsourcing architecture, infrastructure, tools, and technology.
- Enterprise architecture. Architects of the enterprise integrate people, processes, and IT to implement the KM business model. The architecting process defines business use cases and process models to develop requirements for data warehouses, KM services, network infrastructures, and computation.
- KM technology and tools. Technologists and commercial vendors develop the hardware and software components that physically implement the enterprise. Table 3.10 provides only a brief summary of the key categories of technologies that make up this broad area that encompasses virtually all ITs.
3.5 Intelligence As Capital
We have described knowledge as a resource (or commodity) and as a process in previous sections. Another important perspective of both the resource and the process is that of the valuation of knowledge. The value (utility or usefulness) of knowledge is first and foremost quantified by its impact on the user in the real world.
The value of intelligence goes far beyond financial considerations in national and military intelligence (MI) applications. In these cases, the value of knowledge must be measured by its impact on national interests: the warning time to avert a crisis, the accuracy necessary to deliver a weapon, the completeness to back up a policy decision, or the evidential depth to support an organized-crime conviction. Knowledge, as an abstraction, has no intrinsic value—its value is measured by its impact in the real world.
In financial terms, the valuation of the intangible aspects of knowledge is referred to as capital—intellectual capital. These intangible resources include the personal knowledge, skills, processes, intellectual property, and relationships that can be leveraged to produce assets of equal or greater importance than other organizational resources (land, labor, and capital).
What is this capital value in our representative business? It comprises four intangible components:
- Customer capital. This is the value of established relationships with customers, such as trust and reputation for quality.
Intelligence tradecraft recognizes this form of capital as credibility with consumers—“the ability to speak to an issue with sufficient authority to be believed and relied upon by the intended audience.”
- Innovation capital. Innovation in the form of unique strategies, new concepts, processes, and products based on unique experience form this second category of capital. In intelligence, new and novel sources and methods for unique problems form this component of intellectual capital.
- Process capital. Methodologies and systems or infrastructure (also called structural capital) that are applied by the organization make up its process capital. Collection sources and both collection and analytic methods form a large portion of the intelligence organization’s process (and innovation) capital; they are often fragile (once discovered, they may be forever lost) and are therefore carefully protected.
- Human capital. The people, individually and in virtual organizations, comprise the human capital of the organization. Their collective tacit knowledge—expressed as dedication, experience, skill, expertise, and insight—forms this critical intangible resource.
O’Dell and Grayson have defined three fundamental categories of value propositions in If Only We Knew What We Know [23]:
- Operational excellence. These value propositions seek to boost revenue by reducing the cost of operations through increased operating efficiencies and productivity. These propositions are associated with business process reengineering (BPR), and even business transformation using electronic commerce methods to revolutionize the operational process. These efforts contribute operational value by raising performance in the operational value chain.
- Product-to-market excellence. These propositions value the reduction in the time to market from product inception to product launch. Efforts that achieve these values ensure that new ideas move to development and then to product by accelerating the product development process. This value emphasizes the transformation of the business itself (as explained in Section 1.1).
- Customer intimacy. These values seek to increase customer loyalty, customer retention, and customer base expansion by increasing intimacy (understanding, access, trust, and service anticipation) with customers. Actions that accumulate and analyze customer data to reduce selling cost while increasing customer satisfaction contribute to this proposition.
For each value proposition, specific impact measures must be defined to quantify the degree to which the value is achieved. These measures quantify the benefits and utility delivered to stakeholders. Using these measures, the value added by KM processes can be observed along the sequential processes in the business operation. This sequence of processes forms a value chain that adds value from raw materials to delivered product.
Different kinds of measures are recommended for organizations in transition from legacy business models. During periods of change, three phases are recognized [24]. In the first phase, users (i.e., consumers, collection managers, and analysts) must be convinced of the benefits of the new approach, and the measures include metrics as simple as the number of consumers taking training and beginning to use services. In the crossover phase, when users begin to transition to the systems, the measures change to usage metrics. Once the system approaches steady-state use, financial-benefit measures are applied. Numerous methods have been defined and applied to describe and quantify economic value (a small computational sketch of two of these follows the list), including:
- Economic value added (EVA) subtracts cost of capital invested from net operating profit;
- Portfolio management approaches treat IT projects as individual investments, computing risks, yields, and benefits for each component of the enterprise portfolio;
- Knowledge capital is an aggregate measure of management value added (by knowledge) divided by the price of capital [25];
- Intangible asset monitor (IAM) [26] computes value in four categories—tangible capital, intangible human competencies, intangible internal structure, and intangible external structure [27].
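Two of these measures reduce to simple arithmetic, as in the minimal Python sketch below. The enterprise figures and the 10% cost (and price) of capital are invented for the example; only the definitions (EVA as net operating profit minus the cost of capital invested, knowledge capital as value added divided by the price of capital) come from the list above.

```python
# Illustrative arithmetic for two of the valuation measures above (EVA and knowledge capital).
# All figures and rates are invented for the example.

def economic_value_added(net_operating_profit: float,
                         capital_invested: float,
                         cost_of_capital_rate: float) -> float:
    """EVA: net operating profit minus the cost of the capital invested."""
    return net_operating_profit - capital_invested * cost_of_capital_rate


def knowledge_capital(management_value_added: float,
                      price_of_capital_rate: float) -> float:
    """Knowledge capital: management value added (by knowledge) divided by the price of capital."""
    return management_value_added / price_of_capital_rate


if __name__ == "__main__":
    # Hypothetical enterprise figures (in $M) and a 10% cost/price of capital.
    print(economic_value_added(net_operating_profit=25.0,
                               capital_invested=120.0,
                               cost_of_capital_rate=0.10))   # -> 13.0
    print(knowledge_capital(management_value_added=8.0,
                            price_of_capital_rate=0.10))     # -> 80.0
```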
The four views of the balanced scorecard (BSC) not only provide a means of “balancing” the measurement of the major causes and effects of organizational performance but also provide a framework for modeling the organization.
3.6 Intelligence Business Strategy and Models
The commercial community has explored a wide range of business models that apply KM (in the widest sense) to achieve key business objectives. These objectives include enhancing customer service to provide long-term customer satisfaction and retention, expanding access to customers (introducing new products and services, expanding to new markets), increasing efficiency in operations (reduced cost of operations), and introducing new network-based goods and services (eCommerce or eBusiness). All of these objectives can be described by value propositions that couple with business financial performance.
The strategies that leverage KM to achieve these objectives fall into two basic categories. The first emphasizes the use of analysis to understand the value chain from first customer contact to delivery. Understanding the value added to the customer by the transactions (as well as delivered goods and services) allows the producer to increase value to the customer. Values that may be added to intelligence consumers by KM include:
- Service values. Greater value in services is provided to policymakers by anticipating their intelligence needs, earning greater user trust in the accuracy and focus of estimates and warnings, and providing more timely delivery of intelligence. Service value is also increased as producers personalize (tailor) and adapt services to the consumer’s interests (needs) as they change.
- Intelligence product values. The value of intelligence products is increased when greater value is “added” by improving accuracy, providing deeper and more robust rationale, focusing conclusions, and building increased consumer confidence (over time).
The second category of strategies (prompted by the eBusiness revolution) seeks to transform the value chain by the introduction of electronic transactions between the customer and retailer. These strategies use network-based advertising, ordering, and even delivery (for information services like banking, investment, and news) to reduce the “friction” of physical-world retailer-customer transactions.
These strategies introduce several benefits—all applicable to intelligence:
- Disintermediation. This is the elimination of intermediate processes and entities between the customer and producer to reduce transaction friction. This friction adds cost and increases the difficulty for buyers to locate sellers (cost of advertising), for buyers to evaluate products (cost of travel and shopping), for buyers to purchase products (cost of sales) and for sellers to maintain local inventories (cost of delivery). The elimination of “middlemen” (e.g., wholesalers, distributors, and local retailers) in eRetailers such as Amazon.com has reduced transaction and intermediate costs and allowed direct transaction and delivery from producer to customer with only the eRetailer in between. The effect of disintermediation in intelligence is to give users greater and more immediate access to intelligence products (via networks such as the U.S. Intelink) and to analysis services via intelligence portals that span all sources of intelligence.
- Infomediation. The effect of disintermediation has introduced the role of the information broker (infomediary) between customer and seller, providing navigation services (e.g., shopping agents or auctioning and negotiating agents) that act on the behalf of customers [31]. Intelligence communities are moving toward greater cross-functional collection management and analysis, reducing the stovepiped organization of intelligence by collection disciplines (i.e., imagery, signals, and human sources). As this happens, the traditional analysis role requires a higher level of infomediation and greater automation because the analyst is expected (by consumers) to become a broker across a wider range of intelligence sources (including closed and open sources).
- Customer aggregation. The networking of customers to producers allows rapid analysis of customer actions (e.g., queries for information, browsing through catalogs of products, and purchasing decisions based on information). This analysis enables the producers to better understand customers, aggregate their behavior patterns, and react to (and perhaps anticipate) customer needs. Commercial businesses use these capabilities to measure individual customer patterns and mass market trends to more effectively personalize and target sales and new product developments. Intelligence producers likewise are enabled to analyze warfighter and policymaker needs and uses of intelligence to adapt and tailor products and services to changing security threats.
These value chain transformation strategies have produced a simple taxonomy that distinguishes eBusiness models into four categories by the level of transaction between businesses and customers:
- Business to business (B2B). The large volume of trade between businesses (e.g., suppliers and manufacturers) has been enhanced by network-based transactions (releases of specifications, requests for quotations, and bid responses) reducing the friction between suppliers and producers. High-volume manufacturing industries such as the automakers are implementing B2B models to increase competition among suppliers and reduce bid-quote-purchase transaction friction.
- Business to customer (B2C). Direct networked outreach from producer to consumer has enabled the personal computer (e.g., Dell Computer) and book distribution (e.g., Amazon.com) industries to disintermediate local retailers and reach out on a global scale directly to customers. Similarly, intelligence products are now being delivered (pushed) to consumers on secure electronic networks, via subscription and express order services, analogous to the B2B model.
- Customer to business (C2B). Networks also allow customers to reach out to a wider range of businesses to gain greater competitive advantage in seeking products and services.
The introduction of secure intelligence networks and on-line intelligence product libraries (e.g., common operating picture and map and imagery libraries) allows consumers to pull intelligence from a broader range of sources. (This model enables even greater competition between source providers and provides a means of measuring some aspects of intelligence utility based on actual use of product types.)
- Customer to customer (C2C). The C2C model automates the mediation process between consumers, enabling consumers to locate those with similar purchasing-selling interests.
3.7 Intelligence Enterprise Architecture and Applications
Just like commercial businesses, intelligence enterprises:
- Measure and report to stakeholders the returns on investment. These returns are measured in terms of intelligence performance (i.e., knowledge provided, accuracy and timeliness of delivery, and completeness and sufficiency for decision making) and outcomes (i.e., effects of warnings provided, results of decisions based on knowledge delivered, and utility to set long-term policies).
- Service customers, the intelligence consumers. This is done by providing goods (intelligence products such as reports, warnings, analyses, and target folders) and services (directed collections and analyses or tailored portals on intelligence subjects pertinent to the consumers).
- Require intimate understanding of business operations and must adapt those operations to the changing threat environment, just as businesses must adapt to changing markets.
- Manage a supply chain that involves the anticipation of future needs of customers, the adjustment of the delivery of raw materials (intelligence collections), the production of custom products to a diverse customer base, and the delivery of products to customers just in time [33].
3.7.1 Customer Relationship Management
Customer relationship management (CRM) processes that build and maintain customer loyalty focus on managing the relationship between provider and consumer. The short-term goal is customer satisfaction; the long-term goal is loyalty. Intelligence CRM seeks to provide intelligence content to consumers that anticipates their needs, focuses on the specific information that supports their decision making, and provides drill down to the supporting rationale and data behind all conclusions. To accomplish this, the consumer-producer relationship must be fully described in models that include the following (a minimal data-model sketch follows the list):
- Consumer needs and uses of intelligence—applications of intelligence for decision making, key areas of customer uncertainty and lack of knowledge, and specific impact of intelligence on the consumer’s decision making;
- Consumer transactions—the specific actions that occur between the enterprise and intelligence consumers, including urgent requests, subscriptions (standing orders) for information, incremental and final report deliveries, requests for clarifications, and issuances of alerts.
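The two model elements above could be captured in a simple data model along the following lines. This is a speculative Python sketch: every field name (consumer_id, subscriptions, transaction kind, and so on) is an assumption made for illustration, not a schema prescribed by the text.

```python
# Hypothetical data model for the consumer-producer relationship: consumer needs/uses
# and the transactions between the enterprise and its intelligence consumers.
from dataclasses import dataclass, field
from datetime import datetime
from typing import List


@dataclass
class ConsumerProfile:
    consumer_id: str
    decisions_supported: List[str]      # applications of intelligence for decision making
    uncertainty_areas: List[str]        # key areas of uncertainty or lack of knowledge
    subscriptions: List[str] = field(default_factory=list)   # standing orders for information


@dataclass
class Transaction:
    consumer_id: str
    kind: str                           # e.g., "urgent_request", "report_delivery", "alert"
    subject: str
    timestamp: datetime = field(default_factory=datetime.now)


def interests_of(consumer: ConsumerProfile, history: List[Transaction]) -> List[str]:
    """Track changing interests by merging declared needs with recent transaction subjects."""
    recent = [t.subject for t in history if t.consumer_id == consumer.consumer_id]
    return sorted(set(consumer.uncertainty_areas) | set(recent))
```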
CRM offers the potential to personalize intelligence delivery to individual decision makers while tracking their changing interests as they browse subject offerings and issue requests through their own custom portals.
3.7.2 Supply Chain Management
The supply chain management (SCM) function monitors and controls the flow of the supply chain, providing internal control of planning, scheduling, inventory control, processing, and delivery.
SCM is the core of B2B business models, seeking to integrate front-end suppliers into an extended supply chain that optimizes the entire production process to slash inventory levels, improve on-time delivery, and reduce the order-to-delivery (and payment) cycle time. In addition to throughput efficiency, the B2B models seek to aggregate orders to leverage the supply chain and gain greater purchasing power, translating larger orders into reduced prices. The key impact measures sought by SCM implementations include the following (a computational sketch of two of them follows the list):
- Cash-to-cash cycle time (time from order placement to delivery/payment);
- Delivery performance (percentage of orders fulfilled on or before request date);
- Initial fill rate (percentage of orders shipped in supplier’s first shipment);
- Initial order lead time (supplier response time to fulfill order);
- On-time receipt performance (percentage of supplier orders received on time).
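As an illustration, the sketch below computes two of the listed measures, delivery performance and initial fill rate, over a handful of invented orders; the Order fields and sample dates are assumptions made only for the example.

```python
# Illustrative computation of two SCM impact measures listed above.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional


@dataclass
class Order:
    requested_by: date
    delivered_on: Optional[date]
    lines_ordered: int
    lines_in_first_shipment: int


def delivery_performance(orders: List[Order]) -> float:
    """Percentage of orders fulfilled on or before the requested date."""
    on_time = sum(1 for o in orders if o.delivered_on and o.delivered_on <= o.requested_by)
    return 100.0 * on_time / len(orders)


def initial_fill_rate(orders: List[Order]) -> float:
    """Percentage of ordered lines shipped in the supplier's first shipment."""
    ordered = sum(o.lines_ordered for o in orders)
    filled = sum(o.lines_in_first_shipment for o in orders)
    return 100.0 * filled / ordered


if __name__ == "__main__":
    history = [Order(date(2024, 3, 1), date(2024, 2, 28), 10, 9),
               Order(date(2024, 3, 5), date(2024, 3, 7), 4, 4)]
    print(delivery_performance(history))   # 50.0 (one of two orders on time)
    print(initial_fill_rate(history))      # ~92.9 (13 of 14 lines in first shipments)
```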
Like the commercial manufacturer, the intelligence enterprise operates a supply chain that “manufactures” all-source intelligence products from raw sources of intelligence data and relies on single-source suppliers (i.e., imagery, signals, or human reports).
3.7.3 Business Intelligence
The business intelligence (BI) function provides all levels of the organization with relevant information on internal operations and the external business environment (via marketing) to be exploited (analyzed and applied) to gain a competitive advantage. The BI function serves to provide strategic insight into overall enterprise operations based on ready access to operating data.
The emphasis of BI is on explicit data capture, storage, and analysis; through the 1990s, BI was the predominant driver for the implementation of corporate data warehouses, and the development of online analytic processing (OLAP) tools. (BI preceded KM concepts, and the subsequent introduction of broader KM concepts added the complementary need for capture and analysis of tacit and explicit knowledge throughout the enterprise.)
The intelligence BI function should collect and analyze real-time workflow data to answer questions such as the following (a minimal aggregation sketch follows the list):
- What are the relative volumes of requests (for intelligence) by type?
- What is the “cost” of each category of intelligence product?
- What are the relative transaction costs of each stage in the supply chain?
- What are the trends in usage (by consumers) of all forms of intelligence over the past 12 months? Over the past 6 months? Over the past week?
- Which single sources of incoming intelligence (e.g., SIGINT, IMINT, and MASINT) have greatest utility in all-source products, by product category?
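Questions of this kind might be answered with simple aggregations over workflow records, as in the minimal Python sketch below; the record fields ("type", "category", "cost") and the sample values are illustrative assumptions, not a defined workflow schema.

```python
# Minimal sketch of BI-style aggregation over hypothetical workflow records:
# request volume by type and average "cost" by product category.
from collections import Counter, defaultdict

requests = [   # hypothetical real-time workflow records
    {"type": "warning", "category": "force-estimate", "cost": 1200.0},
    {"type": "warning", "category": "force-estimate", "cost": 900.0},
    {"type": "target-folder", "category": "targeting", "cost": 4500.0},
]

# Relative volume of requests by type
volume_by_type = Counter(r["type"] for r in requests)

# Average "cost" of each category of intelligence product
cost_by_category = defaultdict(list)
for r in requests:
    cost_by_category[r["category"]].append(r["cost"])
avg_cost = {cat: sum(costs) / len(costs) for cat, costs in cost_by_category.items()}

print(volume_by_type)   # Counter({'warning': 2, 'target-folder': 1})
print(avg_cost)         # {'force-estimate': 1050.0, 'targeting': 4500.0}
```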
Like its commercial counterparts, the intelligence BI function should not only track the operational flows; it should also track the history of operational decisions—and their effects.
Both operational and decision-making data should be easy to navigate and analyze, providing timely operational insight to senior leadership, who often ask, “What is the cost of a pound of intelligence?”
3.8 Summary
KM provides a strategy and organizational discipline for integrating people, processes, and IT into an effective enterprise.
As noted by Tom Davenport, a leading observer of the discipline:
The first generation of knowledge management within enterprises emphasized the “supply side” of knowledge: acquisition, storage, and dissemination of business operations and customer data. In this phase knowledge was treated much like physical resources and implementation approaches focused on building “warehouses” and “channels” for supply processing and distribution. This phase paid great attention to systems, technology and infrastructure; the focus was on acquiring, accumulating and distributing explicit knowledge in the enterprise [35].
Second generation KM emphasis has turned attention to the demand side of the knowledge economy—seeking to identify value in the collected data to allow the enterprise to add value from the knowledge base, enhance the knowledge spiral, and accelerate innovation. This generation has brought more focus to people (the organization) and the value of tacit knowledge; the issues of sustainable knowledge creation and dissemination throughout the organization are emphasized in this phase. The attention in this generation has moved from understanding knowledge systems to understanding knowledge workers. The third generation to come may be that of KM innovation, in which the knowledge process is viewed as a complete life cycle within the organization, and the emphasis will turn to revolutionizing the organization and reducing the knowledge cycle time to adapt to an ever-changing world environment.
4
The Knowledge-Based Intelligence Organization
National intelligence organizations following World War II were characterized by compartmentalization (insulated specialization for security purposes) that required individual learning, critical analytic thinking, and problem solving by small, specialized teams working in parallel (stovepipes or silos). These stovepipes were organized under hierarchical organizations that exercised central control. The approach was appropriate for the centralized organizations and bipolar security problems of the relatively static Cold War, but the global breadth and rapid dynamics of twenty-first century intelligence problems require more agile networked organizations that apply organization-wide collaboration to replace the compartmentalization of the past. Founded on the virtues of integrity and trust, the disciplines of organizational collaboration, learning, and problem solving must be developed to support distributed intelligence collection, analysis, and production.
This chapter focuses on the most critical factor in organizational knowledge creation—the people, their values, and organizational disciplines. The chapter is structured to proceed from foundational virtues, structures, and communities of practice (Section 4.1) to the four organizational disciplines that support the knowledge creation process: learning, collaboration, problem solving, and best practices—called intelligence tradecraft.
The people perspective of KM presented in this chapter can be contrasted with the process and technology perspectives (Table 4.1) in five ways:
- Enterprise focus. The focus is on the values, virtues, and mission shared by the people in the organization.
- Knowledge transaction. Socialization, the sharing of tacit knowledge by methods such as story and dialogue, is the essential mode of transaction between people for collective learning, or collaboration to solve problems.
- The basis for human collaboration lies in shared purpose, values, and a common trust.
- A culture of trust develops communities that share their best practices and experiences; collaborative problem solving enables the growth of the trusting culture.
- The greatest barrier to collaboration is the inability of an organization’s culture to transform and embrace the sharing of values, virtues, and disciplines.
The numerous implementation failures of early-generation KM enterprises have most often occurred because organizations have not embraced the new business models introduced, nor have they used the new systems to collaborate. As a result, these KM implementations have failed to deliver the intellectual capital promised. These cases were generally not failures of process, technology, or infrastructure; rather, they were failures of organizational culture change to embrace the new organizational model. In particular, they failed to address the cultural barriers to organizational knowledge sharing, learning, and problem solving.
Numerous texts have examined these implementation challenges, and all have emphasized that organizational transformation must precede KM system implementations.
4.1 Virtues and Disciplines of the Knowledge-Based Organization
At the core of an agile knowledge-based intelligence organization is the ability to sustain the creation of organizational knowledge through learning and collaboration. Underlying effective collaboration are values and virtues that are shared by all. The U.S. IC, recognizing the need for such agility as its threat environment changes, has adopted knowledge-based organizational goals as the first two of five objectives in its Strategic Intent:
- Unify the community through collaborative processes. This includes the implementation of training and business processes to develop an inter-agency collaborative culture and the deployment of supporting technologies.
- Invest in people and knowledge. This area includes the assessment of customer needs and the conduct of events (training, exercises, experiments, and conferences/seminars) to develop communities of practice and build expertise in the staff to meet those needs. Supporting infrastructure developments include the integration of collaborative networks and shared knowledge bases.
Clearly identified organizational propositions of values and virtues (e.g., integrity and trust) shared by all enable knowledge sharing—and form the basis for organizational learning, collaboration, problem solving, and best-practices (intelligence tradecraft) development introduced in this chapter. This is a necessary precondition before KM infrastructure and technology are introduced to the organization. The intensely human values, virtues, and disciplines introduced in the following sections are essential and foundational to building an intelligence organization whose business processes are based on the value of shared knowledge.
4.1.1 Establishing Organizational Values and Virtues
The foundation of all organizational discipline (ordered, self-controlled, and structured behavior) is a common purpose and set of values shared by all. For an organization to pursue a common purpose, the individual members must conform to a common standard and a common set of ideals for group conduct.
The knowledge-based intelligence organization is a society that requires virtuous behavior of its members to enable collaboration. Dorothy Leonard-Barton, in Wellsprings of Knowledge, distinguishes two categories of values: those that relate to basic human nature and those that relate to performance of the task. In the first category are big V values (also called moral virtues) that include basic human traits such as personal integrity (consistency, honesty, and reliability), truthfulness, and trustworthiness. For the knowledge worker’s task, the second category (of little v values) includes those values long sought by philosophers to arrive at knowledge or justify true belief. Some epistemologies define intellectual virtue as the foundation of knowledge: Knowledge is a state of belief arising out of intellectual virtue. Intellectual virtues include organizational conformity to a standard of right conduct in the exchange of ideas, in reasoning and in judgment.
Organizational integrity depends upon the individual integrity of all contributors—as participants cooperate and collaborate around a central purpose, the virtue of trust (built upon the shared trustworthiness of individuals) opens the doors of sharing and exchange. Essential to this process is the development of networks of conversations that are built on communication transactions (e.g., assertions, declarations, queries, or offers) that are ultimately based in personal commitments. Ultimately, the virtue of organizational wisdom—seeking the highest goal by the best means—must be embraced by the entire organization in recognition of a common purpose.
Trust and cooperative knowledge sharing must also be complemented by an objective openness. Groups that place consensus over objectivity become subject to certain dangerous decision-making errors.
4.1.2 Mapping the Structures of Organizational Knowledge
Every organization has a structure and flow of knowledge—a knowledge environment or ecology (emphasizing the self-organizing and balancing characteristics of organizational knowledge networks). The overall process of studying and characterizing this environment is referred to as mapping—explicitly representing the network of nodes (competencies) and links (relationships, knowledge flow paths) within the organization. The fundamental role of KM organizational analysis is the mapping of knowledge within an existing organization.
The knowledge mapping identifies the intangible tacit assets of the organization. The mapping process is conducted by a variety of means: passive observation (where the analyst works within the community), active interviewing, formal questionnaires, and analysis. As an ethnographic research activity, the mapping analyst seeks to understand the unspoken, informal flows and sources of knowledge in the day-to-day operations of the organization. The five stages of mapping (Figure 4.1) must be conducted in partnership with the owners, users, and KM implementers.
The first phase is the definition of the formal organization chart—the formal flows of authority, command, reports, intranet collaboration, and information systems reporting. In this phase, the boundaries, or focus, of the mapping effort are established. The second phase audits (identifies, enumerates, and quantifies as appropriate) the following characteristics of the organization:
- Knowledge sources—the people and systems that produce and articulate knowledge in the form of conversation, developed skills, reports, implemented (but perhaps not documented) processes, and databases.
- Knowledge flowpaths—the flows of knowledge, tacit and explicit, formal and informal. These paths can be identified by analyzing the transactions between people and systems; the participants in the transactions provide insight into the organizational network structure by which knowledge is created, stored, and applied. The analysis must distinguish between seekers and providers of knowledge and their relationships (e.g., trust, shared understanding, or cultural compatibility) and mutual benefits in the transaction.
- Boundaries and constraints—the boundaries and barriers that control, guide, or constrict the creation and flow of knowledge. These may include cultural, political (policy), personal, or electronic system characteristics or incompatibilities.
- Knowledge repositories—the means of maintaining organizational knowledge, including tacit repositories (e.g., communities of experts that share experience about a common practice) and explicit storage (e.g., legacy hardcopy reports in library holdings, databases, or data warehouses).
In the third phase, the audit data is organized by clustering the categories of knowledge, nodes (sources and sinks), and links unique to the organization. The structure of this organization, usually a table or a spreadsheet, provides insight into the categories of knowledge, transactions, and flow paths; it also provides a format to review with organization members to convey initial results, make corrections, and refine the audit. This phase also provides the foundation for quantifying the intellectual capital of the organization, and the audit categories should follow the categories of the intellectual capital accounting method adopted.
The fourth phase, mapping, transforms the organized data into a structure (often, but not necessarily, graphical) that explicitly identifies the current knowledge network. Explicit and tacit knowledge flows and repositories are distinguished, as well as the social networks that support them. This process of visualizing the structure may also identify clusters of expertise, gaps in the flows, chokepoints, as well as areas of best (and worst) practices within the network.
Once the organization’s current structure is understood, it can be compared to similar structures in other organizations by benchmarking in the final phase. Benchmarking is the process of identifying, learning, and adapting outstanding practices and processes from any organization, anywhere in the world, to help an organization improve its performance. Benchmarking gathers the tacit knowledge—the know-how, judgments, and enablers—that explicit knowledge often misses. This process allows quantitative performance data and qualitative best-practice knowledge to be shared and compared with similar organizations to explore areas for potential improvement and potential risks.
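One plausible way to represent the resulting map is as a graph of nodes and flow links, as in the schematic Python sketch below; the node names, the flows, and the naive chokepoint heuristic are invented for illustration and are not part of the mapping method described in the text.

```python
# Schematic sketch of a knowledge map as a graph of nodes (sources, repositories,
# communities) and links (knowledge flow paths), with a naive check for chokepoints and gaps.
from collections import defaultdict

nodes = {"imagery-analysts", "signals-analysts", "all-source-cell",
         "report-warehouse", "consumer-portal"}

flows = [  # (provider, seeker) knowledge transactions identified in the audit phase
    ("imagery-analysts", "all-source-cell"),
    ("signals-analysts", "all-source-cell"),
    ("all-source-cell", "report-warehouse"),
    ("report-warehouse", "consumer-portal"),
]

out_links = defaultdict(set)
in_links = defaultdict(set)
for provider, seeker in flows:
    out_links[provider].add(seeker)
    in_links[seeker].add(provider)

# Naive chokepoint heuristic: a node many flows pass through (several in-links plus
# onward out-links) deserves attention in the mapping and benchmarking phases.
chokepoints = [n for n in nodes if len(in_links[n]) >= 2 and len(out_links[n]) >= 1]
gaps = [n for n in nodes if not in_links[n] and not out_links[n]]   # isolated competencies

print("chokepoints:", chokepoints)   # ['all-source-cell']
print("gaps:", gaps)                 # []
```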
Because such a knowledge repository provides pointers to the originating authors, it also serves as a directory that identifies people within the agency with experience and expertise by subject.
4.1.3 Identifying Communities of Organizational Practice
A critical result of any mapping analysis is the identification of the clusters of individuals who constitute formal and informal groups that create, share, and maintain tacit knowledge on subjects of common interest.
The functional workgroup benefits from stability, established responsibilities, processes and storage, and high potential for sharing. Functional workgroups provide the high-volume knowledge production of the organization but lack the agility to respond to projects and crises.
Cross-functional project teams are shorter term project groups that can be formed rapidly (and dismissed just as rapidly) to solve special intelligence problems, maintain special surveillance watches, prepare for threats, or respond to crises. These groups include individuals from all appropriate functional disciplines—with the diversity often characteristic of the makeup of the larger organization, but on a small scale—with reach back to expertise in functional departments.
KM researchers have recognized that such organized communities provide a significant contribution to organizational learning by providing a forum for:
- Sharing current problems and issues;
- Capturing tacit experience and building repositories of best practices;
- Linking individuals with similar problems, knowledge, and experience;
- Mentoring new entrants to the community and other interested parties.
Because participation in communities of practice is based on individual interest, not organizational assignment, these communities may extend beyond the duration of temporary assignments and cut across organizational boundaries.
The activities of working, learning, and innovating have traditionally been treated as independent (and conflicting) activities performed in the office, in the classroom, and in the lab. However, studies by John Seely Brown, chief scientist of Xerox PARC, have indicated that once these activities are unified in communities of practice, they have the potential to significantly enhance knowledge transfer and creation.
4.1.4 Initiating KM Projects
The knowledge mapping and benchmarking process must precede implementation of KM initiatives, forming the understanding of current competencies and processes and the baseline for measuring any benefits of change. KM implementation plans within intelligence organizations generally consider four components, framed by the kind of knowledge being addressed and the areas of investment in KM initiatives:
- Organizational competencies. The first area includes assessment of workforce competencies, which forms the basis of an intellectual capital audit of human capital. It also includes the capture of best practices (the intelligence business processes, or tradecraft) and the development of core competencies through training and education.
- Social collaboration. Initiatives in this area reinforce established face-to-face communities of practice and develop new communities. These activities enhance the socialization process through meetings and media (e.g., newsletters, reports, and directories).
- KM networks. Infrastructure initiatives implement networks (e.g., corporate intranets) and processes (e.g., databases, groupware, applications, and analytic tools) to provide for the capture and exchange of explicit knowledge.
- Virtual collaboration. The emphasis in this area is applying technology to create connectivity among and between communities of practice. Intranets and collaboration groupware (discussed in Section 4.3.2) enable collaboration at different times and places for virtual teams—and provide the ability to identify and introduce communities with similar interests that may be unaware of each other.
4.1.5 Communicating Tacit Knowledge by Storytelling
The KM community has recognized the strength of narrative communication—dialogue and storytelling—to communicate the values, emotion (feelings, passion), and sense of immersed experience that make up personalized, tacit knowledge.
The introduction of KM initiatives can bring significant organizational change because it may require cultural transitions in several areas:
- Changes in purpose, values, and collaborative virtues;
- Construction of new social networks of trust and communication;
- Organizational structure changes (networks replace hierarchies);
- Business process agility, resulting in a new culture of continual change (training to adopt new procedures and to create new products).
All of these changes require participation by the workforce and the communication of tacit knowledge across the organization.
Storytelling provides a complement to abstract, analytical thinking and communication, allowing humans to share experience, insight, and issues (e.g., unarticulated concerns about evidence expressed as “negative feelings,” or general “impressions” about repeated events not yet explicitly defined as threat patterns).
The organic school of KM that applies storytelling to cultural transformation emphasizes a human behavioral approach to organizational socialization, accepting the organization as a complex ecology that may be changed in a large way by small effects.
These effects include the use of a powerful, effective story that communicates in a way that spreads credible tacit knowledge across the entire organization.
This school classifies tacit knowledge into artifacts, skills, heuristics, experience, and natural talents (the so-called ASHEN classification of tacit knowledge) and categorizes an organization’s tacit knowledge in these classes to understand the flow within informal communities.
Nurturing informal sharing within secure communities of practice and distinguishing such sharing from formal sharing (e.g., shared data, best practices, or eLearning) enables the rich exchange of tacit knowledge when creative ideas are fragile and emergent.
4.2 Organizational Learning
Senge asserted that the fundamental distinction between traditional controlling organizations and adaptive self-learning organizations lies in five key disciplines, including both virtues (commitment to personal and team learning, vision sharing, and organizational trust) and skills (developing holistic thinking, team learning, and tacit mental model sharing). Senge’s core disciplines, moving from the individual to organizational disciplines, included:
Personal mastery. Individuals must be committed to lifelong learning toward the end of personal and organization growth. The desire to learn must be to seek a clarification of one’s personal vision and role within the organization.
Systems thinking. Senge emphasized holistic thinking, the approach for high-level study of life situations as complex systems. An element of learning is the ability to study interrelationships within complex dynamic systems and explore and learn to recognize high-level patterns of emergent behavior.
Mental models. Senge recognized the importance of tacit knowledge (mental, rather than explicit, models) and its communication through the process of socialization. The learning organization builds shared mental models by sharing tacit knowledge in the storytelling process and the planning process. Senge emphasized planning as a tacit-knowledge-sharing process that causes individuals to envision, articulate, and share solutions—creating a common understanding of goals, issues, alternatives, and solutions.
Shared vision. The organization that shares a collective aspiration must learn to link together personal visions without conflicts or competition, creating a shared commitment to a common organizational goal set.
Team learning. Finally, a learning organization acknowledges and understands the diversity of its makeup—and adapts its behaviors, patterns of interaction, and dialogue to enable growth in personal and organizational knowledge.
It is important, here, to distinguish the kind of transformational learning that Senge was referring to (which brings cultural change across an entire organization), from the smaller scale group learning that takes place when an intelligence team or cell conducts a long-term study or must rapidly “get up to speed” on a new subject or crisis.
4.2.1 Defining and Measuring Learning
The process of group learning and personal mastery requires the development of both reasoning and emotional skills. The level of learning achievement can be assessed by the degree to which those skills have been acquired.
The taxonomy of cognitive and affective skills can be related to explicit and tacit knowledge categories, respectively, to provide a helpful scale for measuring the level of knowledge achieved by an individual or group on a particular subject.
4.2.2 Organizational Knowledge Maturity Measurement
The goal of organizational learning is the development of maturity at the organizational level—a measure of the state of an organization’s knowledge about its domain of operations and its ability to continuously apply that knowledge to increase corporate value to achieve business goals.
The Carnegie Mellon University Software Engineering Institute has defined the People Capability Maturity Model® (P-CMM®), which distinguishes five levels of organizational maturity that can be measured to assess and quantify the maturity of the workforce and its organizational KM performance. The P-CMM® framework can be applied, for example, to an intelligence organization’s analytic unit to measure current maturity and to develop a strategy for reaching higher levels of performance. The levels are successive plateaus of practice, each building on the preceding foundation.
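A minimal sketch of how such a maturity measurement might be recorded is shown below. The five level names follow the published P-CMM®; the idea that an organization sits at the highest satisfied plateau reflects the “successive plateaus” description above, but the scoring rule itself is an invented simplification, not the SEI appraisal method.

```python
# Toy illustration: the five level names follow the published P-CMM;
# the "successive plateau" rule below is a simplification, not the SEI appraisal.
P_CMM_LEVELS = {1: "Initial", 2: "Managed", 3: "Defined",
                4: "Predictable", 5: "Optimizing"}

def maturity_level(satisfied: set[int]) -> int:
    """Highest level reached, counting only successive plateaus above level 1."""
    level = 1
    while (level + 1) in satisfied:
        level += 1
    return level

satisfied_plateaus = {2, 3}          # invented assessment result
level = maturity_level(satisfied_plateaus)
print(f"Assessed maturity: Level {level} ({P_CMM_LEVELS[level]})")
```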
4.2.3 Learning Modes
4.2.3.1 Informal Learning
We gain experience by informal modes of learning on the job alone, with mentors, with team members, or while mentoring others. The methods of informal learning are as broad as the methods of exchanging knowledge introduced in the last chapter. But the essence of the learning organization is the ability to translate what has been learned into changed organizational behavior. David Garvin has identified five fundamental organizational methodologies that are essential for translating the feedback from learning into change; all have direct application in an intelligence organization.
- Systematic problem solving. Organizations require a clearly defined methodology for describing and solving problems, and then for implementing the solutions across the organization. Methods for acquiring and analyzing data, synthesizing hypotheses, and testing new ideas must be understood by all to permit collaborative problem solving. The process must also allow for the communication of lessons learned and best practices developed (the intelligence tradecraft) across the organization.
- Experimentation. As the external environment changes, the organization must be enabled to explore changes in the intelligence process. This is done by conducting experiments that take excursions from the normal processes to attack new problems and evaluate alternative tools and methods, data sources, or technologies. A formal policy to encourage experimentation, with the acknowledgment that some experiments will fail, allows new ideas to be tested, adapted, and adopted in the normal course of business, not as special exceptions. Experimentation can be performed within ongoing programs (e.g., use of new analytic tools by an intelligence cell) or in demonstration programs dedicated to exploring entirely new ways of conducting analysis (e.g., the creation of a dedicated Web-based pilot project independent of normal operations and dedicated to a particular intelligence subject domain).
- Internal experience. As collaborating teams solve a diversity of intelligence problems, experimenting with new sources and methods, the lessons that are learned must be exchanged and applied across the organization. This process of explicitly codifying lessons learned and making them widely available for others to adopt seems trivial, but in practice requires significant organizational discipline. One of the great values of communities of common practice is their informal exchange of lessons learned; organizations need such communities and must support formal methods that reach beyond these communities. Learning organizations take the time to elicit the lessons from project teams and explicitly record (index and store) them for access and application across the organization. Such databases allow users to locate teams with similar problems and lessons learned from experimentation, such as approaches that succeeded and failed, expected performance levels, and best data sources and methods.
- External sources of comparison. While the lessons learned just described apply to self-learning, intelligence organizations must look to external sources (in the commercial world, academia, and other cooperating intelligence organizations) to gain different perspectives and experiences not possible within their own organizations. A wide variety of methods can be employed to secure knowledge from external perspectives, such as making acquisitions (in the business world), establishing strategic relationships, using consultants, and forming consortia. The process of sharing, then critically comparing, qualitative and quantitative data about processes and performance across organizations (or units within a large organization) enables leaders and process owners to objectively review the relative effectiveness of alternative approaches. Benchmarking is the process of improving performance by continuously identifying, understanding, and adapting outstanding practices and processes found inside and outside the organization [23]. The benchmarking process is an analytic process that requires compared processes to be modeled, quantitatively measured, deeply understood, and objectively evaluated. The insight gained is an understanding of how best performance is achieved; the knowledge is then leveraged to predict the impact of improvements on overall organizational performance.
- Transferring knowledge. Finally, an intelligence organization must develop the means to transfer people (tacit transfer of skills, experience, and passion by rotation, mentoring, and integrating process teams) and processes (explicit transfer of data, information, business processes on networks) within the organization. In Working Knowledge [24], Davenport and Prusak point out that spontaneous, unstructured knowledge exchange (e.g., discussions at the water cooler, exchanges among informal communities of interest, and discussions at periodic knowledge fairs) is vital to an organization’s success, and the organization must adopt strategies to encourage such sharing.
4.2.3.2 Formal Learning
In addition to informal learning, formal modes provide the classical introduction to subject-matter knowledge.
Information technologies have enabled four distinct learning modes that are defined by distinguishing both the time and space of interaction between the learner and the instructor.
- Residential learning (RL). Traditional residential learning places the students and instructor in the physical classroom at the same time and place. This proximity allows direct interaction between the student and instructor and allows the instructor to tailor the material to the students.
- Distance learning remote (DL-remote). Remote distance learning provides live transmission of the instruction to multiple, distributed locations. The mode effectively extends the classroom across space to reach a wider student audience. Two-way audio and video can permit limited interaction between extended classrooms and the instructor.
- Distance learning canned (DL-canned). This mode simply packages (or cans) the instruction in some medium for later presentation at the student’s convenience (e.g., traditional hardcopy texts, recorded audio or video, or softcopy materials on compact discs). DL-canned materials include computer-based training courseware that has built-in features to interact with the student to test comprehension, adaptively present material to meet a student’s learning style, and link to supplementary materials on the Internet.
- Distance learning collaborative (DL-collaborative). The collaborative mode of learning (often described as e-learning) integrates canned material while allowing on-line asynchronous interaction between the student and the instructor (e.g., via e-mail, chat, or videoconference). Collaboration may also occur between the student and software agents (personal coaches) that monitor progress, offer feedback, and recommend effective paths to on-line knowledge.
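Since the four modes above are distinguished by whether instructor and student share time and place (and whether two-way interaction is available), they can be indexed mechanically. The sketch below is a simplification of the text for illustration only.

```python
# Sketch: indexing the four formal learning modes by shared time, shared place,
# and whether two-way interaction is available; a simplification of the text.
def learning_mode(same_time: bool, same_place: bool, interactive: bool) -> str:
    if same_time and same_place:
        return "Residential learning (RL)"
    if same_time:
        return "Distance learning, remote (DL-remote)"
    return ("Distance learning, collaborative (DL-collaborative)"
            if interactive else "Distance learning, canned (DL-canned)")

print(learning_mode(True, True, True))     # RL: same time and place
print(learning_mode(True, False, True))    # DL-remote: live, different places
print(learning_mode(False, False, False))  # DL-canned: packaged media
print(learning_mode(False, False, True))   # DL-collaborative: asynchronous interaction
```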
4.3 Organizational Collaboration
The knowledge-creation process of socialization occurs as communities (or teams) of people collaborate (commit to communicate, share, and diffuse knowledge) to achieve a common purpose.
Collaboration is a stronger term than cooperation because participants are formed around and committed to a common purpose, and all participate in shared activity to achieve the end. If a problem is parsed into independent pieces (e.g., financial analysis, technology analysis, and political analysis), cooperation may be necessary—but not collaboration. At the heart of collaboration is intimate participation by all in the creation of the whole—not in cooperating to merely contribute individual parts to the whole.
Collaboration is widely believed to enable teams to perform a wide range of functions together:
- Coordinate tasking and workflow to meet shared goals;
- Share information, beliefs, and concepts;
- Perform cooperative problem-solving analysis and synthesis;
- Perform cooperative decision making;
- Author team reports of decisions and rationale.
This process of collaboration requires a team (two or more) of individuals that shares a common purpose, enjoys mutual respect and trust, and has an established process to allow the collaboration process to take place. Four levels (or degrees) of intelligence collaboration can be distinguished, moving toward increasing degrees of interaction and dependence among team members.
Sociologists have studied the sequence of collaborative groups as they move from inception to decision commitment. Decision emergence theory (DET) defines four stages of collaborative decision making within an individual group: orientation of all members to a common perspective; conflict, during which alternatives are compared and competed; emergence of collaborative alternatives; and finally reinforcement, when members develop consensus and commitment to the group decisions.
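The DET stage sequence can be expressed as a simple ordered progression. The sketch below is illustrative only; it encodes the four stages and the assumption that a group moves through them strictly in order.

```python
# Sketch of the DET stage sequence; the strict ordering is illustrative only.
DET_STAGES = ("orientation", "conflict", "emergence", "reinforcement")

def advance(current: str) -> str | None:
    """Return the stage that follows `current`, or None once reinforcement is reached."""
    i = DET_STAGES.index(current)
    return DET_STAGES[i + 1] if i + 1 < len(DET_STAGES) else None

stage = "orientation"
while stage is not None:
    print(stage)
    stage = advance(stage)
```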
4.3.1 Collaborative Culture
First among the means to achieve collaboration is the creation of a collaborating culture—a culture that shares the belief that collaboration (as opposed to competition or other models) is the best approach to achieve a shared goal and that shares a commitment to collaborate to achieve organizational goals.
The collaborative culture must also recognize that teams are heterogeneous in nature. Team members have different tacit (experience, personality style) and cognitive (reasoning style) preferences that influence their unique approach to participating in the collaborative process.
The mix of personalities within a team must be acknowledged and rules of collaborative engagement (and even groupware) must be adapted to allow each member to contribute within the constraints and strengths of their individual styles.
Collaboration facilitators may use Myers-Briggs or other categorization schemes to analyze a particular team’s structure to assess the team’s strengths, weaknesses, and overall balance.
4.3.2 Collaborative Environments
Collaborative environments describe the physical, temporal, and functional setting within which organizations interact.
4.3.3 Collaborative Intelligence Workflow
The representative team includes:
Intelligence consumer. The State Department personnel requesting the analysis define high-level requirements and are the ultimate customers for the intelligence product. They specify what information is needed: the scope or breadth of coverage, the level of depth, the accuracy required, and the timeframe necessary for policy making.
All-source analytic cell. The all-source analysis cell, which may be a distributed virtual team across several different organizations, has the responsibility to produce the intelligence product and certify its accuracy.
Single-source analysts. Open-source and technical-source analysts (e.g., imagery, signals, or MASINT) are specialists that analyze the raw data collected as a result of special tasking; they deliver reports to the all- source team and certify the conclusions of special analysis.
Collection managers. The collection managers translate all-source requests for essential information (e.g., surveillance of shipping lines, identification of organizations, or financial data) into specific collection tasks (e.g., schedules, collection parameters, and coordination between different sources). They provide the all-source team with a status of their ability to satisfy the team’s requests.
4.3.3.3 The Collaboration Paths
- Problem statement. Interacting with the all-source analytic leader (LDR)—and all-source analysts on the analytic team—the problem is articulated in terms of scope (e.g., area of world, focus nations, and expected depth and accuracy of estimates), needs (e.g., specific questions that must be answered and policy issues), urgency (e.g., time to first results and final products), and expected format of results (e.g., product as emergent results portal or softcopy document).
- Problem refinement. The analytic leader (LDR) frames the problem with an explicit description of the consumer requirements and intelligence reporting needs. This description, once approved by the consumer, forms the terms of reference for the activity. The problem statement-refinement loop may be iterated as the situation changes or as intelligence reveals new issues to be studied.
- Information requests to collection tasking. Based on the requirements, the analytic team decomposes the problem to deduce specific elements of information needed to model and understand the level of trafficking. (The decomposition process was described earlier in Section 2.4.) The LDR provides these intelligence data requirements to the collection manager (CM) to prepare a collection plan. This planning requires the translation of information needs into a coordinated set of data-collection tasks for humans and technical collection systems. The CM prepares a collection plan that traces planned collection data and means to the analytic team’s information requirements.
- Collection refinement. The collection plan is fed back to the LDR to allow the analytic team to verify the completeness and sufficiency of the plan—and to allow a review of any constraints (e.g., limits to coverage, depth, or specificity) or the availability of previously collected relevant data. The information request–collection planning and refinement loop iterates as the situation changes and as the intelligence analysis proceeds. The value of different sources, the benefits of coordinated collection, and other factors are learned by the analytic team as the analysis proceeds, causing adjustments to the collection plan to satisfy information needs.
- Cross cueing. The single-source analysts acquire data by searching existing archived data and open sources and by receiving data produced by special collections tasked by the CM. Single-source analysts perform source-unique analysis (e.g., imagery analysis; open-source foreign news report, broadcast translation, and analysis; and human report analysis). As the single-source analysts gain an understanding of the timing of event data and the relationships between data observed across the two domains, they share these temporal and functional relationships. The cross-cueing collaboration includes one analyst cueing the other to search for corroborating evidence in another domain; one analyst cueing the other to a possible correlated event; or both analysts recommending tasking for the CM to coordinate a special collection to obtain time- or functionally correlated data on a specific target. It is important to note that this cross-cueing collaboration, shown here at the single-source analysis level, is also performed within the all-source analysis unit (8), where more subtle cross-source relations may be identified.
- Single-source analysis reporting. Single-source analysts report the interim results of analysis to the all-source team, describing the emerging picture of the trafficking networks as well as gaps in information. This path provides the all-source team with an awareness of the progress and contribution of collections, and the added value of the analysis that is delivering an emerging trafficking picture.
- Single-source analysis refinement. The all-source team can provide direction for the single-source analysts to focus (“Look into that organization in greater depth”), broaden (“Check out the neighboring countries for similar patterns”), or change (“Drop the study of those shipping lines and focus on rail transport”) the emphasis of analysis and collection as the team gains a greater understanding of the subject. This reporting-refinement collaboration (paths 6 and 7, respectively) precedes publication of analyzed data (e.g., annotated images, annotated foreign reports on trafficking, maps of known and suspect trafficking routes, and lists of known and suspect trafficking organizations) into the analysis base.
- All-source analysis collaboration. The all-source team may allocate components of the trafficking-analysis task to individuals with areas of subject matter specialties (e.g., topical components might include organized crime, trafficking routes, finances, and methods), but all contribute to the construction of a single picture of illegal trafficking. The team shares raw and analyzed data in the analysis base, as well as the intelligence products in progress in a collaborative workspace. The LDR approves all product components for release onto the digital production system, which places them onto the intelligence portal for the consumer.
In the initial days, the portal is populated with an initial library of related subject matter data (e.g., open source and intelligence reports and data on illegal trafficking in general). As the analysis proceeds, analytic results are posted to the portal.
4.4 Organizational Problem Solving
Intelligence organizations face a wide range of problems that require planning, searching, and explanation to provide solutions. These problems require reactive solution strategies to respond to emergent situations as well as opportunistic (proactive) strategies to identify potential future problems to be solved (e.g., threat assessments, indications, and warnings).
The process of solving these problems collaboratively requires a defined strategy for groups to articulate a problem and then proceed to collectively develop a solution. In the context of intelligence analysis, organizational problem solving focuses on the following kinds of specific problems:
- Planning. Decomposing intelligence needs into data requirements, developing analysis-synthesis procedures to apply to the collected data to draw conclusions, and scheduling the coordinated collection of data to meet those requirements.
- Discovery. Searching and identifying previously unknown patterns (of objects, events, behaviors, or relationships) that reveal new understanding about intelligence targets. (The discovery reasoning approach is inductive in nature, creating new, previously unrevealed hypotheses.)
- Detection. Searching and matching evidence against previously known target hypotheses (templates). (The detection reasoning approach is deductive in nature, testing evidence against known hypotheses.)
- Explanation. Estimating (providing mathematical proof in uncertainty) and arguing (providing logical proof in uncertainty) are required to provide an explanation of evidence. Inferential strategies require the description of multiple hypotheses (explanations), the confidence in each one, and the rationale for justifying a decision. Problem-solving descriptions may include the explanation of explicit knowledge via technical portrayals (e.g., graphical representations) and tacit knowledge via narrative (e.g., dialogue and story).
To perform organizational (or collaborative) problem solving in each of these areas, the individuals in the organization must share an awareness of the reasoning and solution strategies embraced by the organization. In each of these areas, organizational training, formal methodologies, and procedural templates provide a framework to guide the thinking process across a group. These methodologies also form the basis for structuring collaboration tools to guide the way teams organize shared knowledge, structure problems, and proceed from problem to solution.
Collaborative intelligence analysis is a difficult form of collaborative problem solving, where the solution often requires the analyst to overcome the efforts of a subject of study (the intelligence target) to both deny the analyst information and provide deliberately deceptive information.
4.4.1 Critical, Structured Thinking
Critical, or structured, thinking is rooted in methods of careful, systematic reasoning, following the legacy of the philosophers and theologians who diligently articulated their basis for reasoning from premises to conclusions.
Critical thinking is based on the application of a systematic method to guide the collection of evidence, reason from evidence to argument, and apply objective decision-making judgment (Table 4.10). The systematic methodology assures completeness (breadth of consideration), objectivity (freedom from bias in sources, evidence, reasoning, or judgment), consistency (repeatability over a wide range of problems), and rationality (consistency with logic). In addition, critical thinking methodology requires the explicit articulation of the reasoning process to allow review and critique by others. These common methodologies form the basis for academic research, peer review, and reporting—as well as for intelligence analysis and synthesis.
Structured methods that move from problem to solution provide a helpful common framework for groups to communicate knowledge and coordinate their work. The TQM initiatives of the 1980s expanded the practice of teaching entire organizations common strategies for articulating problems and moving toward solutions. A number of general problem-solving strategies have been developed and applied to intelligence applications, for example (moving from general to specific):
- Kepner-Tregoe™. This general problem-solving methodology, introduced in the classic text The Rational Manager [38] and taught to generations of managers in seminars, has been applied to management, engineering, and intelligence-problem domains. This method carefully distinguishes problem analysis (specifying deviations from expectations, hypothesizing causes, and testing for probable causes) and decision analysis (establishing and classifying decision objectives, generating alternative decisions, and comparing consequences).
- Multiattribute utility analysis (MAUA). This structured approach to decision analysis quantifies a utility function, or value of all decision factors, as a weighted sum of contributing factors for each alternative decision. Relative weights of each factor sum to unity, so the overall utility scale (for each decision option) ranges from 0 to 1 (a minimal numerical sketch follows this list).
- Alternative competing hypotheses (ACH). This methodology develops and organizes alternative hypotheses to explain evidence, evaluates the evidence across multiple criteria, and provides rationale for reasoning to the best explanation.
- Lockwood analytic method for prediction (LAMP). This methodology exhaustively structures and scores alternative futures hypotheses for complicated intelligence problems with many factors. The process enumerates, then compares the relative likelihood of courses of action (COAs) for all actors (e.g., military or national leaders) and their possible outcomes. The method provides a structure to consider all COAs while attempting to minimize the exponential growth of hypotheses.
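The following minimal numerical sketch illustrates the MAUA calculation described above: utility is a weighted sum of factor scores, with weights summing to unity. The factors, weights, and candidate decisions are invented for illustration.

```python
# Minimal MAUA sketch: utility = weighted sum of factor scores.
# Factors, weights, and scores are invented for illustration.
weights = {"performance": 0.4, "cost": 0.3, "risk": 0.3}   # weights sum to 1.0
assert abs(sum(weights.values()) - 1.0) < 1e-9

# Each alternative is scored 0..1 on every factor (higher is better, so cost
# and risk scores are already expressed as "goodness" rather than raw cost).
alternatives = {
    "Option A": {"performance": 0.9, "cost": 0.4, "risk": 0.6},
    "Option B": {"performance": 0.6, "cost": 0.8, "risk": 0.7},
}

def utility(scores: dict[str, float]) -> float:
    return sum(weights[f] * scores[f] for f in weights)

for name, scores in sorted(alternatives.items(),
                           key=lambda kv: utility(kv[1]), reverse=True):
    print(f"{name}: utility = {utility(scores):.2f}")
```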
A basic problem-solving process flow (Figure 4.7), which encompasses the essence of each of these approaches, includes five fundamental component stages:
- Problem assessment. The problem must be clearly defined, and criteria for decision making must be established at the beginning. The problem, as well as boundary conditions, constraints, and the format of the desired solution, is articulated.
- Problem decomposition. The problem is broken into components by modeling the “situation” or context of the problem. If the problem is a corporate need to understand and respond to the research and development initiatives of a particular foreign company, for example, a model of that organization’s financial operations, facilities, organizational structure (and research and development staffing), and products is constructed. The decomposition (or analysis) of the problem into the need for different kinds of information necessarily requires the composition (or synthesis) of the model. This models the situation of the problem and provides the basis for gathering more data to refine the problem (refine the need for data) and better understand the context.
- Alternative analysis. In concert with problem decomposition, alternative solutions (hypotheses) are conceived and synthesized. Conjecture and creativity are necessary in this stage; the set of solutions is categorized to describe the range of the solution space. In the example of the problem of understanding a foreign company’s research and development, these solutions must include alternative explanations of what the competitor might be doing and what business responses should be taken to respond if there is a competitive threat. The competitor analyst must explore the wide range of feasible solutions and associated constraints and variables; alternatives may range from no research and development investment to significant but hidden investment in a new, breakthrough product development. Each solution (or explanation, in this case) must be compared to the model, and this process may cause the model to be expanded in scope, refined, and further decomposed into smaller components.
- Decision analysis. In this stage the alternative solutions are applied to the model of the situation to determine the consequences of each solution. In the foreign firm example, consequences are related to both the likelihood of the hypothesis being true and the consequences of actions taken. The decision factors, defined in the first stage, are applied to evaluate the performance, effectiveness, cost, and risk associated with each solution. This stage also reveals the sensitivity of the decision factors to the situation model (and its uncertainties) and may send the analyst back to gather more information about the situation to refine the model [42].
- Solution evaluation. The final stage, judgment, compares the outcome of decision analysis with the decision criteria established at the onset. Here, the uncertainties (about the problem, the model of the situation, and the effects of the alternative solutions) are considered and other subjective (tacit) factors are weighed to arrive at a solution decision.
This approach forms the basis for traditional analytic intelligence methods because it provides structure, rationale, and formality. But most recognize that the solid tacit knowledge of an experienced analyst provides a complementary basis—an unspoken confidence that underlies final decisions—that is recognized but not articulated as explicitly as the quantified decision data.
4.4.2 Systems Thinking
In contrast with the reductionism of a purely analytic approach, a more holistic approach to understanding complex processes acknowledges the inability to fully decompose many complex problems into a finite and complete set of linear processes and relationships. This approach, referred to as holism, seeks to understand high-level patterns of behavior in dynamic or complex adaptive systems that transcend complete decomposition (e.g., weather, social organizations, or large-scale economies and ecologies). Rather than being analytic, systems approaches tend to be synthetic—that is, these approaches construct explanations at the aggregate or large scale and compare them to real-world systems under study.
Complexity refers to the property of real-world systems that prohibits any formalism from representing or completely describing their behavior. In contrast with simple systems, which may be fully described by some formalism (i.e., mathematical equations that fully describe a real-world process to some level of satisfaction for the problem at hand), complex systems lack a fully descriptive formalism that captures all of their properties, especially global behavior.
Systems of subatomic scale, human organizational systems, and large-scale economies, where very large numbers of independent causes interact in large numbers of interactive ways, are characterized by an inability to model global behavior—and a frustrating inability to predict future behavior.
The expert’s judgment is based not on an external and explicit decomposition of the problem, but on an internal matching of high-level patterns of prior experience with the current situation. The experienced detective as well as the experienced analyst applies such high-level comparisons of current behaviors with previous tacit (unarticulated, even unconscious) patterns gained through experience.
It is important to recognize that analytic and systems-thinking approaches, though contrasting, are usually applied in a complementary fashion by individuals and teams alike. The analytic approach provides the structure, record keeping, and method for articulating decision rationale, while the systems approach guides the framing of the problem, provides the synoptic perspective for exploring alternatives, and provides confidence in judgments.
4.4.3 Naturalistic Decision Making
In times of crisis, when time does not permit careful methodologies, humans apply more naturalistic methods that, like the systems-thinking mode, rely entirely on the only basis available—prior experience.
“Uncontrolled, [information] will control you and your staffs … and lengthen your decision-cycle times.” (Insightfully, the Admiral also noted, “You can only manage from your Desktop Computer … you cannot lead from it.”)
While long-term intelligence analysis applies the systematic, critical analytic approaches described earlier, crisis intelligence analysis may be forced to the more naturalistic methods, where tacit experience (via informal on-the-job learning, simulation, or formal learning) and confidence are critical.
4.5 Tradecraft: The Best Practices of Intelligence
The capture and sharing of best practices was developed and matured throughout the 1980s, when the total quality movement institutionalized the processes of benchmarking and recording lessons learned. Two forms of best practices and lessons capture and recording are often cited:
- Explicit process descriptions. The most direct approach is to model and describe the best collection, analytic, and distribution processes, their performance properties, and applications. These may be indexed, linked, and organized for subsequent reuse by a team posed with similar problems and by instructors preparing formal curricula.
- Tacit learning histories. The methods of storytelling, described earlier in this chapter, are also applied to develop a “jointly told” story by the team developing the best practice. Once formulated, such learning histories provide powerful tools for oral, interactive exchanges within the organization; the written form of the exchanges may be linked to the best-practice description to provide context.
While explicit best-practices databases explain the how, learning histories provide the context to explain the why of particular processes.
The CIA maintains a product evaluation staff to evaluate intelligence products, learn from the large range of products produced (estimates, forecasts, technical assessments, threat assessments, and warnings), and maintain a database of best practices for training and distribution to the analytic staff.
4.6 Summary
In this chapter, we have introduced the fundamental cultural qualities, in terms of virtues and disciplines that characterize the knowledge-based intelligence organization. The emphasis has necessarily been on organizational disciplines—learning, collaborating, problem solving—that provide the agility to deliver accurate and timely intelligence products in a changing environment. The virtues and disciplines require support—technology to support collaboration over time and space, to support the capture and retrieval of explicit knowledge, to enable the exchange of tacit knowledge, and to support the cognitive processes in analytic and holistic problem solving.
5
Principles of Intelligence Analysis and Synthesis
At the core of all knowledge creation are the seemingly mysterious reasoning processes that proceed from the known to the assertion of entirely new knowledge about the previously unknown. For the intelligence analyst, this is the process by which evidence [1], that data determined to be relevant to a problem, is used to infer knowledge about a subject of investigation—the intelligence target. The process must deal with evidence that is often inadequate, undersampled in time, ambiguous, and of questionable pedigree.
We refer to this knowledge-creating discipline as intelligence analysis and the practitioner as analyst. But analysis properly includes both the processes of analysis (breaking things down) and synthesis (building things up).
5.1 The Basis of Analysis and Synthesis
The process known as intelligence analysis employs both the functions of analysis and synthesis to produce intelligence products.
In a criminal investigation, this leads from a body of evidence, through feasible explanations, to an assembled case. In intelligence, the process leads from intelligence data, through alternative hypotheses, to an intelligence product. Along this trajectory, the problem solver moves forward and backward iteratively seeking a path that connects the known to the solution (that which was previously unknown).
Intelligence analysis-synthesis is deeply concerned with financial, political, economic, military, and many other evidential relationships that may not be causal but that provide understanding of the structure and behavior of human, organizational, physical, and financial entities.
Descriptions of the analysis-synthesis processes can be traced from its roots in philosophy and problem solving to applications in intelligence assessments.
Philosophers distinguish between propositions as analytic or synthetic based on the direction in which they are developed. Propositions in which the predicate (conclusion) is contained within the subject are called analytic because the predicate can be derived directly by logical reasoning forward from the subject; the subject is said to contain the solution. Synthetic propositions on the other hand have predicates and subjects that are independent. The synthetic proposition affirms a connection between otherwise independent concepts.
The empirical scientific method applies analysis and synthesis to develop and then to test hypotheses:
- Observation. A phenomenon is observed and recorded as data.
- Hypothesis creation. Based upon a thorough study of the data, a working hypothesis is created (by the inductive analysis process or by pure inspiration) to explain the observed phenomena.
- Experiment development. Based on the assumed hypothesis, the expected results (the consequences) of a test of the hypothesis are synthesized (by deduction).
- Hypothesis testing. The experiment is performed to test the hypothesis against the data.
- When the consequences of the test are confirmed, the hypothesis is verified (as a theory or law depending upon the degree of certainty).
The analyst iteratively applies analysis and synthesis to move forward from evidence and backward from hypothesis to explain the available data (evidence). In the process, the analyst identifies more data to be collected, critical missing data, and new hypotheses to be explored. This iterative analysis-synthesis process provides the necessary traceability from evidence to conclusion that will allow the results (and the rationale) to be explained with clarity and depth when completed.
5.2 The Reasoning Processes
Reasoning processes that analyze evidence and synthesize explanations perform inference (i.e., they create, manipulate, evaluate, modify, and assert belief). We can characterize the most fundamental inference processes by their process and products:
- Process. The direction of the inference process refers to the way in which beliefs are asserted. The process may move from specific (or particular) beliefs toward more general beliefs, or from general beliefs to assert more specific beliefs.
- Products. The certainty associated with an inference distinguishes two categories of results. The asserted beliefs that result from inference may be infallible (e.g., an analytic conclusion derived from infallible beliefs by infallible logic is certain) or fallible judgments (e.g., a synthesized judgment is asserted with a measure of uncertainty: “probably true,” “true with 0.95 probability,” or “more likely true than false”).
5.2.1 Deductive Reasoning
Deduction is the method of inference by which a conclusion is inferred by applying the rules of a logical system to manipulate statements of belief to form new logically consistent statements of belief. This form of inference is infallible, in that the conclusion (belief) must be as certain as the premise (belief). It is belief preserving in that conclusions reveal no more than that expressed in the original premises. Deduction can be expressed in a variety of syllogisms, including the more common forms of propositional logic.
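A minimal sketch of deductive inference is shown below: repeated application of simple if-then rules (forward chaining) to a set of premises. The rule and fact names are invented; the point is only that the derived conclusions are as certain as, and contain no more than, the premises.

```python
# Sketch of deductive inference: apply if-then rules to premises until nothing
# new can be derived. Fact and rule names are invented for illustration.
rules = {("communications_intercepted", "network_active")}   # "if p then q"
facts = {"communications_intercepted"}

def deduce(facts: set[str], rules: set[tuple[str, str]]) -> set[str]:
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for p, q in rules:
            if p in derived and q not in derived:
                derived.add(q)        # conclusion is as certain as its premises
                changed = True
    return derived

print(deduce(facts, rules))  # {'communications_intercepted', 'network_active'}
```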
5.2.2 Inductive Reasoning
Induction is the method of inference by which a more general or more abstract belief is developed by observing a limited set of observations or instances.
Induction moves from specific beliefs about instances to general beliefs about larger and future populations of instances. It is a fallible means of inference.
The form of induction most commonly applied to extend belief from a sample of instances to a larger population is inductive generalization.
By this method, analysts extend the observations about a limited number of targets (e.g., observations of the money laundering tactics of several narcotics rings within a drug cartel) to a larger target population (e.g., the entire drug cartel).
Inductive prediction extends belief from a population to a specific future sample.
By this method, an analyst may use several observations of behavior (e.g., the repeated surveillance behavior of a foreign intelligence unit) to create a general detection template to be used to detect future surveillance activities by that or other such units. The induction presumes future behavior will follow past patterns.
In addition to these forms, induction can provide a means of analogical reasoning (induction on the basis of analogy or similarity) and inference to relate cause and effect. The basic scientific method applies the principles of induction to develop hypotheses and theories that can subsequently be tested by experimentation over a larger population or over future periods of time. The subject of induction is central to the challenge of developing automated systems that generalize and learn by inducing patterns and processes (rules).
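A minimal sketch of inductive generalization follows: a proportion observed in a small, invented sample of targets is extended to the larger population. As the text stresses, the resulting belief is fallible; it is a conjecture about unobserved cases, not a certainty.

```python
# Sketch of inductive generalization: extend a proportion observed in a sample
# to the larger population. Observations are invented; the inference is fallible.
sample = [
    {"ring": "A", "uses_front_company": True},
    {"ring": "B", "uses_front_company": True},
    {"ring": "C", "uses_front_company": True},
    {"ring": "D", "uses_front_company": False},
]

observed_rate = sum(r["uses_front_company"] for r in sample) / len(sample)
print(f"Roughly {observed_rate:.0%} of rings in the cartel are expected "
      "to launder money through front companies (inductive, fallible).")
```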
Koestler uses the term bisociation to describe the process of viewing multiple explanations (or multiple associations) of the same data simultaneously. In the example in the figure, the data can be projected onto a common plane of discernment in which the data represents a simple curved line; projected onto an orthogonal plane, the data can explain a sinusoid. Though undersampled, as much intelligence data is, the sinusoid represents a new and novel explanation that may remain hidden if the analyst does not explore more than the common, immediate, or simple interpretation.
In a similar sense, the inductive discovery by an intelligence analyst (aha!) may take on many different forms, following the simple geometric metaphor. For example:
- A subtle and unique correlation between the timing of communications (by traffic analysis) and money transfers of a trading firm may lead to the discovery of an organized crime operation.
- A single anomalous measurement may reveal a pattern of denial and deception to cover the true activities at a manufacturing facility in which many points of evidence are, in fact, deceptive data “fed” by the deceiver. Only a single piece of anomalous evidence (D5 in the figure) is the clue that reveals the existence of the true operations (a new plane in the figure). The discovery of this new plane will cause the analyst to search for additional supporting evidence to support the deception hypothesis.
Each frame of discernment (or plane in Koestler’s metaphor) is a framework for creating a single or a family of multiple hypotheses to explain the evidence. The creative analyst is able to entertain multiple frames of discernment, alternatively analyzing possible “fits” and constructing new explanations, exploring the many alternative explanations. This is Koestler’s constructive-destructive process of discovery.
Collaborative intelligence analysis (like collaborative scientific discovery) may produce a healthy environment for creative induction or an unhealthy competitive environment that stifles induction and objectivity. The goal of collaborative analysis is to allow alternative hypotheses to be conceived and objectively evaluated against the available evidence and to guide the tasking for evidence to confirm or disconfirm the alternatives.
5.2.3 Abductive Reasoning
Abduction is the informal or pragmatic mode of reasoning that describes how we “reason to the best explanation” in everyday life. Abduction is the practical description of the interactive use of analysis and synthesis to arrive at a solution or explanation by creating and evaluating multiple hypotheses.
Unlike infallible deduction, abduction is fallible because it is subject to errors (there may be other hypotheses not considered or another hypothesis, however unlikely, may be correct). But unlike deduction, it has the ability to extend belief beyond the original premises. Peirce contended that this is the logic of discovery and is a formal model of the process that scientists apply all the time.
Consider a simple intelligence example that implements the basic abductive syllogism. Data has been collected on a foreign trading company, TraderCo, which indicates its reported financial performance is not consistent with (less than) its level of operations. In addition, a number of its executives have subtle ties with organized crime figures.
The operations of the company can be explained by at least three hypotheses:
Hypothesis (H1)—TraderCo is a legitimate but poorly run business; its board is unaware of a few executives with unhealthy business contacts.
Hypothesis (H2)—TraderCo is a legitimate business with a naïve board that is unaware that several executives who gamble are using the business to pay off gambling debts to organized crime.
Hypothesis (H3)—TraderCo is an organized crime front operation that is trading in stolen goods and laundering money through the business, which reports a loss.
Hypothesis H3 best explains the evidence.
∴ Accept Hypothesis H3 as the best explanation.
Of course, the critical stage of abduction unexplained in this set of hypotheses is the judgment that H3 is the best explanation. The process requires criteria for ranking hypotheses, a method for judging which is best, and a method to assure that the set of candidate hypotheses covers all possible (or feasible) explanations.
5.2.3.1 Creating and Testing Hypotheses
Abduction introduces competition among multiple hypotheses, each being an attempt to explain the evidence available. These alternative hypotheses can be compared, or competed, on the basis of how well they explain (or fit) the evidence. Furthermore, the created alternative hypotheses provide a means of identifying three categories of evidence important to explanation:
- Positive evidence. This is evidence revealing the presence of an object or occurrence of an event in a hypothesis.
- Missing evidence. Some hypotheses may fit the available evidence, but the hypothesis “predicts” that additional evidence that should exist if the hypothesis were true is “missing.” Subsequent searches and testing for this evidence may confirm or disconfirm the hypothesis.
- Negative evidence. Evidence of the nonoccurrence of an event (or the nonexistence of an object) may confirm a hypothesis that predicts that absence.
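The sketch below illustrates how competing hypotheses might be scored against the evidence categories above, using the TraderCo example. The hypotheses echo the earlier syllogism; the scoring rule (credit for explained evidence, a penalty for predicted-but-missing evidence) is an invented simplification, and negative evidence would add a further penalty term.

```python
# Sketch of comparing competing hypotheses against positive and missing
# evidence. Hypotheses echo the TraderCo example; the scoring rule is invented.
evidence = {"low_reported_income", "ties_to_crime_figures"}

hypotheses = {
    "H1 poorly run business":  {"explains": {"low_reported_income"},
                                "predicts": {"poor_management_record"}},
    "H2 executives pay debts": {"explains": {"low_reported_income",
                                             "ties_to_crime_figures"},
                                "predicts": {"gambling_activity"}},
    "H3 crime front":          {"explains": {"low_reported_income",
                                             "ties_to_crime_figures"},
                                "predicts": set()},
}

def score(h: dict) -> int:
    explained = len(h["explains"] & evidence)     # positive evidence covered
    missing = len(h["predicts"] - evidence)       # predicted but not yet observed
    return explained - missing

for name, h in sorted(hypotheses.items(), key=lambda kv: score(kv[1]), reverse=True):
    print(f"{name}: score = {score(h)}")
```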
5.2.3.2 Hypothesis Selection
Abduction also poses the issue of defining which hypothesis provides the best explanation of the evidence. The criteria for comparing hypotheses, at the most fundamental level, can be based on two principal approaches established by philosophers for evaluating truth propositions about objective reality [18]. The correspondence theory of truth holds that to say a proposition p is true is to maintain that “p corresponds to the facts.”
For the intelligence analyst this would equate to “hypothesis h corresponds to the evidence”—it explains all of the pieces of evidence, with no expected evidence missing, all without having to leave out any contradictory evidence. The coherence theory of truth says that a proposition’s truth consists of its fitting into a coherent system of propositions that create the hypothesis. Both concepts contribute to practical criteria for evaluating competing hypotheses.
5.3 The Integrated Reasoning Process
The analysis-synthesis process combines each of the fundamental modes of reasoning to accumulate, explore, decompose to fundamental elements, and then fit together evidence. The process also creates hypothesized explanations of the evidence and uses these hypotheses to search for more confirming or refuting elements of evidence to affirm or prune the hypotheses, respectively.
This process of proceeding from an evidentiary pool to detections, explanations, or discovery has been called evidence marshaling because the process seeks to marshal (assemble and organize) the evidence into a representation (a model) that:
- Detects the presence of evidence that matches previously known premises (or patterns of data);
- Explains underlying processes that gave rise to the evidence;
- Discovers new patterns in the evidence—patterns of circumstances or behaviors not known before (learning).
The figure illustrates four basic paths that can proceed from the pool of evidence, namely our three fundamental inference modes and a fourth feedback path:
- Deduction. The path of deduction tests the evidence in the pool against previously known patterns (or templates) that represent hypotheses of activities that we seek to detect. When the evidence fits the hypothesis template, we declare a match. When the evidence fits multiple hypotheses simultaneously, the likelihood of each hypothesis (determined by the strength of evidence for each) is assessed and reported. (This likelihood may be computed probabilistically using Bayesian methods, where evidence uncertainty is quantified as a probability and prior probabilities of the hypotheses are known; a minimal Bayesian sketch follows this list.)
- Retroduction. This feedback path, recognized and named by C.S. Peirce as yet another process of reasoning, occurs when the analyst conjectures (synthesizes) a new conceptual hypothesis (beyond the current frame of discernment) that causes a return to the evidence to seek evidence to match (or test) this new hypothesis. The insight Peirce provided is that in the testing of hypotheses, we are often inspired to realize new, different hypotheses that might also be tested. In the early implementation of reasoning systems, the forward path of deduction was often referred to as forward chaining by attempting to automatically fit data to previously stored hypothesis templates; the path of retroduction was referred to as backward chaining, where the system searched for data to match hypotheses queried by an inspired human operator.
- Abduction. The abduction process, like induction, creates explanatory hypotheses inspired by the pool evidence and then, like deduction, attempts to fit items of evidence to each hypothesis to seek the best explanation. In this process, the candidate hypotheses are refined and new hypotheses are conjectured. The process leads to comparison and ranking of the hypotheses, and ultimately the best is chosen as the explanation. As a part of the abductive process, the analyst returns to the pool of evidence to seek support for these candidate explanations; this return path is called retroduction.
- Induction. The path of induction considers the entire pool of evidence to seek general statements (hypotheses) about the evidence. Not seeking point matches to the small sets of evidence, the inductive path conjectures new and generalized explanation of clusters of similar evidence; these generalizations may be tested across the evidence to determine the breadth of applicability before being declared as a new discovery.
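The Bayesian assessment mentioned under the deduction path can be illustrated with a minimal sketch. The hypothesis labels, prior probabilities, and likelihood values below are hypothetical assumptions made only to show the mechanics of updating belief in competing hypothesis templates as an item of evidence arrives; they are not taken from the text.

```python
# Minimal sketch, assuming illustrative hypotheses, priors, and likelihoods.
# Bayes' rule: P(H|E) is proportional to P(E|H) * P(H), normalized over all H.

def posterior(priors, likelihoods):
    """Return normalized posterior probabilities for each hypothesis."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

# Prior belief in two hypothesis templates (hypothetical values).
priors = {"H1: fire control radar at A": 0.3, "H2: search radar at A": 0.7}
# Probability of observing the intercepted emission under each hypothesis.
likelihoods = {"H1: fire control radar at A": 0.8, "H2: search radar at A": 0.1}

for hypothesis, p in posterior(priors, likelihoods).items():
    print(f"{hypothesis}: {p:.2f}")
# H1 now carries most of the probability mass and would be reported as the match.
```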
5.4 Analysis and Synthesis As a Modeling Process
The fundamental reasoning processes are applied to a variety of practical analytic activities performed by the analyst.
- Explanation and description. Find and link related data to explain entities and events in the real world.
- Detection. Detect and identify the presence of entities and events based on known signatures. Detect potentially important deviations, including anomaly detection of changes relative to “normal” or “expected” state or change detection of changes or trends over time.
- Discovery. Detect the presence of previously unknown patterns in data (signatures) that relate to entities and events.
- Estimation. Estimate the current qualitative or quantitative state of an entity or event.
- Prediction. Anticipate future events based on detection of known indicators; extrapolate current state forward, project the effects of linear factors forward, or simulate the effects of complex factors to synthesize possible future scenarios to reveal anticipated and unanticipated (emergent) futures.
In each of these cases, we can view the analysis-synthesis process as an evidence-decomposing and model-building process.
The objective of this process is to sort through and organize data (analyze) and then to assemble (synthesize), or marshal related evidence to create a hypothesis—an instantiated model that represents one feasible representation of the intelligence subject (target). The model is used to marshal evidence, evaluate logical argumentation, and provide a tool for explanation of how the available evidence best fits the analyst’s conclusion. The model also serves to help the analyst understand what evidence is missing, what strong evidence supports the model, and where negative evidence might be expected. The terminology we use here can be clarified by the following distinctions:
- A real intelligence target is abstracted and represented by models.
- A model has descriptive and stated attributes or properties.
- A particular instance of a model, populated with evidence-derived and conjectured properties, is a hypothesis.
A target may be described by multiple models, each with multiple instances (hypotheses). For example, if our target is the financial condition of a designated company, we might represent the financial condition with a single financial model in the form of a spreadsheet that enumerates many financial attributes. As data is collected, the model is populated with data elements, some reported publicly and others estimated. We might maintain three instances of the model (legitimate company, faltering legitimate company, and illicit front organization), each being a competing explanation (or hypothesis) of the incomplete evidence. These hypotheses help guide the analyst to identify the data required to refine, affirm, or discard existing hypotheses or to create new hypotheses.
Explicit model representations provide a tool for collaborative construction, marshaling of evidence, decomposition, and critical examination. Mental and explicit modeling are complementary tools of the analyst; judgment must be applied to balance the use of both.
Former U.S. National Intelligence Officer for Warning (1994–1996) Mary McCarthy has emphasized the importance of explicit modeling to analysis:
Rigorous analysis helps overcome mindset, keeps analysts who are immersed in a mountain of new information from raising the bar on what they would consider an alarming threat situation, and allows their minds to expand other possibilities. Keeping chronologies, maintaining databases and arraying data are not fun or glamorous. These techniques are the heavy lifting of analysis, but this is what analysts are supposed to do [19].
The model is an abstract representation that serves two functions:
- Model as hypothesis. Based on partial data or conjecture alone, a model may be instantiated as a feasible proposition to be assessed, a hypothesis. In a homicide investigation, each conjecture for “who did it” is a hypothesis, and the associated model instance is a feasible explanation for “how they did it.” The model provides a framework around which data is assembled, a mechanism for examining feasibility, and a basis for exploring data to confirm or refute the hypothesis.
- Model as explanation. As evidence (relevant data that fits into the model) is assembled on the general model framework to form a hypothesis, different views of the model provide more robust explanations of that hypothesis. Narrative (story), timeline, organization relationships, resources, and other views may be derived from a common model.
The process of implementing data decomposition (analysis) and model construction-examination (synthesis) can be depicted in three process phases or spaces of operation (Figure 5.6):
- Data space. In this space, data (relevant and irrelevant, certain and ambiguous) are indexed and accumulated. Indexing by time (of collection and arrival), source, content topic, and other factors is performed to allow subsequent search and access across many dimensions.
- Argumentation space. The data is reviewed; selected elements of potentially relevant data (evidence) are correlated, grouped, and assembled into feasible categories of explanations, forming a set (structure) of high-level hypotheses to explain the observed data. This process applies exhaustive searches of the data space, accepting some data as relevant and discarding the rest. In this phase, patterns in the data are discovered even when not all of the data in a pattern is present, and these partial patterns lead to the creation of hypotheses; examination of the data may also lead to creation of hypotheses by conjecture alone, even though no data yet supports them. The hypotheses are examined to determine what data would be required to reinforce or reject each; hypotheses are ranked in terms of likelihood and needed data (to reinforce or refute). The models are tested and various excursions are examined. This space is the court in which the case is made for each hypothesis, and the hypotheses are judged for completeness, sufficiency, and feasibility. This examination can lead to requests for additional data, refinements of the current hypotheses, and creation of new hypotheses.
- Explanation space. Different “views” of the hypothesis model provide explanations that articulate the hypothesis and relate the supporting evidence. The intelligence report can include a single model and explanation that best fits the data (when data is adequate to assert the single answer) or alternative competing models, as well as the supporting evidence for each and an assessment of the implications of each. Figure 5.6 illustrates several of the views often used: timelines of events, organization-relationship diagrams, annotated maps and imagery, and narrative story lines.
For a single target under investigation, we may create and consider (or entertain) several candidate hypotheses, each with a complete set of model views. If, for example, we are trying to determine the true operations of the foreign company introduced earlier, TradeCo, we may hold several hypotheses:
- H1—The company is a legal clothing distributor, as advertised.
- H2—The company is a legal clothing distributor, but company executives are diverting business funds for personal interests.
- H3—The company is a front operation to cover organized crime, where hypothesis 3 has two sub-hypotheses:
- H31—The company is a front for drug trafficking.
- H32—The company is a front for terrorism money laundering.
In this case, H1, H2, H31, and H32 are the four root hypotheses, and the analyst identifies the need to create an organizational model, an operations flow-process model, and a financial model for each of the four hypotheses—creating 4 × 3 = 12 models.
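As a purely illustrative sketch of this bookkeeping, the hypothesis and model labels below are taken from the TradeCo example above; the data structure itself is an assumption made for illustration, not a method prescribed by the text.

```python
# Minimal sketch: enumerating the 4 x 3 = 12 model instances for TradeCo.
from itertools import product

hypotheses = {
    "H1":  "legal clothing distributor, as advertised",
    "H2":  "legal distributor; executives diverting funds",
    "H31": "front for drug trafficking",
    "H32": "front for terrorism money laundering",
}
model_types = ["organizational", "operations flow-process", "financial"]

# Each (hypothesis, model type) pair is one model instance to be populated
# with evidence-derived and conjectured properties.
instances = list(product(hypotheses, model_types))
assert len(instances) == 12
for h, m in instances:
    print(f"{h} ({hypotheses[h]}): {m} model")
```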
5.5 Intelligence Targets in Three Domains
We have noted that intelligence targets may be objects, events, or dynamic processes—or combinations of these. The development of information operations has brought a greater emphasis on intelligence targets that exist not only in the physical domain, but in the realms of information (e.g., networked computers and information processes) and human decision making.
Information operations (IO) are those actions taken to affect an adversary’s information and information systems, while defending one’s own information and information systems. The U.S. Joint Vision 2020 describes the Joint Chiefs of Staff view of the ultimate purpose of IO as “to facilitate and protect U.S. decision-making processes, and in a conflict, degrade those of an adversary”.
The JV2020 builds on the earlier JV2010 [26] and retains its fundamental operational concepts, with two significant refinements that emphasize IO. The first is the expansion of the vision to encompass the full range of operations (nontraditional, asymmetric, unconventional ops), while retaining warfighting as the primary focus. The second refinement moves information superiority concepts beyond technology solutions that deliver information to the concept of superiority in decision making. This means that IO will deliver increased information at all levels and increased choices for commanders. Conversely, it will also reduce information to adversary commanders and diminish their decision options. Core to these concepts and challenges is the notion that IO uniquely requires the coordination of intelligence, targeting, and security in three fundamental realms, or domains of human activities.
These are likewise the three fundamental domains of intelligence targets, and each must be modeled:
- The physical domain encompasses the material world of mass and energy. Military facilities, vehicles, aircraft, and personnel make up the principal target objects of this domain. The orders of battle that measure military strength, for example, are determined by enumerating objects of the physical world.
- The abstract symbolic domain is the realm of information. Words, numbers, and graphics all encode and represent the physical world, storing and transmitting it in electronic formats, such as radio and TV signals, the Internet, and newsprint. This is the domain that is expanding at unprecedented rates, as global ideas, communications, and descriptions of the world are being represented in this domain. The domain includes the cyberspace that has become the principal means by which humans shape their perception of the world. It is the interface between the physical and cognitive domains.
- The cognitive domain is the realm of human thought. This is the ultimate locus of all information flows. The individual and collective thoughts of government leaders and populations at large form this realm. Perceptions, conceptions, mental models, and decisions are formed in this cognitive realm. This is the ultimate target of our adversaries: the realm where uncertainties, fears, panic, and terror can coerce and influence our behavior.
Current IO concepts have appropriately emphasized the targeting of the second domain—especially electronic information systems and their information content. The expansion of networked information systems and the reliance on those systems has focused attention on network-centric forms of warfare. Ultimately, though, IO must move toward a focus on the full integration of the cognitive realm with the physical and symbolic realms to target the human mind.
Intelligence must understand and model the complete system or complex of the targets of IO: the interrelated systems of physical behavior, information perceived and exchanged, and the perception and mental states of decision makers.
Of importance to the intelligence analyst is the clear recognition that most intelligence targets exist in all three domains, and models must consider all three aspects.
The intelligence model of such an organization must include linked models of all three domains—to provide an understanding of how the organization perceives, decides, and communicates through a networked organization, as well as where the people and other physical objects are moving in the physical world. The concepts of detection, identification, and dynamic tracking of intelligence targets apply to objects, events, and processes in all three domains.
5.6 Summary
The analysis-synthesis process proceeds from intelligence analysis to operations analysis and then to policy analysis.
The knowledge-based intelligence enterprise requires the capture and explicit representation of such models to permit collaboration among these three disciplines to achieve the greatest effectiveness and sharing of intellectual capital.
6
The Practice of Intelligence Analysis and Synthesis
The chapter moves from high-level functional flow models toward the processes implemented by analysts.
A practical description of the process by one author summarizes the perspective of the intelligence user:
A typical intelligence production consists of all or part of three main elements: descriptions of the situation or event with an eye to identifying its essential characteristics; explanation of the causes of a development as well as its significance and implications; and the prediction of future developments. Each element contains one or both of these components: data, provided by knowledge and incoming information, and assessment, or judgment, which attempts to fill the gaps in the data.
Consumers expect description, explanation, and prediction; as we saw in the last chapter, the process that delivers such intelligence is based on evidence (data), assessment (analysis-synthesis), and judgment (decision).
6.1 Intelligence Consumer Expectations
The U.S. General Accounting Office (GAO) noted the need for greater clarity in the intelligence delivered in U.S. national intelligence estimates (NIEs) in a 1996 report, enumerating five specific standards for analysis from the perspective of policymakers.
Based on a synthesis of the published views of current and former senior intelligence officials, the reports of three independent commissions, and a CIA publication that addressed the issue of national intelligence estimating, an objective NIE should meet the following standards [2]:
- [G1]: quantify the certainty level of its key judgments by using percentages or bettors’ odds, where feasible, and avoid overstating the certainty of judgments (note: bettors’ odds state the chance as, for example, “one out of three”);
- [G2]: identify explicitly its assumptions and judgments;
- [G3]: develop and explore alternative futures: less likely (but not impossible) scenarios that would dramatically change the estimate if they occurred;
- [G4]: allow dissenting views on predictions or interpretations;
- [G5]: note explicitly what the IC does not know when the information gaps could have significant consequences for the issues under consideration.
The Commission would urge that the [IC] adopt as a standard of its methodology that in addition to considering what they know, analysts consider as well what they know they don’t know about a program and set about filling gaps in their knowledge by:
- [R1] taking into account not only the output measures of a program, but the input measures of technology, expertise and personnel from both internal sources and as a result of foreign assistance. The type and rate of foreign assistance can be a key indicator of both the pace and objective of a program into which the IC otherwise has little insight.
- [R2] comparing what takes place in one country with what is taking place in others, particularly among the emerging ballistic missile powers. While each may be pursuing a somewhat different development program, all of them are pursuing programs fundamentally different from those pursued by the US, Russia and even China. A more systematic use of comparative methodologies might help to fill the information gaps.
- [R3] employing the technique of alternative hypotheses. This technique can help make sense of known events and serve as a way to identify and organize indicators relative to a program’s motivation, purpose, pace and direction. By hypothesizing alternative scenarios a more adequate set of indicators and collection priorities can be established. As the indicators begin to align with the known facts, the importance of the information gaps is reduced and the likely outcomes projected with greater confidence. The result is the possibility for earlier warning than if analysts wait for proof of a capability in the form of hard evidence of a test or a deployment. Hypothesis testing can provide a guide to what characteristics to pursue, and a cue to collection sensors as well.
- [R4] explicitly tasking collection assets to gather information that would disprove a hypothesis or fill a particular gap in a list of indicators. This can prove a wasteful use of scarce assets if not done in a rigorous fashion. But moving from the highly ambiguous absence of evidence to the collection of specific evidence of absence can be as important as finding the actual evidence [3].
Intelligence consumers want more than estimates or judgments; they expect concise explanations of the evidence and reasoning processes behind judgments, with substantiation that multiple perspectives, hypotheses, and consequences have been objectively considered.
They expect a depth of analysis-synthesis that explicitly distinguishes assumptions, evidence, alternatives, and consequences—with a means of quantifying each contribution to the outcomes (judgments).
6.2 Analysis-Synthesis in the Intelligence Workflow
Analysis-synthesis is one process within the intelligence cycle… It represents a process that is practically implemented as a continuum rather than a cycle, with all phases being implemented concurrently and addressing a multitude of different intelligence problems or targets.
The stimulus-hypothesis-option-response (SHOR) model, described by Joseph Wohl in 1986, emphasizes the consideration of multiple perception hypotheses to explain sensed data and assess options for response.
The observe-orient-decide-act (OODA) loop, developed by Col. John Boyd, is a high-level abstraction of the military command and control loop that considers the human decision-making role and its dependence on observation and orientation—the process of placing observations in a perceptual framework for decision making.
The tasking, processing, exploitation, dissemination (TPED) model used by U.S. technical collectors and processors [e.g., the U.S. National Reconnaissance Office (NRO), the National Imagery and Mapping Agency (NIMA), and the National Security Agency (NSA)] distinguishes between the processing elements of the national technical-means intelligence channels (SIGINT, IMINT, and MASINT) and the all-source analytic exploitation roles of the CIA and DIA.
The DoD Joint Directors of Laboratories (JDL) data fusion model is a more detailed technical model that considers the use of multiple sources to produce a common operating picture of individual objects, situations (the aggregate of objects and their behaviors), and the consequences or impact of those situations. The model includes a hierarchy of data correlation and combination processes (level 0: signal refinement; level 1: object refinement; level 2: situation refinement; level 3: impact refinement) and a corresponding feedback control process (level 4: process refinement) [10]. The JDL model is a functional representation that accommodates automated processes and human processes and provides detail within both the processing and analysis steps. The model is well suited to organize the structure of automated processing stages for technical sensors (e.g., imagery, signals, and radar); a minimal sketch of the levels as pipeline stages follows the list below.
- Level 0: signal refinement automated processing correlates and combines raw signals (e.g., imagery pixels or radar signals intercepted from multiple locations) to detect objects and derive their location, dynamics, or identity.
- Level 1: object refinement processing detects individual objects and correlates and combines these objects across multiple sources to further refine location, dynamics, or identity information.
- Level 2: situation refinement analysis correlates and combines the detected objects across all sources within the background context to produce estimates of the situation—explaining the aggregate of static objects and their behaviors in context to derive an explanation of activities with estimated status, plans, and intents.
- Level 3: impact refinement analysis estimates the consequences of alternative courses of action.
- The level 4 process refinement flows are not shown in the figure, though all forward processing levels can provide inputs to refine the process to: focus collection or processing on high-value targets, refine processing parameters to filter unwanted content, adjust database indexing of intermediate data, or improve overall efficiency of the production process. The level 4 process effectively performs the KM business intelligence functions introduced in Section 3.7.
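The sketch below is an assumption about how the JDL levels might be strung together as successive processing stages; the stage functions are placeholders standing in for real signal, object, situation, and impact refinement logic, and only the shape of the pipeline reflects the model described above.

```python
# Minimal sketch, assuming placeholder stage functions for each JDL level.

def level0_signal_refinement(raw_signals):
    """Correlate raw pixels/signals into candidate detections (placeholder)."""
    return [{"object": s, "location": None} for s in raw_signals]

def level1_object_refinement(detections):
    """Combine detections across sources to refine identity and dynamics."""
    return detections  # placeholder: real logic would associate and track

def level2_situation_refinement(objects, context=None):
    """Aggregate objects and behaviors into an explanation of activities."""
    return {"objects": objects, "assessment": "situation estimate (placeholder)"}

def level3_impact_refinement(situation):
    """Estimate consequences of alternative courses of action."""
    return {"situation": situation, "impact": "impact estimate (placeholder)"}

def fusion_pipeline(raw_signals):
    # Level 4 (process refinement) would feed back from each stage to retask
    # collection or adjust processing parameters; it is omitted here.
    return level3_impact_refinement(
        level2_situation_refinement(
            level1_object_refinement(
                level0_signal_refinement(raw_signals))))

print(fusion_pipeline(["signal-A", "signal-B"]))
```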
The analysis stage employs semiautomated detection and discovery tools to access the data in large databases produced by the processing stage. In general, the processing stage can be viewed as a factory of processors, while the analysis stage is a lower volume shop staffed by craftsmen—the analytic team.
6.3 Applying Automation
Automated processing has been widely applied to level 1 object detection (e.g., statistical pattern recognition) and to a lesser degree to level 2 situation recognition problems (e.g., symbolic artificial intelligence systems) for intelligence applications.
Viewing these dimensions as the number of nodes (causes) and number of interactions (influencing the scale of effects) in a dynamic system, the problem space depicts the complexity of the situation being analyzed:
- Causal diversity. The first dimension relates to the number of causal factors, or actors, that influence the situation behavior.
- Scale of effects. The second dimension relates to the degree of interaction between actors, or the degree to which causal factors influence the behavior of the situation.
As both dimensions increase, the potential for nonlinear behavior increases, making it more difficult to model the situation being analyzed.
These problems include the detection of straightforward objects in images, content patterns in text, and emitted signal matching. More difficult problems still in this category include dynamic situations with moderately higher numbers of actors and scales of effects that require qualitative (propositional logic) or quantitative (statistical modeling) reasoning processes.
The most difficult category 3 problems, intractable to fully automated analysis, are those complex situations characterized by high numbers of actors with large-scale interactions that give rise to emergent behaviors.
6.4 The Role of the Human Analyst
The analyst applies tacit knowledge to search through explicit information to create tacit knowledge in the form of mental models and explicit intelligence reports for consumers.
The analysis process requires the analyst to integrate the cognitive reasoning and more emotional sensemaking processes with large bodies of explicit information to produce explicit intelligence products for consumers. To effectively train and equip analysts to perform this process, we must recognize and account for these cognitive and emotional components of comprehension. The complete process includes the automated workflow, which processes explicit information, and the analyst’s internal mental workflow, which integrates the cognitive and emotional modes.
Complementary logical and emotional frameworks are based on the current mental model of beliefs and feelings; new information is compared to these frameworks, and the differences have the potential to affirm the model (agreement), to refine it through learning (acceptance and model adjustment), or to lead to rejection of the new information. Judgment integrates feelings about consequences and values (based on experience) with reasoned alternative consequences and courses of action that construct the meaning of the incoming stimulus. Decision making makes an intellectual-emotional commitment to the impact of the new information on the mental model (acceptance, affirmation, refinement, or rejection).
6.5 Addressing Cognitive Shortcomings
The intelligence analyst is not only confronted with ambiguous information about complex subjects, but is often placed under time pressures and expectations to deliver accurate, complete, and predictive intelligence. Consumer expectations often approach infallibility and omniscience.
In this situation, the analyst must be keenly aware of the vulnerabilities of human cognitive shortcomings and take measures to mitigate the consequences of these deficiencies. The natural limitations in cognition (perception, attention span, short- and long-term memory recall, and reasoning capacity) constrain the objectivity of our reasoning processes, producing errors in our analysis.
In “Combatting Mind-Set,” respected analyst Jack Davis has noted that analysts must recognize the subtle influence of mindset, the cumulative mental model that distills analysts’ beliefs about a complex subject and “find[s] strategies that simultaneously harness its impressive energy and limit[s] the potential damage”.
Davis recommends two complementary strategies:
- Enhancing mind-set. Creating an explicit representation of the mind-set—externalizing the mental model—allows broader collaboration, evaluation from multiple perspectives, and discovery of subtle biases.
- Insuring against mind-set. Maintaining multiple explicit explanations, projections, and opportunity analyses provides insurance against single-point judgments and prepares the analyst to switch to alternatives when discontinuities occur.
Davis has also cautioned analysts to beware the paradox of expertise phenomenon that can distract attention from the purpose of an analysis. This error occurs when discordant evidence is present and subject experts tend to be distracted and focus on situation analysis (solving the discordance to understand the subject situation) rather than addressing the impact of the discrepancy on the analysis. In such cases, the analyst must focus on providing value added by addressing what action alternatives exist and their consequences in cost-benefit terms.
Heuer emphasized the importance of supporting tools and techniques to overcome natural analytic limitations [20]: “Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.”
6.6 Marshaling Evidence and Structuring Argumentation
Instinctive analysis focuses on a single or limited range of alternatives, moves on a path to satisfy minimum needs (satisficing, or finding an acceptable explanation), and is performed implicitly using tacit mental models. Structured analysis follows the principles of critical thinking introduced in Chapter 4, organizing the problem to consider all reasonable alternatives, systematically and explicitly representing the alternative solutions to comprehensively analyze all factors.
6.6.1 Structuring Hypotheses
6.6.2 Marshaling Evidence and Structuring Arguments
There exist a number of classical approaches to representing hypotheses, marshaling evidence to them, and arguing for their validity. Argumentation structures propositions to move from premises to conclusions. Three perspectives or disciplines of thought have developed the most fundamental approaches to this process.
Each discipline has contributed methods to represent knowledge and to provide a structure for reasoning to infer from data to relevant evidence, through intermediate hypotheses to conclusion. The term knowledge representation refers to the structure used to represent data and show its relevance as evidence, the representation of rules of inference, and the asserted conclusions.
6.6.3 Structured Inferential Argumentation
Philosophers, rhetoricians, and lawyers have long sought accurate means of structuring and then communicating, in natural language, the lines of reasoning that lead from complicated sets of evidence to conclusions. Lawyers and intelligence analysts alike seek to provide a clear and compelling case for their conclusions, reasoned from a mass of evidence about a complex subject.
We first consider the classical forms of argumentation described as informal logic, whereby the argument connects premises to conclusions. The common forms include:
- Multiple premises, when taken together, lead to but one conclusion. For example: The radar at location A emits at a high pulse repetition frequency (PRF); when it emits at high PRF, it emits on frequency (F) → the radar at A is a fire control radar.
- Multiple premises independently lead to the same conclusion. For example: The radar at A is a fire control radar. Also, location A stores canisters for missiles. → A surface-to-air missile (SAM) battery must be at location A.
- A single premise leads to but one conclusion. For example: A SAM battery is located at A → the battery at A must be linked to a command and control (C2) center.
- A single premise can support more than one conclusion. For example: The SAM battery could be controlled by the C2 center at golf, or the SAM battery could be controlled by the C2 center at hotel.
These four basic forms may be combined to create complex sets of argumentation, as in the simple sequential combination and simplification of these examples:
- The radar at A emits at a high PRF; when it emits at high PRF, it emits on frequency F, so it must be a fire control radar. Also, location A stores canisters for missiles, so there must be a SAM battery there. The battery at A must be linked to a C2 center. It could be controlled by the C2 centers at golf or at hotel.
The structure of this argument can be depicted as a chain of reasoning or argumentation (Figure 6.7) using the four premise structures in sequence.
Toulmin distinguished six elements of all arguments [24]:
- Data (D), at the beginning point of the argument, are the explicit elements of data (relevant data, or evidence) that are observed in the external world.
- Claim (C) is the assertion of the argument.
- Qualifier (Q) imposes any qualifications on the claim.
- Rebuttals (R) are any conditions that may refute the claim.
- Warrants (W) are the implicit propositions (rules, principles) that permit inference from data to claim.
- Backing (B) comprises the assurances that provide authority and currency to the warrants.
Applying Toulmin’s argumentation scheme requires the analyst to distinguish each of the six elements of argument and to fit them into a standard structure of reasoning—see Figure 6.8(a)—which leads from datum (D) to claim (C). The scheme separates the domain-independent structure from the warrants and backing, which are dependent upon the field in which we are working (e.g., legal cases, logical arguments, or morals).
The general structure, described in natural language, then proceeds from datum (D) to claim (C) as follows:
- The datum (D), supported by the warrant (W), which is founded upon the backing (B), leads directly to the claim (C), qualified to the degree (Q), with the caveat that rebuttal (R) is present.
Such a structure requires the analyst to identify all of the key components of the argument—and to report explicitly if any components are missing (e.g., if rebuttals or contradicting evidence do not exist).
The benefits of this scheme are the potential for the use of automation to aid analysts in the acquisition, examination, and evaluation of natural-language arguments. As an organizing tool, the Toulmin scheme distinguishes data (evidence) from the warrants (the universal premises of logic) and their backing (the basis for those premises).
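As one illustration of how the scheme lends itself to such tooling, the sketch below represents the six elements as a simple data structure and reports which components an analyst has not yet made explicit. The class, its field names, and the example argument are assumptions made for illustration; they are not part of Toulmin's scheme itself.

```python
# Minimal sketch, assuming a hypothetical ToulminArgument container.
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class ToulminArgument:
    data: str                        # D: evidence observed in the external world
    claim: str                       # C: the assertion of the argument
    warrant: str                     # W: implicit proposition licensing D -> C
    backing: Optional[str] = None    # B: authority supporting the warrant
    qualifier: Optional[str] = None  # Q: qualification on the claim
    rebuttal: Optional[str] = None   # R: conditions that may refute the claim

    def missing_components(self):
        """Report which elements the analyst has not yet supplied."""
        return [f.name for f in fields(self) if getattr(self, f.name) is None]

arg = ToulminArgument(
    data="The radar at A emits at high PRF on frequency F.",
    claim="The radar at A is a fire control radar.",
    warrant="Radars with this PRF/frequency signature are fire control radars.",
    qualifier="probably",
)
print(arg.missing_components())   # -> ['backing', 'rebuttal']
```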
It must be noted that formal logicians have criticized Toulmin’s scheme for its lack of logical rigor and its inability to address probabilistic arguments. Yet it has contributed greater insight and formality to the development of structured natural-language argumentation.
6.6.4 Inferential Networks
Moving beyond Toulmin’s structure, we must consider the approaches to create network structures to represent complex chains of inferential reasoning.
The use of graph theory to describe complex arguments allows the analyst to represent two crucial aspects of an argument:
- Argument structure. The directed graph represents evidence (E), events, or intermediate hypotheses inferred by the evidence (i), and the ultimate, or final, hypotheses (H) as graph nodes. The graph is directed because the lines connecting nodes include a single arrow indicating the single direction of inference. The lines move from a source element of evidence (E) through a series of inferences (i1, i2, i3, … in) toward a terminal hypothesis (H). The graph is acyclic because the directions of all arrows move from evidence, through intermediate inferences to hypothesis, but not back again: there are no closed-loop cycles.
- Force of evidence and propagation. In common terms we refer to the force, strength, or weight of evidence to describe the relative degree of contribution of evidence in support of an intermediate inference (in) or the ultimate hypothesis (H). The graph structure provides a means of describing supporting and refuting evidence, and, if evidence is quantified (e.g., probabilities, fuzzy variables, or other belief functions), a means of propagating the accumulated weight of evidence in an argument.
Like a vector, evidence has a direction (toward certain hypotheses) and a magnitude (the inferential force). The basic forms of argument can be structured to describe four categories of evidence combination (illustrated in Figure 6.9):
- Direct. The most basic serial chain of inference moves from evidence (E) that the event E occurred, to the inference (i1) that E did in fact occur. This inference expresses belief in the evidence (i.e., belief in the veracity and objectivity of human testimony). The chain may go on serially to further inferences because of the belief in E.
- Consonance. Multiple items of evidence may be synergistic, one item enhancing the force of another; their joint contribution provides more inferential force than their individual contributions. Two items of evidence may provide collaborative consonance; the figure illustrates the case where ancillary evidence (E2) is favorable to the credibility of the source of evidence (E1), thereby increasing the force of E1. Evidence may also be convergent when E1 and E2 provide evidence of the occurrence of different events, but those events, together, favor a common subsequent inference. The enhancing contribution of (i1) to (i2) is indicated by the dashed arrow.
- Redundant. Multiple items of evidence (E1, E2) that redundantly lead to a common inference (i1) can also diminish the force of each other in two basic ways. Corroborative redundancy occurs when two or more sources supply identical evidence of a common event inference (i1). If one source is perfectly credible, the redundant source does not contribute inferential force; if both have imperfect credibility, one may diminish the force of the other to avoid double counting the force of the redundant evidence. Cumulative redundancy occurs when multiple items of evidence (E1, E2), though inferring different intermediate hypotheses (i1, i2), respectively, lead to a common hypothesis (i3) farther up the reasoning chain. This redundant contribution to (i3), indicated by the dashed arrow, necessarily reduces the contribution of inferential force from E2.
- Dissonance. Dissonant evidence may be contradictory when items of evidence E1 and E2 report, mutually exclusively, that the event E did occur and did not occur, respectively. Conflicting evidence, on the other hand, occurs when E1 and E2 report two separate events i1 and i2 (both of which may have occurred, but not jointly), but these events favor mutually exclusive hypotheses at i3.
The graph moves from bottom to top in the following sequence:
- Direct evidence at the bottom;
- Evidence credibility inferences are the first row above evidence, inferring the veracity, objectivity, and sensitivity of the source of evidence;
- Relevance inferences move from credibility-conditioned evidence through a chain of inferences toward the final hypothesis;
- The final hypothesis is at the top.
Some may wonder why such rigor is employed for such a simple argument. This relatively simple example illustrates the level of inferential detail required to formally model even the simplest of arguments. It also illustrates the real problem faced by the analyst in dealing with the nuances of redundant and conflicting evidence. Most significantly, the example illustrates the degree of care required to accurately represent arguments to permit machine-automated reasoning about all-source analytic problems.
We can see how this simple model demands the explicit representation of often-hidden assumptions, every item of evidence, the entire sequence of inferences, and the structure of relationships that leads to our conclusion that H1 is true.
Inferential networks provide a logical structure upon which quantified calculations may be performed to compute values of inferential force of evidence and the combined contribution of all evidence toward the final hypothesis.
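A minimal sketch of such a calculation follows. The graph, the force values, and the combination rule (a simple noisy-OR) are all assumptions made for illustration; real inferential networks (e.g., Bayesian networks) use more careful probability models for propagating evidential force.

```python
# Minimal sketch, assuming a hypothetical inference graph and force values.

graph = {                 # child node -> list of parent nodes
    "i1": ["E1"],         # credibility inference from evidence E1
    "i2": ["E2"],         # credibility inference from evidence E2
    "i3": ["i1", "i2"],   # intermediate relevance inference
    "H":  ["i3"],         # final hypothesis
}
force = {"E1": 0.7, "E2": 0.5}   # hypothetical force of each item of evidence

def propagate(node):
    """Noisy-OR combination of parental forces toward the hypothesis."""
    if node in force:
        return force[node]
    p_none = 1.0
    for parent in graph[node]:
        p_none *= 1.0 - propagate(parent)
    return 1.0 - p_none

print(f"Accumulated force on H: {propagate('H'):.2f}")   # -> 0.85
```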
6.7 Evaluating Competing Hypotheses
Heuer’s research indicated that the single most important technique to overcome cognitive shortcomings is to apply a systematic analytic process that allows objective comparison of alternative hypotheses.
“The simultaneous evaluation of multiple, competing hypotheses entails far greater cognitive strain than examining a single, most-likely hypothesis”
Inferential networks are useful at the detail level, where evidence is rich; the analysis of competing hypotheses (ACH) approach is useful at higher levels of abstraction, where evidence is sparse. Networks are valuable for automated computation; ACH is valuable for collaborative analytic reasoning, presentation, and explanation. The ACH approach provides a methodology for the concurrent competition of multiple explanations, rather than a focus on the currently most plausible one.
The ACH approach described by Heuer uses a matrix to organize and describe the relationship between evidence and alternative hypotheses. The sequence of the analysis-synthesis process (Figure 6.11) includes:
- Hypothesis synthesis. A multidisciplinary team of analysts creates a set of feasible hypotheses, derived from imaginative consideration of all possibilities before constructing a complete set that merits detailed consideration.
- Evidence analysis. Available data is reviewed to locate relevant evidence and inferences that can be assigned to support or refute the hypotheses. Explicitly identify the assumptions regarding evidence and the arguments of inference. Following the processes described in the last chapter, list the evidence-argument pairs (or chains of inference) and identify, for each, the intrinsic value of its contribution and the potential for being subject to denial or deception (D&D).
- Matrix synthesis. Construct an ACH matrix that relates evidence-inference to the hypotheses defined in step 1.
- Matrix analysis. Assess the diagnosticity (the significance or diagnostic value of the contribution of each component of evidence and related inferences) of each evidence-inference component to each hypothesis. This process proceeds for each item of evidence-inference across the rows, considering how each item may contribute to each hypothesis. An entry may be supporting (consistent with), refuting (inconsistent with), or irrelevant (not applicable) to a hypothesis; a contribution notation (e.g., +, –, or N/A, respectively) is marked within the cell. Where possible, annotate the likelihood (or probability) that this evidence would be observed if the hypothesis is true. Note that the diagnostic significance of an item of evidence is reduced as it is consistent with multiple hypotheses; it has no diagnostic contribution when it supports, to any degree, all hypotheses.
- Matrix synthesis (refinement). Evidence assignments are refined, eliminating evidence and inferences that have no diagnostic value.
- Hypotheses analysis. The analyst now proceeds to evaluate the likelihood of each hypothesis by evaluating entries down the columns. The likelihood of each hypothesis is estimated by the characteristics of supporting and refuting evidence (as described in the last chapter). Inconsistencies and gaps in expected evidence provide a basis for retasking; a small but high-confidence item that refutes the preponderance of expected evidence may be a significant indicator of deception. The analyst also assesses the sensitivity of the likely hypothesis to contributing assumptions, evidence, and inferences; this sensitivity must be reported with conclusions, along with the consequences if any of these items are in error. This process may lead to retasking of collectors to acquire more data to support or refute hypotheses and to reduce the sensitivity of a conclusion.
- Decision synthesis (judgment). Reporting the analytic judgment requires the description of all of the alternatives (not just the most likely), the assumptions, evidence, and inferential chains. The report must also describe the gaps, inconsistencies, and their consequences on judgments. The analyst must also specify what should be done to provide an update on the situation and what indicators might point to significant changes in current judgments.
Notice that the ACH approach deliberately focuses the analyst’s attention on the contribution, significance, and relationships of evidence to hypotheses, rather than on building a case for any one hypothesis. The analytic emphasis is, first, on evidence and inference across the rows, before evaluating hypotheses, down the columns.
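The sketch below illustrates the matrix bookkeeping in code. The evidence items, hypothesis labels, and cell entries are hypothetical, and the scoring rule (counting inconsistent entries per hypothesis, in the spirit of the emphasis on refuting evidence noted above) is a simplification of the judgment the method actually calls for.

```python
# Minimal sketch, assuming hypothetical evidence, hypotheses, and entries.
# Entries are '+' (consistent), '-' (inconsistent), or 'N/A' (not applicable).

hypotheses = ["H1", "H2", "H31", "H32"]
matrix = {
    "E1: audited public accounts":       {"H1": "+", "H2": "-", "H31": "-", "H32": "-"},
    "E2: registered clothing importer":  {"H1": "+", "H2": "+", "H31": "+", "H32": "+"},
    "E3: unexplained wire transfers":    {"H1": "-", "H2": "+", "H31": "+", "H32": "+"},
    "E4: cash deposits via shell firms": {"H1": "-", "H2": "-", "H31": "+", "H32": "+"},
}

# Evidence consistent with every hypothesis has no diagnostic value.
for evidence, row in matrix.items():
    if all(row[h] == "+" for h in hypotheses):
        print(f"{evidence}: no diagnostic value; candidate for removal")

# Rank hypotheses by counting inconsistent entries down each column;
# the fewest inconsistencies ranks first.
scores = {h: sum(row[h] == "-" for row in matrix.values()) for h in hypotheses}
for h, inconsistencies in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{h}: {inconsistencies} inconsistent item(s) of evidence")
```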
The stages of the structured analysis-synthesis methodology (Figure 6.12) are summarized in the following list:
- Organize. A data mining tool (described in Chapter 8, Section 8.2.2) automatically clusters related data sets by identifying linkages (relationships) across the different data types. These linked clusters are visualized using link-clustering tools that allow the analyst to consider the meaningfulness of data links and discover potentially relevant relationships in the real world (a minimal link-clustering sketch follows this list).
- Conceptualize. The linked data is translated from the abstract relationship space to diagrams in the temporal and spatial domains to assess the real-world implications of the relationships. These temporal and spatial models allow the analyst to conceptualize alternative explanations that will become working hypotheses. Analysis in the time domain considers the implications of sequence, frequency, and causality, while the spatial domain considers the relative location of entities and events.
- Hypothesize. The analyst synthesizes hypotheses, structuring evidence and inferences into alternative arguments that can be evaluated using the method of alternative competing hypotheses. In the course of this process, the analyst may return to explore the database and linkage diagrams further to support or refute the working hypotheses.
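The following sketch shows, under assumed entities and relationships, the kind of grouping a link-clustering tool in the organize stage might automate: connected components over observed links. The entity names and links are invented for illustration and stand in for the data-mining tool described above.

```python
# Minimal sketch, assuming hypothetical entities and observed relationships.
from collections import defaultdict

links = [("person-1", "company-X"), ("company-X", "account-7"),
         ("person-2", "account-9")]

adjacency = defaultdict(set)
for a, b in links:
    adjacency[a].add(b)
    adjacency[b].add(a)

def cluster(seed, seen):
    """Depth-first walk collecting every entity reachable from the seed."""
    stack, members = [seed], set()
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            members.add(node)
            stack.extend(adjacency[node])
    return members

seen, clusters = set(), []
for entity in adjacency:
    if entity not in seen:
        clusters.append(cluster(entity, seen))
print(clusters)   # two clusters: {person-1, company-X, account-7}, {person-2, account-9}
```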
6.8 Countering Denial and Deception
Because the targets of intelligence are usually high-value subjects (e.g., intentions, plans, personnel, weapons or products, facilities, or processes), they are generally protected by some level of secrecy to prevent observation. The means of providing this secrecy generally includes two components:
- Denial. Information about the existence, characteristics, or state of a target is denied to the observer by methods of concealment. Camouflage of military vehicles, emission control (EMCON), operational security (OPSEC), and encryption of e-mail messages are common examples of denial, also referred to as dissimulation (hiding the real).
- Deception. Deception is the insertion of false information, or simulation (showing the false), with the intent to distort the perception of the observer. The deception can include misdirection (m-type) deception, which reduces ambiguity and directs the observer to a simulation—away from the truth—or ambiguity (a-type) deception, which simulates effects to increase the observer’s ambiguity (reducing understanding) about the truth.
D&D methods are used independently or in concert to distract or disrupt the intelligence analyst, introducing distortions in the collection channels, ambiguity in the analytic process, errors in the resulting intelligence product, and misjudgment in decisions based on the product. Ultimately, this will lead to distrust of the intelligence product by the decision maker or consumer. Strategic D&D poses an increasing threat to the analyst, as an increasing number of channels for D&D are available to deceivers. Six distinct categories of strategic D&D operations have different target audiences, means of implementation, and objectives.
Propaganda or psychological operations (PSYOP) target a general population using several approaches. White propaganda openly acknowledges the source of the information; gray propaganda uses undeclared sources. Black propaganda purports to originate from a source other than its actual sponsor, protecting the true source (e.g., clandestine radio and Internet broadcasts, independent organizations, or agents of influence). Coordinated white, gray, and black propaganda efforts were strategically conducted by the Soviet Union throughout the Cold War as active measures of disinformation.
Leadership deception targets leadership or intelligence consumers, attempting to bypass the intelligence process by appealing directly to the intelligence consumer via other channels. Commercial news channels, untrustworthy diplomatic channels, suborned media, and personal relationships can be exploited to deliver deception messages to leadership (before intelligence can offer D&D cautions) in an effort to establish mindsets in decision makers.
Intelligence deception specifically targets intelligence collectors (technical sensors, communications interceptors, and humans) and subsequently analysts by combining denial of the target data and by introducing false data to disrupt, distract, or deceive the collection or analysis processes (or both processes). The objective is to direct the attention of the sensor or the analyst away from a correct knowledge of a specific target.
Denial operations by means of OPSEC seek to deny access to true intentions and capabilities by minimizing the signatures of entities and activities.
Two primary categories of countermeasures for intelligence deception must be orchestrated to counter either the simple deception of a parlor magician or the complex intelligence deception program of a rogue nation-state. Both collection and analysis measures are required to provide the careful observation and critical thinking necessary to avoid deception. Improvements in collection can provide broader and more accurate coverage, even limited penetration of some covers.
The problem of mitigating intelligence surprise, therefore, must be addressed by considering both large numbers of models or hypotheses (analysis) and large sets of data (collection, storage, and analysis).
In his classic treatise, Stratagem, Barton Whaley exhaustively studied over 100 historical D&D efforts and concluded, “Indeed, this is the general finding of my study—that is, the deceiver is almost always successful regardless of the sophistication of his victim in the same art. On the face of it, this seems an intolerable conclusion, one offending common sense. Yet it is the irrefutable conclusion of historical evidence.”
The components of a rigorous counter D&D methodology, then, include the estimate of the adversary’s D&D plan as an intelligence subject (target) and the analysis of specific D&D hypotheses as alternatives. Incorporating this process within the ACH process described earlier amounts to assuring that reasonable and feasible D&D hypotheses (for which there may be no evidence to induce a hypothesis) are explicitly considered as alternatives.
This requires two active searches for evidence to support, refute, or refine the D&D hypotheses [44]:
- Reconstructive inference. This deductive process seeks to detect the presence of spurious signals (Harris calls these sprignals) that are indicators of D&D—the faint evidence predicted by conjectured D&D plans. Such sprignals can be strong evidence confirming hypothesis A (the simulation), weak contradictory evidence of hypothesis C (leakage from the adversary’s dissimulation effort), or missing evidence that should be present if hypothesis A were true.
- Incongruity testing. This process searches for inconsistencies in the data and inductively generates alternative explanations that attribute the incongruities to D&D (i.e., D&D explains the incongruity of evidence for more than one reality in simultaneous existence).
These processes should be a part of any rigorous alternative hypothesis process, developing evidence for potential D&D hypotheses while refining the estimate of the adversaries’ D&D intents, plans, and capabilities. The processes also focus attention on special collection tasking to support, refute, or refine current D&D hypotheses being entertained.
Summary
Central to the intelligence cycle, analysis-synthesis requires the integration of human skills and automation to provide description, explanation, and prediction with explicit and quantified judgments that include alternatives, missing evidence, and dissenting views carefully explained. The challenge of discovering the hidden, forecasting the future, and warning of the unexpected cannot be performed with infallibility, yet expectations remain high for the analytic community.
The practical implementation of collaborative analysis-synthesis requires a range of tools to coordinate the process within the larger intelligence cycle, augment the analytic team with reasoning and sensemaking support, overcome human cognitive shortcomings, and counter adversarial D&D.
7
Knowledge Internalization and Externalization
The process of conducting knowledge transactions between humans and computing machines occurs at the intersection between tacit and explicit knowledge, between human reasoning and sensemaking, and the explicit computation of automation. The processes of externalization (tacit-to-explicit transactions) and internalization (explicit-to-tacit transactions) of knowledge, however, are not just interfaces between humans and machines; more properly, the intersection is between human thought, symbolic representations of thought, and the observed world.
7.1 Externalization and Internalization in the Intelligence Workflow
The knowledge-creating spiral described in Chapter 3 introduced the four phases of knowledge creation.
Externalization
Following social interactions with collaborating analysts, an analyst begins to explicitly frame the problem. The process includes the decomposition of the intelligence problem into component parts (as described in Section 2.2) and the explicit articulation of the essential elements of information required to solve the problem. The tacit-to-explicit transfer includes the explicit listing of these essential elements of information, candidate sources of data, the creation of searches for relevant subject matter experts (SMEs), and the initiation of queries for relevant knowledge within current holdings and collected all-source data. The primary tools for interacting with all-source holdings are query and retrieval tools that search and retrieve information for assessment of relevance by the analyst.
Combination
This explicit-explicit transfer process correlates and combines the collected data in two ways:
- Interactive analytic tools. The analyst uses a wide variety of analytic tools to compare and combine data elements to identify relationships and marshal evidence against hypotheses.
- Automated data fusion and mining services. Automated data combination services also process high-volume data to bring detections of known patterns and discoveries of “interesting” patterns to the attention of the analyst.
Internalization
The analyst integrates the results of combination in two domains: externally, hypotheses (explicit models and simulations) and decision models (like the alternative competing hypotheses decision model introduced in the last chapter) are formed to explicitly structure the rationale among hypotheses; internally, the analyst develops tacit experience with the structured evidence, hypotheses, and decision alternatives.
Services in the data tier capture incoming data from processing pipelines (e.g., imagery and signals producers), reporting sources (news services, intelligence reporting sources), and open Internet sources being monitored. Content appropriate for immediate processing and production, such as news alerts, indications and warning events, and critical change data, is routed to the operational storage for immediate processing. All data are indexed, transformed, and loaded into the long-term data warehouse or into specialized data stores (e.g., imagery, video, or technical databases). The intelligence services tier includes six basic service categories:
- Operational processing. Information filtered for near-real-time criticality is processed to extract and tag content, correlate and combine it with related content, and provide updates to operational watch officers. This path applies the automated processes of data fusion and data mining to provide near-real-time indicators, tracks, metrics, and situation summaries.
- Indexing, query, and retrieval. Analysts use these services to access the accumulating holdings, both by automated subscriptions for topics of interest to be pushed to the user upon receipt and by interactive query and retrieval of holdings.
- Cognitive (analytic) services. The analysis-synthesis and decision-making processes described in Chapters 5 and 6 are supported by cognitive services (thinking-support tools).
- Collaboration services. These services, described in Chapter 4, allow synchronous and asynchronous collaboration between analytic team members.
- Digital production services. Analyst-generated and automatically created dynamic products are produced and distributed to consumers based on their specified preferences.
- Workflow management. The workflow is managed across all tiers to monitor the flow from data to product, to monitor resource utilization, to assess satisfaction of current priority intelligence requirements, and to manage collaborating workgroups.
7.2 Storage, Query, and Retrieval Services
At the center of the enterprise is the knowledge base, which stores explicit knowledge and provides the means to access that knowledge to create new knowledge.
7.2.1 Data Storage
Intelligence organizations receive a continuous stream of data from their own tasked technical sensors and human sources, as well as from tasked collections of data from open sources. One example might be Web spiders that are tasked to monitor Internet sites for new content (e.g., foreign news services), then to collect, analyze, and index the data for storage. The storage issues posed by the continual collection of high-volume data are numerous:
- Diversity. All-source intelligence systems require large numbers of independent data stores for imagery, text, video, geospatial, and special technical data types. These data types are served by an equally high number of specialized applications (e.g., image and geospatial analysis and signal analysis).
- Legacy. Storage system designers are confronted with the integration of existing (legacy) and new storage systems; this requires the integration of diverse logical and physical data types.
- Federated retrieval and analysis. The analyst needs retrieval, application, and analysis capabilities that span across the entire storage system.
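The tasked Web-spider collection mentioned above might look like the following minimal sketch: it polls a set of placeholder source URLs, detects changed content by hashing, and hands new content to a hypothetical index_document() step. The URLs and the indexing hook are illustrative assumptions, not part of the text.

```python
# A minimal sketch of a tasked Web-spider collection monitor (hypothetical URLs
# and a placeholder index_document() hook): poll open sources, detect changed
# content by hashing, and hand new content to an indexing step.
import hashlib
import time
import urllib.error
import urllib.request

MONITORED_SOURCES = [
    "https://example.org/foreign-news",       # placeholder tasked source
    "https://example.org/regional-bulletin",  # placeholder tasked source
]

last_seen = {}  # URL -> hash of the content last collected


def index_document(url, text):
    """Placeholder for the transform/index/load step into the data warehouse."""
    print(f"indexing {len(text)} characters from {url}")


def poll_once():
    for url in MONITORED_SOURCES:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                text = resp.read().decode("utf-8", errors="replace")
        except urllib.error.URLError:
            continue  # source unreachable; try again on the next cycle
        digest = hashlib.sha256(text.encode()).hexdigest()
        if last_seen.get(url) != digest:  # new or changed content
            last_seen[url] = digest
            index_document(url, text)


if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(3600)  # re-task the spider hourly
```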
7.2.2 Information Retrieval
Information retrieval (IR) is formally defined as “… [the] actions, methods and procedures for recovering stored data to provide information on a given subject” [2]. Two approaches to query and retrieve stored data or text are required in most intelligence applications:
- Data query and retrieval is performed on structured data stored in relational database applications. Imagery, signals, and MASINT data are generally stored in structured formats that employ the structured query language (SQL) and SQL extensions for a wide variety of databases (e.g., Access, IBM DB2 and Informix, Microsoft SQL Server, Oracle, and Sybase). SQL allows the user to retrieve data by context (e.g., by location in data tables, such as date of occurrence) or by content (e.g., retrieve all records with a defined set of values); a minimal query sketch follows this list.
- Text query and retrieval is performed on both structured and unstructured text in multiple languages by a variety of natural language search engines to locate text containing specific words, phrases, or general concepts within a specified context.
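As a rough illustration of the query-by-context and query-by-content styles, the following sketch uses SQLite and a hypothetical detections table; the schema, values, and thresholds are invented for illustration and are not drawn from the text.

```python
# Sketch of the two data-query styles described above, using SQLite and a
# hypothetical "detections" table; the schema and values are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE detections (
    id INTEGER PRIMARY KEY,
    sensor TEXT, signal_type TEXT, occurred_on TEXT, confidence REAL)""")
conn.executemany(
    "INSERT INTO detections (sensor, signal_type, occurred_on, confidence) VALUES (?, ?, ?, ?)",
    [("SIG-12", "radar", "2002-03-01", 0.91),
     ("IMG-03", "vehicle", "2002-03-02", 0.55),
     ("SIG-12", "radar", "2002-03-05", 0.78)])

# Query by context: retrieve by location in the table (e.g., date of occurrence).
by_context = conn.execute(
    "SELECT * FROM detections WHERE occurred_on BETWEEN ? AND ?",
    ("2002-03-01", "2002-03-03")).fetchall()

# Query by content: retrieve all records with a defined set of values.
by_content = conn.execute(
    "SELECT * FROM detections WHERE signal_type = ? AND confidence > ?",
    ("radar", 0.8)).fetchall()

print(by_context, by_content, sep="\n")
```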
Data query methods are employed within the technical data processing pipelines (IMINT, SIGINT, and MASINT). The results of these analyses are then described by analysts in structured or unstructured text in an analytic database for subsequent retrieval by text query methods.
Moldovan and Harabagiu have defined a five-level taxonomy of Q&A systems (Table 7.1) that range from the common keyword search engine that searches for relevant content (class 1) to reasoning systems that solve complex natural language problems (class 5) [3]. Each level requires increasing scope of knowledge, depth of linguistic understanding, and sophistication of reasoning to translate relevant knowledge to an answer or solution.
The first two levels of current search capabilities locate and return relevant content based on keywords (content) or the relationships between clusters of words in the text (concept).
While class 1 capabilities only match and return content that matches the query, class 2 capabilities integrate the relevant data into a simple response to the question.
Class 3 capabilities require the retrieval of relevant knowledge and reasoning about that knowledge to deduce answers to queries, even when the specific answer is not explicitly stated in the knowledge base. This capability requires the ability to both reason from general knowledge to specific answers and provide rationale for those answers to the user.
Class 4 and 5 capabilities represent advanced capabilities, which require robust knowledge bases that contain sophisticated knowledge representation (assertions and axioms) and reasoning (mathematical calculation, logical inference, and temporal reasoning).
7.3 Cognitive (Analytic Tool) Services
Cognitive services support the analyst in the process of interactively analyzing data, synthesizing hypotheses, and making decisions (choosing among alternatives). These interactive services support the analysis-synthesis activities described in Chapters 5 and 6. Alternatively called thinking tools, analytics, knowledge discovery, or analytic tools, these services enable the human to transform and view data, create and model hypotheses, and compare alternative hypotheses and consequences of decisions.
- Exploration tools allow the analyst to interact with raw or processed multi- media (text, numerical data, imagery, video, or audio) to locate and organize content relevant to an intelligence problem. These tools provide the ability to search and navigate large volumes of source data; they also provide automated taxonomies of clustered data and summaries of individual documents. The information retrieval functions described in the last subsection are within this category. The product of exploration is generally a relevant set of data/text organized and metadata tagged for subsequent analysis. The analyst may drill down to detail from the lists and summaries to view the full content of all items identified as relevant.
- Reasoning tools support the analyst in the process of correlating, comparing, and combining data across all of the relevant sources. These tools support a wide variety of specific intelligence target analyses:
- Temporal analysis. This is the creation of timelines of events, dynamic relationships, event sequences, and temporal transactions (e.g., electronic, financial, or communication).
- Link analysis. This involves automated exploration of relationships among large numbers of different types of objects (entities and events); a minimal link-analysis sketch follows these lists.
- Spatial analysis. This is the registration and layering of 3D data sets and creation of 3D static and dynamic models from all-source evidence. These capabilities are often met by commercial geospatial information system and computer-aided design (CAD) software.
- Functional analysis. This is the analysis of processes and expected observables (e.g., manufacturing, business, and military operations, social networks and organizational analysis, and traffic analysis).
These tools aid the analyst in five key analytic tasks:
- Correlation: detection and structuring of relationships or linkages between different entities or events in time, space, function, or interaction; association of different reports or content related to a common entity or event;
- Combination: logical, functional, or mathematical joining of related evidence to synthesize a structured argument, process, or quantitative estimate;
- Anomaly detection: detection of differences between expected (or modeled) and observed characteristics of a target;
- Change detection: detection of changes in a target over time—the changes may include spectral, spatial, or other phenomenological changes;
- Construction: synthesis of a model or simulation of entities or events and their interactions based upon evidence and conjecture.
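As a minimal illustration of the link-analysis task referenced above, the following sketch (using the networkx library) builds a small graph of invented entities and transactions, ranks entities by degree centrality, and traces an indirect path between two persons of interest; the node names and edge types are assumptions for illustration only.

```python
# Minimal link-analysis sketch: entities and events as nodes, observed
# transactions as edges. Names are invented for illustration; a production
# tool would ingest tagged entities extracted from the source holdings.
import networkx as nx

G = nx.Graph()
edges = [
    ("Person A", "Account 4471", "controls"),
    ("Person B", "Account 4471", "transfers to"),
    ("Person B", "Phone +00-555-0102", "uses"),
    ("Person C", "Phone +00-555-0102", "calls"),
]
for src, dst, kind in edges:
    G.add_edge(src, dst, kind=kind)

# Which entities tie the network together? Degree centrality is one rough cue.
print(sorted(nx.degree_centrality(G).items(), key=lambda kv: -kv[1]))

# Is there an indirect path linking two persons of interest?
print(nx.shortest_path(G, "Person A", "Person C"))
```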
Sensemaking tools support the exploration, evaluation, and refinement of alternative hypotheses and explanations of the data. Argumentation structuring, modeling, and simulation tools in this category allow analysts to be immersed in their hypotheses and share explicit representations with other collaborators. This immersion process allows the analytic team to create shared meaning as they experience the alternative explanations.
Decision support (judgment) tools assist analytic decision making by explicitly estimating and comparing the consequences and relative merits of alternative decisions.
These tools include models and simulations that permit the analyst to create and evaluate alternative COAs and weigh the decision alternatives against objective decision criteria. Decision support systems (DSSs) apply the principles of probability to express uncertainty and decision theory to create and assess attributes of decision alternatives and quantify the relative utility of alternatives. Normative, or decision-analytic, DSSs aid the analyst in structuring the decision problem and in computing the many factors that lead from alternatives to quantifiable attributes and resulting utilities. These tools often relate attributes to utility by influence diagrams and compute utilities (and associated uncertainties) using Bayes networks.
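The following is a minimal sketch of the expected-utility comparison at the heart of a normative decision aid; a full DSS would structure the problem with influence diagrams and compute uncertainties with Bayes networks, which this sketch does not attempt. The alternatives, probabilities, and utility values are invented placeholders.

```python
# Sketch of a normative decision aid: score alternative courses of action by
# expected utility over uncertain outcomes. The alternatives, probabilities,
# and utilities are invented placeholders, not values from the text.
ALTERNATIVES = {
    "interdict finances": [(0.6, 80), (0.4, 20)],   # (P(outcome), utility)
    "monitor and wait":   [(0.9, 40), (0.1, 10)],
    "joint field action": [(0.5, 95), (0.5, -30)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

ranked = sorted(ALTERNATIVES.items(),
                key=lambda kv: expected_utility(kv[1]), reverse=True)
for name, outcomes in ranked:
    print(f"{name:20s} EU = {expected_utility(outcomes):6.1f}")
```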
The tools progressively move from data as the object of analysis (for exploration) to clusters of related information, to hypotheses, and finally on to decisions, or analytic judgments.
Intelligence workflow management software can provide a means to organize the process by providing the following functions:
- Requirements and progress tracking: maintains list of current intelligence requirements, monitors tasking to meet the requirements, links evidence and hypotheses to those requirements, tracks progress toward meeting requirements, and audits results;
- Relevant data linking: maintains ontology of subjects relevant to the intelligence requirements and their relationships and maintains a database of all relevant data (evidence);
- Collaboration directory: automatically locates and updates a directory of relevant subject matter experts as the problem topic develops.
In this example, an intelligence consumer has requested specific intelligence on a drug cartel named “Zehga” to support counter-drug activities in a foreign country. The sequence of one analyst’s use of tools in the example includes:
- The process begins with synchronous collaboration with other analysts to discuss the intelligence target (Zehga) and the intelligence requirements to understand the cartel organization structure, operations, and finances. The analyst creates a peer-to-peer collaborative workspace that contains requirements, essential elements of information (EEIs) needed, current intelligence, and a directory of team members before inviting additional counter-drug subject matter experts to the shared space.
- The analyst opens a workflow management tool to record requirements, key concepts and keywords, and team members; the analyst will link results to the tool to track progress in delivering finished intelligence. The tool is also used to request special tasking from technical collectors (e.g., wiretaps) and field offices.
- Once the problem has been externalized in terms of requirements and EEIs needed, the sources and databases to be searched are selected (e.g., country cables, COMINT, and foreign news feeds and archives). Key concepts and keywords are entered into IR tools; these tools search current holdings and external sources, retrieving relevant multimedia content. The analyst also sets up monitor parameters to continually check certain sources (e.g., field office cables and foreign news sites) for changes or detections of relevant topics; when detected, the analyst will be alerted to the availability of new information.
- The IR tools also create a taxonomy of the collected data sets, structuring the catch into five major categories: Zehga organization (personnel), events, finances, locations, and activities. The taxonomy breaks each category into subcategories of clusters of related content. Documents located in open-source foreign news reports are translated into English, and all documents are summarized into 55-word abstracts (a minimal clustering sketch follows this walkthrough).
- The analyst views the taxonomy and drills down to summaries, then views the full content of the items most critical to the investigation. Selected items (or hyperlinks) are saved to the shared knowledge base to form a local repository relevant to the investigation.
- The retrieved catch is analyzed with text mining tools that discover and list the multidimensional associations (linkages or relationships) between entities (people, phone numbers, bank account numbers, and addresses) and events (meetings, deliveries, and crimes).
- The linked lists are displayed on a link-analysis tool to allow the analyst to manipulate and view the complex web of relationships between people, communications, finances, and the time sequence of activities. From these network visuals, the analyst begins discovering the Zehga organizational structure, relationships to other drug cartels and financial institutions, and the timeline of explosive growth of the cartel’s influence.
- The analyst internalizes these discoveries by synthesizing a Zehga organization structure and associated financial model, filling in the gaps with conjectures that result in three competing hypotheses: a centralized model, a federated model, and a loose network model. These models are created using a standard financial spreadsheet and a network relationship visualization tool. The process of creating these hypotheses causes the analyst to frequently return to the knowledge base to review retrieved data, to issue refined queries to fill in the gaps, and to further review the results of link analyses. The model synthesis process causes the analyst to internalize impressions of confidence, uncertainty, and ambiguity in the evidence, and the implications of potential missing or negative evidence. Here, the analyst ponders the potential for denial and deception tactics and the expected subtle “sprignals” that might appear in the data.
- An ACH matrix is created to compare the accrued evidence and argumentation structures supporting each of the competing models. At any time, this matrix and the associated organizational-financial models summarize the status of the intelligence process; these may be posted on the collaboration space and used to identify progress on the workflow management tool.
- The analyst further internalizes the situation by applying a decision support tool to consider the consequences or implications of each model on counter-drug policy courses of action relative to the Zehga cartel.
- Once the analyst has reached a level of confidence to make objective analytic judgments about hypotheses, results can be digitally published to the requesting consumers and to the collaborative workgroup to begin socialization—and another cycle to further refine the results. (The next section describes the digital publication process.)
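The clustering step referenced in the walkthrough above might be sketched as follows using TF-IDF features and k-means from scikit-learn; the documents and the number of clusters are illustrative assumptions, and a production taxonomy tool would add labeling, summarization, and drill-down.

```python
# Rough sketch of the automated taxonomy step from the walkthrough above:
# cluster retrieved documents by text similarity. Uses scikit-learn; the
# documents and the number of clusters are illustrative assumptions.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

documents = [
    "wire transfer routed through offshore holding account",
    "cartel lieutenants met at the port warehouse",
    "shipment schedule posted to the front company site",
    "new bank accounts opened under shell company names",
    "surveillance noted meetings near the airstrip",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(documents)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for label, text in sorted(zip(labels, documents)):
    print(label, text[:50])
```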
Commercial tool suites such as Wincite’s eWincite, Wisdom Builder’s WisdomBuilder, and Cipher’s Knowledge.Works similarly integrate text-based tools to support competitive intelligence analysis.
Tacit capture and collaborative filtering monitors the activities of all users on the network and uses statistical clustering methods to identify the emergent clusters of interest that indicate communities of common practice. Such filtering could, for example, identify and alert analysts who are converging on a common suspect from different directions (e.g., money laundering and drug trafficking).
7.4 Intelligence Production, Dissemination, and Portals
The externalization-to-internalization workflow results in the production of digital intelligence content suitable for socialization (collaboration) across users and consumers. This production and dissemination of intelligence from KM enterprises has transitioned from static, hardcopy reports to dynamically linked digital softcopy products presented on portals.
Digital production processes employ content technologies that index, structure, and integrate fragmented components of content into deliverable products. In the intelligence context, content includes:
- Structured numerical data (imagery, relational database queries) and text [e.g., extensible markup language (XML)-formatted documents] as well as unstructured information (e.g., audio, video, text, and HTML content from external sources);
- Internally or externally created information;
- Formally created information (e.g., cables, reports, and imagery or signals analyses) as well as informal or ad hoc information (e.g., e-mail and collaboration exchanges);
- Static or active (e.g., dynamic video or even interactive applets) content.
The key to dynamic assembly is the creation and translation of all content to a form that is understood by the KM system. While most intelligence data is transactional and structured (e.g., imagery, signals, and MASINT), intelligence and open-source documents are unstructured; although the volume of open-source content on the Internet and of closed-source intelligence content grows exponentially, that content remains largely unstructured.
Content technology provides the capability to transform all sources to a common structure for dynamic integration and personalized publication. XML offers a method of embedding content descriptions by tagging each component with descriptive information that allows automated assembly and distribution of multimedia content.
Intelligence standards being developed include an intelligence information markup language (ICML) specification for intelligence reporting and metadata standards for security, specifying digital signatures (XML-DSig), security/encryption (XML-Sec), key management (XML-KMS), and information security marking (XML-ISM) [12]. Such tagging makes the content interoperable; it can be reused and automatically integrated in numerous ways:
- Numerical data may be correlated and combined.
- Text may be assembled into a complete report (e.g., target abstract, targetpart1, targetpart2, …, related targets, most recent photo, threat summary, assessment).
- Various formats may be constructed from a single collection of contents to suit unique consumer needs (e.g., portal target summary format, personal digital assistant format, or pilot’s cockpit target folder format).
A document object model (DOM) tree can be created from the integrated result to transform it into a variety of formats (e.g., HTML or PDF) for digital publication.
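A minimal sketch of this assembly-and-transform idea follows: components are tagged with metadata attributes, gathered into an XML product tree, and serialized into one delivery format. The element names and classification markings are invented for illustration and do not follow the ICML or XML-ISM schemas.

```python
# Sketch of XML-tagged content assembly: components are tagged with metadata,
# assembled into a product tree (a DOM-like structure), then serialized for
# one delivery format. Element names here are illustrative, not a real schema.
import xml.etree.ElementTree as ET

product = ET.Element("intelProduct", attrib={"topic": "target-123"})
for name, text in [("abstract", "Target summary ..."),
                   ("recentPhotoRef", "img/2002-03-05/frame17"),
                   ("threatSummary", "Assessed activity level: moderate")]:
    component = ET.SubElement(product, name, attrib={"classification": "EXAMPLE"})
    component.text = text

# Transform the assembled tree into one presentation format (here, simple HTML).
html = ["<html><body>"]
for component in product:
    html.append(f"<h3>{component.tag}</h3><p>{component.text}</p>")
html.append("</body></html>")

print(ET.tostring(product, encoding="unicode"))
print("\n".join(html))
```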
The analysis and single-source publishing architecture adopted by the U.S. Navy Command 21 K-Web (Figure 7.7) illustrates a highly automated digital production process for intelligence and command applications [14]. The production workflow in the figure includes the processing, analysis, and dissemination steps of the intelligence cycle:
- Content collection and creation (processing and analysis). Both quantitative technical data and unstructured text are received, and content is extracted and tagged for subsequent processing. This process is applied to legacy data (e.g., IMINT and SIGINT reports), structured intelligence message traffic, and unstructured sources (e.g., news reports and intelligence e-mail). Domain experts may support the process by creating metadata in a predefined XML metadata format to append to audio, video, or other nontext sources. Metadata includes source, pedigree, time of collection, and format information. New content created by analysts is entered in standard XML DTD templates.
- Content applications. XML-tagged content is entered in the data mart, where data applications recognize, correlate, consolidate, and summarize content across the incoming components. A correlation agent may, for example, correlate all content relative to a new event or entity and pass the content on to a consolidation agent to index the components for subsequent integration into an event or target report. The data (and text) fusion and mining functions described in the next chapter are performed here.
- Content management-product creation (production). Product templates dictate the aggregation of content into standard intelligence products: warnings, current intelligence, situation updates, and target status. These composite XML-tagged products are returned to the data mart.
- Content publication and distribution. Intelligence products are personalized in terms of both style (presentation formats) and distribution (to users with an interest in the product). Users may explicitly define their areas of interests, or the automated system may monitor user activities (through queries, collaborative discussion topics, or folder names maintained) to implicitly estimate areas of interest to create a user’s personal profile. Presentation agents choose from the style library and user profiles to create distribution lists for content to be delivered via e-mail, pushed to users’ custom portals, or stored in the data mart for subsequent retrieval. The process of content syndication applies an information and content exchange (ICE) standard to allow a single product to be delivered in multiple styles and to provide automatic content update across all users.
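The personalization and distribution step might be sketched minimally as a match between a product's topic tags and user interest profiles; the tags, profiles, and user names below are invented for illustration.

```python
# Sketch of personalized distribution: match a product's topic tags against
# user interest profiles to build a push list. Profiles and tags are invented.
PRODUCT = {"id": "sitrep-0417", "tags": {"counter-drug", "finances", "zehga"}}

USER_PROFILES = {
    "watch_officer_1": {"warning", "counter-drug"},
    "analyst_team_b": {"zehga", "finances"},
    "policy_desk": {"trade", "sanctions"},
}

push_list = [user for user, interests in USER_PROFILES.items()
             if PRODUCT["tags"] & interests]
print(push_list)   # users whose profiles overlap the product's tags
```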
The user’s single entry point is a personalized portal (or Web portal) that provides an organized entry into the information available on the intelligence enterprise.
7.5 Human-Machine Information Transactions and Interfaces
In all of the services and tools described in the previous sections, the intelligence analyst interacts with explicitly collected data, applying his or her own tacit knowledge about the domain of interest to create estimates, descriptions, explanations, and predictions based on collected data. This interaction between the analyst and KM systems requires efficient interfaces to conduct the transaction between the analyst and machine.
7.5.1 Information Visualization
Edward Tufte introduced his widely read text Envisioning Information with the prescient observation that, “Even though we navigate daily through a perceptual world of three dimensions and reason occasionally about higher dimensional arenas with mathematical ease, the world portrayed on our information displays is caught up in the two-dimensionality of the flatlands of paper and video screen”. Indeed, intelligence organizations are continually seeking technologies that will allow analysts to escape from this flatland.
The essence of visualization is to provide multidimensional information to the analyst in a form that allows immediate understanding by this visual form of thinking.
A wide range of visualization methods are employed in analysis (Table 7.6) to allow the user to:
- Perceive patterns and rapidly grasp the essence of large complex (multi-dimensional) information spaces, then navigate or rapidly browse through the space to explore its structure and contents;
- Manipulate the information and visual dimensions to identify clusters of associated data, patterns of linkages and relationships, trends (temporal behavior), and outlying data;
- Combine the information by registering, mathematically or logically joining (fusing), or overlaying it.
7.5.2 Analyst-Agent Interaction
Intelligent software agents tailored to support knowledge workers are being developed to provide autonomous automated support in the information retrieval and exploration tasks introduced throughout this chapter. These collaborative information agents, operating in multiagent networks, provide the potential to amplify the analyst’s exploration of large bodies of data, as they search, organize, structure, and reason about findings before reporting results. Information agents are being developed to perform a wide variety of functions, as an autonomous collaborating community under the direction of a human analyst, including:
- Personal information agents (PIMs) coordinate an analyst’s searches and organize bookmarks to relevant information; like a team of librarians, the PIMs collect, filter, and recommend relevant materials for the analyst.
- Brokering agents mediate the flow of information between users and sources (databases, external sources, collection processors); they can also act as sentinels to monitor sources and alert users to changes or the availability of new information.
- Planning agents accept requirements and create plans to coordinate agents and task resources to meet user goals.
Agents also offer the promise of a means of interaction with the analyst that emulates face-to-face conversation and will ultimately allow information agents to collaborate as (near) peers with individuals and teams of human analysts. These interactive agents (or avatars) will track the analyst (or analytic team) activities and needs to conduct dialogue with the analysts (in terms of the semantic concepts familiar to the topic of interest) to contribute the following kinds of functions:
- Agent conversationalists that carry on dialogue to provide high-bandwidth interactions that include multimodal input from the analyst (e.g., spoken natural language, keyboard entries, and gestures and gaze) and multimodal replies (e.g., text, speech, and graphics). Such conversationalists will increase “discussions” about concepts, relevant data, and possible hypotheses [23].
- Agent observers that monitor analyst activity, attention, intention, and task progress to converse about suggested alternatives, potentials for denial and deception, or warnings that the analyst’s actions imply cognitive shortcomings (discussed in Chapter 6) may be influencing the analysis process.
- Agent contributors that will enter into collaborative discussions to interject alternatives, suggestions, or relevant data.
The integration of collaborating information agents and information visualization technologies holds the promise of more efficient means of helping analysts find and focus on relevant information, but these technologies require greater maturity to manage uncertainty, dynamically adapt to the changing analytic context, and understand the analyst’s intentions.
7.6 Summary
The analytic workflow requires a constant interaction between the cognitive and visual-perceptive processes in the analyst’s mind and the explicit representations of knowledge in the intelligence enterprise.
8
Explicit Knowledge Capture and Combination
In the last chapter, we introduced analytic tools that allow the intelligence analyst to interactively correlate, compare, and combine numerical data and text to discover clusters and relationships among events and entities within large databases. These interactive combination tools are considered to be goal-driven processes: the analyst is driven by a goal to seek solutions within the database, and the reasoning process is interactive, with the analyst and machine in a common reasoning loop. This chapter focuses on the largely automated combination processes that tend to be data driven: as data continuously arrives from intelligence sources, the incoming data drives a largely automated process that continually detects, identifies, and tracks emerging events of interest to the user. These parallel goal-driven and data-driven processes were depicted as complementary combination processes in the last chapter.
In all cases, the combination processes help sources to cross-cue each other, locate and identify target events and entities, detect anomalies and changes, and track dynamic targets.
8.1 Explicit Capture, Representation, and Automated Reasoning
The term combination introduced by Nonaka and Takeuchi in the knowledge-creation spiral is an abstraction to describe the many functions that are performed to create knowledge, such as correlation, association, reasoning, inference, and decision (judgment). This process requires the explicit representation of knowledge; in the intelligence application this includes knowledge about the world (e.g., incoming source information), knowledge of the intelligence domain (e.g., characteristics of specific weapons of mass destruction and their production and deployment processes), and the more general procedural knowledge about reasoning.
The DARPA Rapid Knowledge Formation (RKF) project and its predecessor, the High-Performance Knowledge Base project, represent ambitious research aimed at providing a robust explicit knowledge capture, representation, and combination (reasoning) capability targeted toward the intelligence analysis application [1]. The projects focused on developing the tools to create and manage shared, reusable knowledge bases on specific intelligence domains (e.g., biological weapons subjects); the goal is to enable creation of over one million axioms of knowledge per year by collaborating teams of domain experts. Such a knowledge base requires a computational ontology—an explicit specification that defines a shared conceptualization of reality that can be used across all processes.
The challenge is to encode knowledge through the instantiation and assembly of generic knowledge components that can be readily entered and understood by domain experts (appropriate semantics) and provide sufficient coverage to encompass an expert level of understanding of the domain. The knowledge base must have fundamental knowledge of entities (things that are), events (things that happen), states (descriptions of stable event characteristics), and roles (entities in the context of events). It must also describe the relationships between these components (e.g., cause, object of, part of, purpose of, or result of) and the properties (e.g., color, shape, capability, and speed) of each.
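A minimal sketch of these knowledge components follows, using simple Python dataclasses for entities, events, and typed relationships; the layout and the example instances are illustrative assumptions, not the RKF or HPKB representation itself.

```python
# Minimal sketch of the knowledge components described above: entities, events,
# and typed relationships between them. The dataclass layout is an illustrative
# assumption, not the project's actual knowledge representation.
from dataclasses import dataclass, field

@dataclass
class Entity:
    name: str
    properties: dict = field(default_factory=dict)   # e.g., color, capability

@dataclass
class Event:
    name: str
    roles: dict = field(default_factory=dict)        # role name -> Entity

@dataclass
class Relation:
    kind: str            # e.g., "cause", "part of", "result of"
    source: object
    target: object

lab = Entity("mobile lab", {"capability": "agent production"})
shipment = Event("equipment shipment", roles={"object": lab})
knowledge_base = [Relation("purpose of", shipment, lab)]

for rel in knowledge_base:
    print(rel.kind, "->", rel.source.name, "/", rel.target.name)
```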
8.2 Automated Combination
Two primary categories of the combination processes can be distinguished, based on their approach to inference; each is essential to intelligence processing and analysis.
The inductive process of data mining discovers previously unrecognized patterns in data (new knowledge about characteristics of an unknown pattern class) by searching for patterns (relationships in data) that are in some sense “interesting.” The discovered candidates are usually presented to human users for analysis and validation before being adopted as general cases [3].
The deductive process, data fusion, detects the presence of previously known patterns in many sources of data (new knowledge about the existence of a known pattern in the data). This is performed by searching for specific pattern templates in sensor data streams or databases to detect entities, events, and complex situations comprised of interconnected entities and events.
The data sets used by these processes for knowledge creation are incomplete, dynamic, and contaminated by noise. These factors lead to the following process characteristics:
- Pattern descriptions. Data mining seeks to induce general pattern descriptions (reference patterns, templates, or matched filters) to characterize data already understood, while data fusion applies those descriptions to detect the presence of patterns in new data.
- Uncertainty in inferred knowledge. The data and reference patterns are uncertain, leading to uncertain beliefs or knowledge.
- Dynamic state of inferred knowledge. The process is sequential and inferred knowledge is dynamic, being refined as new data arrives.
- Use of domain knowledge. Knowledge about the domain (e.g., constraints, context) may be used in addition to collected raw intelligence data.
8.2.1 Data Fusion
Data fusion is an adaptive knowledge creation process in which diverse elements of similar or dissimilar observations (data) are aligned, correlated, and combined into organized and indexed sets (information), which are further assessed to model, understand, and explain (knowledge) the makeup and behavior of a domain under observation.
The data-fusion process seeks to explain an adversary (or uncooperative) intelligence target by abstracting the target and its observable phenomena into a causal or relationship model, then applying all-source observation to detect entities and events to estimate the properties of the model. Consider the levels of representation in the simple target-observer processes in Figure 8.2 [6]. The adversary leadership holds to goals and values that create motives; these motives, combined with beliefs (created by perception of the current situation), lead to intentions. These intentions lead to plans and responses to the current situation; from alternative plans, decisions are made that lead to commands for action. In a hierarchical military, or a networked terrorist organization, these commands flow to activities (communication, logistics, surveillance, and movements). Using the three domains of reality terminology introduced in Chapter 5, the motive-to-decision events occur in the adversary’s cognitive domain with no observable phenomena.
The data-fusion process uses observable evidence from both the symbolic and physical domains to infer the operations, communications, and even the intentions of the adversary.
The emerging concept of effects-based military operations (EBO) requires intelligence products that provide planners with the ability to model the various effects influencing a target that make up a complex system. Planners and operators require intelligence products that integrate models of the adversary’s physical infrastructure, information networks, and leadership and decision making.
The U.S. DoD JDL has established a formal process model of data fusion that decomposes the process into five basic levels of information-refining processes (based upon the concept of levels of information abstraction) [8]:
- Level 0: Data (or subobject) refinement. This is the correlation across signals or data (e.g., pixels and pulses) to recognize components of an object and the correlation of those components to recognize an object.
- Level 1: Object refinement. This is the correlation of all data to refine individual objects within the domain of observation. (The JDL model uses the term object to refer to real-world entities; however, the subject of interest may be a transient event in time as well.)
- Level 2: Situation refinement. This is the correlation of all objects (information) within the domain to assess the current situation.
- Level 3: Impact refinement. This is the correlation of the current situation with environmental and other constraints to project the meaning of the situation (knowledge). The meaning of the situation refers to its implications to the user: threat, opportunity, change, or consequence.
- Level 4: Process refinement. This is the continual adaptation of the fusion process to optimize the delivery of knowledge against a defined mission objective.
8.2.1.1 Level 0: Data Refinement
Raw data from sensors may be calibrated, corrected for bias and gain errors, limited (thresholded), and filtered to remove systematic noise sources. Object detection may occur at this point—in individual sensors or across multiple sensors (so-called predetection fusion). The object-detection process forms observation reports that contain data elements such as observation identifier, time of measurement, measurement or decision data, decision, and uncertainty data.
8.2.1.2 Level 1: Object Refinement
Sensor and source reports are first aligned to a common spatial reference (e.g., a geographic coordinate system) and temporal reference (e.g., samples are propagated forward or backward to a common time). These alignment transformations place the observations in a common time-space coordinate system to allow an association process to determine which observations from different sensors have their source in a common object. The association process uses a quantitative correlation metric to measure the relative similarity between observations. The typical correlation metric, C, takes on the following form:
$$C = \sum_{i=1}^{n} w_i x_i$$

where $w_i$ is the weighting coefficient for attribute $x_i$, and $x_i$ is the ith correlation attribute metric.
The correlation metric may be used to make a hard decision (an association), choosing the most likely pairings of observations, or a deferred decision, assigning more than one hypothetical pairing and deferring a hard decision until more observations arrive. Once observations have been associated, two functions are performed on each associated set of measurements for a common object (a minimal association sketch follows this list):
- Tracking. For dynamic targets (vehicles or aircraft), the current state of the object is correlated with previously known targets to determine if the observation can update an existing track model. If the newly associated observations are determined to be updates to an existing track, the state estimation model for the track (e.g., a Kalman filter) is updated; otherwise, a new track is initiated.
- Identification. All associated observations are used to determine if the object identity can be classified to any one of several levels (e.g., friend/foe, vehicle class, vehicle type or model, or vehicle status or intent).
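A minimal sketch of the weighted correlation metric and the hard/deferred association decision follows; the attribute weights, similarity scores, and decision threshold are invented for illustration.

```python
# Sketch of the weighted correlation metric C = sum(w_i * x_i) and a simple
# association rule: accept the best pairing if its score clears a hard-decision
# threshold, otherwise defer. Weights, attributes, and thresholds are invented.
def correlation(weights, attributes):
    return sum(w * x for w, x in zip(weights, attributes))

WEIGHTS = [0.5, 0.3, 0.2]          # spatial, kinematic, identity attributes
HARD_DECISION = 0.75

# Candidate pairings of a new observation against existing tracks:
candidates = {
    "track-07": [0.9, 0.8, 0.6],   # attribute similarity scores in [0, 1]
    "track-12": [0.4, 0.5, 0.9],
}

scores = {track: correlation(WEIGHTS, attrs) for track, attrs in candidates.items()}
best_track, best_score = max(scores.items(), key=lambda kv: kv[1])

if best_score >= HARD_DECISION:
    print(f"associate observation with {best_track} (C = {best_score:.2f})")
else:
    print("defer: keep multiple hypothetical pairings until more data arrives")
```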
8.2.1.3 Level 2: Situation Refinement
All objects placed in space-time context in an information base are analyzed to detect relationships based on spatial or temporal characteristics. Aggregate sets of objects are detected by their coordinated behavior, dependencies, proximity, common point of origin, or other characteristics using correlation metrics with high-level attributes (e.g., spatial geometries or coordinated behavior). The synoptic understanding of all objects, in their space-time context, provides situation knowledge, or awareness.
8.2.1.4 Level 3: Impact (or Threat) Refinement
Situation knowledge is used to model and analyze feasible future behaviors of objects, groups, and environmental constraints to determine possible future outcomes. These outcomes, when compared with user objectives, provide an assessment of the implications of the current situation. Consider, for example, a simple counter-terrorism intelligence situation that is analyzed in the sequence in Figure 8.4.
8.2.1.5 Level 4: Process Refinement
This process provides feedback control of the collection and processing activities to achieve the intelligence requirements. At the top level, current knowledge (about the situation) is compared to the intelligence requirements required to achieve operational objectives to determine knowledge shortfalls. These shortfalls are parsed, downward, into information, then data needs, which direct the future acquisition of data (sensor management) and the control of internal processes. Processes may be refined, for example, to focus on certain areas of interest, object types, or groups. This forms the feedback loop of the data-fusion process.
8.2.2 Data Mining
Data mining is the process by which large sets of data (or text in the specific case of text mining) are cleansed and transformed into organized and indexed sets (information), which are then analyzed to discover hidden and implicit, but previously undefined, patterns. These patterns are reviewed by domain experts to determine if they reveal new understandings of the general structure and relationships (knowledge) in the data of a domain under observation.
The object of discovery is a pattern, which is defined as a statement in some language, L, that describes relationships in subset Fs of a set of data, F, such that:
- The statement holds with some certainty, c;
- The statement is simpler (in some sense) than the enumeration of all facts in Fs [13].
This is the inductive generalization process described in Chapter 5. Mined knowledge, then, is formally defined as a pattern that is interesting, according to some user-defined criterion, and certain to a user-defined measure of degree.
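As a small illustration of a pattern that holds "with some certainty, c," the following sketch computes the support and confidence of one candidate association rule over a toy transaction set; the transactions and the rule are invented, and confidence is only one common choice of certainty measure.

```python
# Sketch of "a pattern that holds with some certainty c": compute support and
# confidence of a candidate rule over a toy transaction set. The transactions
# and the rule are illustrative, not data from the text.
transactions = [
    {"intercommunication", "travel:cityX", "posting"},
    {"intercommunication", "travel:cityX"},
    {"travel:cityX", "posting"},
    {"intercommunication", "posting"},
]

antecedent, consequent = {"intercommunication"}, {"posting"}

both = sum(1 for t in transactions if antecedent <= t and consequent <= t)
ante = sum(1 for t in transactions if antecedent <= t)

support = both / len(transactions)     # how often the whole pattern appears
confidence = both / ante               # certainty c that antecedent -> consequent
print(f"support = {support:.2f}, confidence = {confidence:.2f}")
```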
In application, the mining process is extended from explanations of limited data sets to more general applications (induction). In this example, a relationship pattern between three terrorist cells may be discovered that includes intercommunication, periodic travel to common cities, and correlated statements posted on the Internet.
Data mining (also called knowledge discovery) is distinguished from data fusion by two key characteristics:
- Inference method. Data fusion employs known patterns and deductive reasoning, while data mining searches for hidden patterns using inductive reasoning.
- Temporal perspective. The focus of data fusion is retrospective (determining current state based on past data), while data mining is both retrospective and prospective—focused on locating hidden patterns that may reveal predictive knowledge.
Beginning with sensors and sources, the data warehouse is populated with data, and successive functions move the data toward learned knowledge at the top. The sources, queries, and mining processes may be refined, similar to data fusion. The functional stages in the figure are described next.
- Data warehouse. Data from many sources are collected and indexed in the warehouse, initially in the native format of the source. One of the chief issues facing many mining operations is the reconciliation of diverse databases that have different formats (e.g., field and record sizes and parameter scales), incompatible data definitions, and other differences. The warehouse collection process (flow in) may mediate between these input sources to transform the data before storing it in common form [20].
- Data cleansing. The warehoused data must be inspected and cleansed to identify and correct or remove conflicts, incomplete sets, and incompatibilities common to combined databases. Cleansing may include several categories of checks (a minimal check sketch follows this list):
- Uniformity checks verify the ranges of data, determine if sets exceed limits, and verify that format versions are compatible.
- Completeness checks evaluate the internal consistency of data sets to ensure, for example, that aggregate values are consistent with individual data components (e.g., “verify that total sales is equal to sum of all sales regions, and that data for all sales regions is present”).
- Conformity checks exhaustively verify that each index and reference exists.
- Genealogy checks generate and check audit trails to primitive data to permit analysts to drill down from high-level information.
- Data selection and transformation. The types of data that will be used for mining are selected on the basis of relevance. For large operations, initial mining may be performed on a small set, then extended to larger sets to check for the validity of abducted patterns. The selected data may then be transformed to organize all data into common dimensions and to add derived dimensions as necessary for analysis.
- Data mining operations. Mining operations may be performed in a supervised manner, in which the analyst presents the operator with a selected set of training data for which the analyst has manually determined the existence of pattern classes. Alternatively, the operation may proceed without supervision, performing an automated search for patterns. A number of techniques are available (Table 8.4), depending upon the type of data and search objectives (interesting pattern types).
- Discovery modeling. Prediction or classification models are synthesized to fit the data patterns detected. This is the predictive aspect of mining: modeling the historical data in the database (the past) to provide a model to predict the future. The model attempts to abduct a generalized description that explains discovered patterns of interest and, using statistical inference from larger volumes of data, seeks to induct generally applicable models. Simple extrapolation, time-series trends, complex linked relationships, and causal mathematical models are examples of models created.
- Visualization. The analyst uses visualization tools that allow discovery of interesting patterns in the data. The automated mining operations cue the operator to discovered patterns of interest (candidates), and the analyst then visualizes the pattern and verifies whether, indeed, it contains new and useful knowledge. Online analytical processing (OLAP) refers to the manual visualization process in which a data manipulation engine allows the analyst to create data “views” from the human perspective and to perform the following categories of functions:
- Multidimensional analysis of the data across dimensions, through relationships (e.g., command hierarchies and transaction networks) and in perspectives natural to the analyst (rather than inherent in the data);
- Transformation of the viewing dimensions or slicing of the multidimensional array to view a subset of interest;
- Drill down into the data from high levels of aggregation, downward into successively deeper levels of information;
- Reach through from information levels to the underlying raw data, including reaching beyond the information base, back to raw data by the audit trail generated in genealogy checking;
- Modeling of hypothetical explanations of the data, in terms of trend analysis and extrapolation.
- Refinement feedback. The analyst may refine the process by adjusting the parameters that control the lower level processes, as well as by requesting more or different data on which to focus the mining operations.
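The cleansing checks referenced earlier in this list might be sketched minimally as follows, showing a uniformity (range) check and a completeness (aggregate consistency) check; the record layout, limits, and totals are invented for illustration.

```python
# Sketch of two of the cleansing checks described earlier in this list:
# a uniformity (range/limit) check and a completeness (aggregate consistency)
# check. The record layout and limits are assumptions for illustration.
records = [
    {"region": "north", "sales": 120.0},
    {"region": "south", "sales": 85.0},
    {"region": "east", "sales": -5.0},      # out of range -> flag
]
reported_total = 210.0

# Uniformity check: values must fall inside expected limits.
out_of_range = [r for r in records if not (0 <= r["sales"] <= 1_000_000)]

# Completeness check: the aggregate must equal the sum of its components.
computed_total = sum(r["sales"] for r in records)
consistent = abs(computed_total - reported_total) < 1e-6

print("out of range:", out_of_range)
print("aggregate consistent:", consistent, f"(computed {computed_total})")
```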
8.2.3 Integrated Data Fusion and Mining
In a practical intelligence application, the full reasoning process integrates the discovery processes of data mining with the detection processes of data fusion. This integration helps the analyst to coordinate learning about new signatures and patterns and to apply that new knowledge, in the form of templates, to detect other cases of the situation. A general application of these integrated tools can support the search for nonliteral target signatures and the use of those learned and validated signatures to detect new targets [21]. (Nonliteral target signatures refer to those signatures that extend across many diverse observation domains and are not intuitive or apparent to analysts, but may be discovered only by deeper analysis of multidimensional data.)
The mining component searches the accumulated database of sensor data, with discovery processes focused on relationships that may have relevance to the nonliteral target sets. Discovered models (templates) of target objects or processes are then tested, refined, and verified using the data-fusion process. Finally, the data-fusion process applies the models deductively for knowledge detection in incoming sensor data streams.
8.3 Intelligence Modeling and Simulation
Modeling activities take place in externalization (as explicit models are formed to describe mental models), combination (as evidence is combined and compared with the model), and in internalization (as the analyst ponders the matches, mismatches, and incongruities between evidence and model).
While we have used the general term model to describe any abstract representation, we now distinguish between two implementations made by the modeling and simulation (M&S) community. Models refer to physical, mathematical, or otherwise logical representations of systems, entities, phenomena, or processes, while simulations refer to the methods used to implement models over time (i.e., a simulation is a time-dynamic model).
Models and simulations are inherently collaborative; their explicit representations (versus mental models) allow analytic teams to collectively assemble and explore the accumulating knowledge that they represent. They support the analysis-synthesis process in multiple ways:
- Evidence marshaling. As described in Chapter 5, models and simulations provide the framework within which inferences and evidence are assembled; they provide an audit trail of reasoning.
- Exploration. Models and simulations also provide a means for analysts to be immersed in the modeled situation, its structure, and dynamics. It is a tool for experimentation and exploration that provides deeper understanding to determine necessary confirming or falsifying evidence, to evaluate potential sensing measures, and to examine potential denial and deception effects.
- Dynamic process tracking. Simulations model the time-dynamic behavior of targets to forecast future behavior, compare with observations, and refine the behavior model over time. Dynamic models provide the potential for estimation, anticipation, forecasting, and even prediction (these words imply increasing accuracy and precision in their estimates of future behavior).
- Explanation. Finally, the models and simulations provide a tool for presenting alternative hypotheses, final judgments, and rationale.
Chance favors the prepared prototype: models and simulations can and should be media to create and capture surprise and serendipity.
Table 8.5 illustrates independent models and simulations in all three domains; however, these domains can be coupled to create a robust model to explore how an adversary thinks (cognitive domain), transacts (e.g., finances, command, and intelligence flows), and acts (physical domain).
A recent study of the advanced methods required to support counter-terrorism analysis recommended the creation of scenarios using top-down synthesis (manual creation by domain experts and large-scale simulation) to create synthetic evidence for comparison with real evidence discovered by bottom-up data mining.
8.3.1 M&S for I&W
The challenge of I&W demands predictive analysis, where “the analyst is looking at something entirely new, a discontinuous phenomenon, an outcome that he or she has never seen before. Furthermore, the analyst only sees this new pattern emerge in bits and pieces”.
The tools monitor world events to track the state and time-sequence of state transitions for comparison with indicators of stress. These analytic tools apply three methods to provide indicators to analysts:
- Structural indicator matching. Previously identified crisis patterns (statistical models) are matched to current conditions to seek indications in background conditions and long-term trends.
- Sequential tracking models. Simulations track the dynamics of events to compare temporal behavior with statistical conflict accelerators in current situations that indicate imminent crises.
- Complex behavior analysis. Simulations are used to support inductive exploration of the current situation, so the analyst can examine possible future scenarios to locate potential triggering events that may cause instability (though not in prior indicator models).
A general I&W system architecture (Figure 8.7), organized following the JDL data-fusion structure, accepts incoming news feed text reports of current situations and encodes the events into a common format (by human or automated coding). The event data is encoded into time-tagged actions (assault, kidnap, flee, assassinate), proclamations (threaten, appeal, comment), and other pertinent events from relevant actors (governments, NGOs, terror groups). The level 1 fusion process correlates and combines similar reports to produce a single set of current events organized in time series for structural analysis of background conditions and sequential analysis of behavioral trends by groups and interactions between groups. This statistical analysis is an automatic target-recognition process, comparing current state and trends with known clusters of unstable behaviors. The level 2 process correlates and aggregates individual events into larger patterns of behavior (situations). A dynamic simulation tracks the current situation (and is refined by the tracking loop shown) to enable the analyst to explore future excursions from the present condition. By analysis of the dynamics of the situation, the analyst can explore a wide range of feasible futures, including those that may reveal surprising behavior that is not intuitive, increasing the analyst’s awareness of unstable regions of behavior or the potential of subtle but potent triggering events.
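A minimal sketch of the event-coding and structural-indicator idea follows: reports are reduced to time-tagged (actor, action) events, and a simple threshold flags actors whose count of violent actions crosses an indicator level. The events, categories, and threshold are invented and stand in for the statistical models described above.

```python
# Sketch of the event-coding and structural-indicator step: encode incoming
# reports as time-tagged (actor, action) events and flag when the number of
# violent actions by an actor exceeds a simple indicator threshold.
from collections import Counter
from datetime import date

events = [
    (date(2002, 3, 1), "group-x", "threaten"),
    (date(2002, 3, 3), "group-x", "assault"),
    (date(2002, 3, 4), "group-x", "kidnap"),
    (date(2002, 3, 6), "group-y", "appeal"),
    (date(2002, 3, 7), "group-x", "assault"),
]

VIOLENT = {"assault", "kidnap", "assassinate"}
WARNING_THRESHOLD = 3          # violent events per actor per observation window

violent_counts = Counter(actor for _, actor, action in events if action in VIOLENT)
for actor, count in violent_counts.items():
    if count >= WARNING_THRESHOLD:
        print(f"indicator: {actor} crossed the violence threshold ({count} events)")
```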
8.3.2 Modeling Complex Situations and Human Behavior
The complex behavior noted in the prior example may result from random events, human free will, or the nonlinearity introduced by the interactions of many actors. The most advanced applications of M&S are those that seek to model environments (introduced in Section 4.4.2) that exhibit complex behaviors—emergent behaviors (surprises) that are not predictable from the individual contributing actors within the system. Complexity is the property of a system that prohibits the description of its overall behavior even when all of the components are described completely. Complex environments include social behaviors of significant interest to intelligence organizations: populations of nation states, terrorist organizations, military commands, and foreign leaders [32]. Perhaps the grand challenge of intelligence analysis is to understand an adversary’s cognitive behavior to provide both warning and insight into the effects of alternative preemptive actions that may avert threats.
Nonlinear mathematical solutions are intractable for most practical problems, and the research community has applied dynamic systems modeling and agent-based simulation (ABS) to represent systems that exhibit complex behavior [34]. ABS research is being applied to the simulation of a wide range of organizations to assess intent, decision making and planning (cognitive), command and finances (symbolic), and actions (physical). The applications of these simulations include national policies [35], military C2 [36], and terrorist organizations [37].
9
The Intelligence Enterprise Architecture
The processing, analysis, and production components of intelligence operations are implemented by enterprises—complex networks of people and their business processes, integrated information and communication systems and technology components organized around the intelligence mission. As we have emphasized throughout this text, an effective intelligence enterprise requires more than just these components; the people require a collaborative culture, integrated electronic networks require content and contextual compatibility, and the implementing components must constantly adapt to technology trends to remain competitive. The effective implementation of KM in such enterprises requires a comprehensive requirements analysis and enterprise design (synthesis) approach to translate high-level mission statements into detailed business processes, networked systems, and technology implementations.
9.1 Intelligence Enterprise Operations
In the early 1990s, the community implemented Intelink, a communitywide network to allow the exchange of intelligence between agencies that maintained internal compartmented networks [2]. The DCI vision for “a unified IC optimized to provide a decisive information advantage…” in the mid-1990s led the IC CIO to establish an IC Operational Network (ICON) office to perform enterprise architecture analysis and engineering to define the system and communication architectures in order to integrate the many agency networks within the IC [3]. This architecture is required to provide the ability to collaborate securely and synchronously from the users’ desktops across the IC and with customers (e.g., federal government intelligence consumers), partners (component agencies of the IC), and suppliers (intelligence data providers within and external to the IC).
The undertaking illustrates the challenge of implementing a mammoth intelligence enterprise that comprises four components:
- Policies. These are the strategic vision and derivative policies that explicitly define objectives and the approaches to achieve the vision.
- Operational processes. These are collaborative and operationally secure processes to enable people to share knowledge and assets securely and freely across large, diverse, and in some cases necessarily compartmented organizations. This requires processes for dynamic modification of security controls, public key infrastructure, standardized intelligence product markup, the availability of common services, and enterprisewide search, collaboration, and application sharing.
- System (network). This is an IC system for information sharing (ICSIS) that includes an agreed set of databases and applications hosted within shared virtual spaces within agencies and across the IC. The system architecture (Figure 9.1) defines three virtual collaboration spaces, one internal to each organization and a second that is accessible across the community (an intranet and extranet, respectively). The internal space provides collaboration at the Special Compartmented Intelligence (SCI) level within the organization; owners tightly control their data holdings (that are organizationally sensitive). The community space enables IC-wide collaboration at the SCI level; resource protection and control is provided by a central security policy. A separate collateral community space provides a space for data shared with DoD and other federal agencies.
- Technologies. The enterprise requires the integration of large installed bases of legacy components and systems with new technologies. The integration requires definition of standards (e.g., metadata, markup languages, protocols, and data schemas) and the plans for incremental technology transitions.
9.2 Describing the Enterprise Architecture
Two major approaches to architecture design that are immediately applicable to the intelligence enterprise have been applied by the U.S. DoD and IC for intelligence and related applications. Both approaches provide an organizing methodology to assure that all aspects of the enterprise are explicitly defined, analyzed, and described to assure compatibility, completeness, and traceability back to the mission objectives. The approaches provide guidance to develop a comprehensive abstract model to describe the enterprise; the model may be understood from different views, in which the model is observed from a particular perspective (i.e., the perspective of the user or developer) and described by the specific products that make up the viewpoint.
The first methodology is the Zachman Architecture Framework™, developed by John Zachman in the late 1980s while at IBM. Zachman pioneered a concept of multiple perspectives (views) and descriptions (viewpoints) to completely define the information architecture [6]. This framework is organized as a matrix of 30 perspective products, defined by the cross product of two dimensions:
- Rows of the matrix represent the viewpoints of architecture stakeholders: the owner, planner, designer, builder (e.g., prime contractor), and subcontractor. The rows progress from higher level (greater degree of abstraction) descriptions by the owner toward lower level (details of implementation) by the subcontractor.
- Columns represent the descriptive aspects of the system across the dimensions of data handled, functions performed, network, people involved, time sequence of operations, and motivation of each stakeholder.
Each cell in the framework matrix represents a descriptive product required to describe an aspect of the architecture.
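As a minimal illustration (not drawn from the text), the framework can be sketched as a 30-cell mapping; the row and column names follow the description above, while the registered products and the helper function are hypothetical:

```python
# Illustrative sketch of a Zachman-style framework matrix:
# 5 stakeholder rows x 6 descriptive columns = 30 cells, each holding
# one descriptive product. Product names below are hypothetical.
from itertools import product as cross_product

ROWS = ["owner", "planner", "designer", "builder", "subcontractor"]
COLUMNS = ["data", "function", "network", "people", "time", "motivation"]

framework = {(row, col): None for row, col in cross_product(ROWS, COLUMNS)}

def register_product(stakeholder: str, aspect: str, description: str) -> None:
    """Attach a descriptive product to one cell of the framework."""
    if (stakeholder, aspect) not in framework:
        raise KeyError(f"unknown cell: {stakeholder}/{aspect}")
    framework[(stakeholder, aspect)] = description

register_product("owner", "motivation", "mission needs statement")
register_product("designer", "function", "UML activity model")

print(len(framework))  # 30 descriptive products define the architecture
```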
This framework identifies a single descriptive product per view, but permits a wide range of specific descriptive approaches to implement the products in each cell of the framework:
- Mission needs statements, value propositions, balanced scorecard, and organizational model methods are suitable to structure and define the owner’s high-level view.
- Business process modeling, the object-oriented Unified Modeling Language (UML), or functional decomposition using Integrated Definition Models (IDEF) explicitly describe entities and attributes, data, functions, and relationships. These methods also support enterprise functional simulation at the owner and designer level to permit evaluation of expected enterprise performance.
- Detailed functional standards (e.g., IEEE and DoD standards specification guidelines) provide guidance to structure detailed builder- and subcontractor-level descriptions that define component designs.
The second descriptive methodology is the U.S. DoD Architecture Framework (formerly the C4ISR Architecture Framework), which defines three interrelated perspectives or architectural views, each with a number of defined products [7]. The three interrelated views (Figure 9.2) are as follows (a minimal data-structure sketch follows the list):
- Operational architecture is a description (often graphical) of the operational elements, intelligence business processes, assigned tasks, workflows, and information flows required to accomplish or support the intelligence function. It defines the type of information, the frequency of exchange, and what tasks are supported by these information exchanges.
- Systems architecture is a description, including graphics, of the systems and interconnections providing for or supporting intelligence functions. The system architecture defines the physical connection, location, and identification of the key nodes, circuits, networks, and users and specifies system and component performance parameters. It is constructed to satisfy operational architecture requirements per standards defined in the technical architecture. This architecture view shows how multiple systems within a subject area link and interoperate and may describe the internal construction or operations of particular systems within the architecture.
- Technical architecture is a minimal set of rules governing the arrangement, interaction, and interdependence of the parts or elements whose purpose is to ensure that a conformant system satisfies a specified set of requirements. The technical architecture identifies the services, interfaces, standards, and their relationships. It provides the technical guidelines for implementation of systems upon which engineering specifications are based, common building blocks are built, and product lines are developed.
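As a minimal data-structure sketch (not part of the framework documents themselves), the three views and their relationship might be recorded as follows; the example products listed are hypothetical placeholders:

```python
# Minimal sketch of the three interrelated architecture views described
# above; the example products are hypothetical placeholders.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ArchitectureView:
    name: str
    describes: str
    products: List[str] = field(default_factory=list)

operational = ArchitectureView(
    "Operational",
    "business processes, tasks, workflows, and information exchanges",
    ["activity model", "information-exchange matrix"],
)
systems = ArchitectureView(
    "Systems",
    "nodes, circuits, networks, users, and performance parameters",
    ["system interface description", "node connectivity diagram"],
)
technical = ArchitectureView(
    "Technical",
    "rules, services, interfaces, and standards",
    ["technical standards profile"],
)

# The systems view is constructed to satisfy operational requirements,
# per the standards defined in the technical view.
dependencies = {systems.name: [operational.name, technical.name]}
```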
Both approaches provide a framework to decompose the enterprise into a comprehensive set of perspectives that must be defined before building; following either approach introduces the necessary discipline to structure the enterprise architecture design process.
The emerging foundation for enterprise architecting using framework models is distinguished from the traditional systems engineering approach, which focuses on optimization, completeness, and a build-from-scratch originality [11]. Enterprise (or system) architecting recognizes that most enterprises will be constructed from a combination of existing and new integrating components:
- Policies, based on the enterprise strategic vision;
- People, including current cultures that must change to adopt new and changing value propositions and business processes;
- Systems, including legacy data structures and processes that must work with new structures and processes until retirement;
- IT, including legacy hardware and software that must be integrated with new technology and scheduled for planned retirement.
The adoption of architecture framework models and system architecting methodologies is developed in greater detail in a number of foundational papers and texts [12].
9.3 Architecture Design Case Study: A Small Competitive Intelligence Enterprise
The enterprise architecture design principles can be best illustrated by developing the architecture description for a fictional small-scale intelligence enterprise: a typical CI unit for a Fortune 500 business. This simple example describes the introduction of a new CI unit, deliberately avoiding the challenges of introducing significant culture change across an existing organization and integrating numerous legacy systems.
The CI unit provides legal and ethical development of descriptive and inferential intelligence products for top management to assess the state of competitors’ businesses and estimate their future actions within the current marketplace. The unit is not the traditional marketing function (which addresses the marketplace of customers) but focuses specifically on the competitive environment, especially competitors’ operations, their business options, and likely decision-making actions.
The enterprise architect recognizes the assignment as a corporate KM project that should be evaluated against O’Dell and Grayson’s four-question checklist for KM projects [14]:
- Select projects to advance your business performance. This project will enhance competitiveness and allow FaxTech to position and adapt its product and services (e.g., reduce cycle time and enhance product development to remain competitive).
- Select projects that have a high success probability. This project is small, does not confront integration with legacy systems, and has a high probability of technical success. The contribution of KM can be articulated (to deliver competitive intelligence for executive decision making), there is a champion on the board (the CIO), and the business case (to deliver decisive competitor knowledge) is strong. The small CI unit implementation does not require culture change in the larger FaxTech organization, and it may set an example of the benefits of collaborative knowledge creation to set the stage for a larger organization-wide transformation.
- Select projects appropriate for exploring emerging technologies. The project is an ideal opportunity to implement a small KM enterprise in FaxTech that can demonstrate intelligence product delivery to top management and can support critical decision making.
- Select projects with significant potential to build KM culture and discipline within the organization. The CI enterprise will develop reusable processes and tools that can be scaled up to support the larger organization; the lessons learned in implementation will be invaluable in planning for an organization-wide KM enterprise.
9.3.1 The Value Proposition
The CI value proposition must define the value that competitive intelligence delivers to the business, in measurable terms wherever possible.
The quantitative measures may be difficult to define; the financial return on CI investment, for example, requires careful consideration of how the derived intelligence couples with strategy and impacts revenue gains. Kilmetz and Bridge define a top-level CI return on investment (ROI) metric that considers the time frame of the payback period (t, usually updated quarterly and accumulated to measure the long-term return on strategic decisions) and applies the traditional ROI formula, which subtracts the cost of the CI investment (C_CI+I: the initial implementation cost plus accumulating quarterly operations costs, in net present value) from the revenue gain [17]:
ROI_CI = Σ_t [ (P × Q) − C_CI+I ]
The expected revenue gain is estimated by the increase in sales (units sold, Q, multiplied by price, P) that is attributable to CI-induced decisions. Of course, the difficulty in defining such quantities lies in assuring that the gains are uniquely attributable to decisions made possible only by CI [18].
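A small computational sketch of the formula follows, under one simple reading in which the implementation cost is charged in the first quarter and quarterly operations costs are already expressed in net present value; all figures are hypothetical:

```python
# Sketch of the quarterly-accumulated CI return described above:
# for each quarter t, revenue gain attributable to CI decisions (P x Q)
# minus the CI cost C_CI+I (implementation plus that quarter's operations).
# All figures are hypothetical.

def ci_return(quarters, implementation_cost):
    """quarters: list of dicts with price, units_sold, and ops_cost (NPV)."""
    total = 0.0
    for t, q in enumerate(quarters):
        revenue_gain = q["price"] * q["units_sold"]                    # P x Q
        cost = q["ops_cost"] + (implementation_cost if t == 0 else 0.0)
        total += revenue_gain - cost
    return total

quarters = [
    {"price": 120.0, "units_sold": 400, "ops_cost": 15_000.0},
    {"price": 120.0, "units_sold": 550, "ops_cost": 15_000.0},
]
print(ci_return(quarters, implementation_cost=60_000.0))  # net gain over two quarters
```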
In building the scorecard, the enterprise architect should seek the lessons learned by others, using sources such as the Society of Competitive Intelligence Professionals or the American Productivity and Quality Center.
9.3.2 The CI Business Process
The Society of Competitive Intelligence Professionals has defined a CI business cycle that corresponds to the intelligence cycle; the cycle differs by distinguishing primary and published source information, while eliminating the automated processing of technical intelligence sources. The five stages, or business processes, of this high-level business model (sketched as a simple process loop after the list) are as follows:
- Planning and direction. The cycle begins with the specific identification of management needs for competitive intelligence. Management defines the specific categories of competitors (companies, alliances) and threats (new products or services, mergers, market shifts, technology discontinuities) for focus and the specific issues to be addressed. The priorities of intelligence needed, routine reporting expectations, and schedules for team reporting enable the CI unit manager to plan specific tasks for analysts, establish collection and reporting schedules, and direct day-to-day operations.
- Published source collection. The collection of articles, reports, and financial data from open sources (Internet, news feeds, clipping services, commercial content providers) includes both manual searches by analysts and active, automated searches by software agents that explore (crawl) the networks and cue analysts to rank-ordered findings. This collection provides broad, background knowledge of CI targets; the results of these searches provide cues to support deeper, more focused primary source collection.
- Primary source collection. The primary sources of deep competitor information are humans with expert knowledge; the ethical collection process includes the identification, contact, and interview of these individuals. Such collections range from phone interviews, formal meetings, and consulting assignments to brief discussions with competitor sales representatives at trade shows. The results of all primary collections are recorded on standard-format reports (date, source, qualifications, response to task requirement, results, further sources suggested, references learned) for subsequent analysis.
- Analysis and production. Once indexed and organized, the corpus of data is analyzed to answer the questions posed by the initial tasks. Collected information is placed in a framework that includes organizational, financial, and product-service models that allow analysts to estimate the performance and operations of the competitor and predict likely strategies and planned activities. This process relies on a synoptic view of the organized information, experience, and judgment. SMEs may be called in from within FaxTech or from the outside (consultants) to support the analysis of data and synthesis of models.
- Reporting. Once approved by the CI unit manager, these quantitative models and more qualitative estimative judgments of competitor strategies are published for presentation in a secure portal or for formal presentation to management. As a result of this reporting, management provides further refining direction and the cycle repeats.
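The following is a minimal sketch of the five-stage cycle run as a repeating loop; the stage names follow the list above, while the handler functions and example data are hypothetical:

```python
# Minimal sketch of the five-stage CI cycle as a repeating loop; the
# pass-through handlers and example requirement are hypothetical.
STAGES = [
    "planning_and_direction",
    "published_source_collection",
    "primary_source_collection",
    "analysis_and_production",
    "reporting",
]

def run_cycle(direction, handlers, iterations=1):
    """Run the cycle; reporting yields refined direction for the next pass."""
    for _ in range(iterations):
        artifact = direction
        for stage in STAGES:
            artifact = handlers[stage](artifact)
        direction = artifact  # management feedback refines the next cycle
    return direction

handlers = {stage: (lambda x, s=stage: {"stage": s, "input": x}) for stage in STAGES}
print(run_cycle({"competitors": ["(hypothetical) ACME Corp"]}, handlers))
```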
9.3.4 The CI Unit Organizational Structure and Relationships
The CI unit manager accepts tasking from executive management, issues detailed tasks to the analytic team, and then reviews and approves results before release to management. The manager also manages the budget, secures consultants for collection or analysis support, manages special collections, and coordinates team training and special briefings by SMEs.
9.3.5 A Typical Operational Scenario
For each of the five processes, a number of use cases may be developed to describe specific actions that actors (CI team members or system components) perform to complete the process. In object-oriented design processes, the development of such use cases drives the design process by first describing the many ways in which actors interact to perform the business process [22]. A scenario or process thread provides a view of one completed sequence through one or more use cases to complete an enterprise task. A typical crisis-response scenario is summarized in Table 9.3 to illustrate the sequence of interactions between the actors (management, CI manager, deputy, knowledge-base manager and analysts, system, portal, and sources) to complete a quick-response thread. The scenario can be further modeled by an activity diagram [23] that models the behavior between objects.
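One way to record such a thread in software is as an ordered sequence of actor interactions; the sketch below uses the actors named above, but the individual steps are hypothetical and are not those of Table 9.3:

```python
# Illustrative sketch of a quick-response thread as an ordered sequence of
# actor interactions; the steps are hypothetical, not those of Table 9.3.
from typing import NamedTuple

class Step(NamedTuple):
    actor: str
    action: str
    receiver: str

quick_response_thread = [
    Step("management", "issues an urgent intelligence question", "CI manager"),
    Step("CI manager", "tasks a crisis analysis", "analysts"),
    Step("analysts", "query holdings and open sources", "system"),
    Step("system", "returns ranked findings", "analysts"),
    Step("analysts", "draft a quick-response summary", "CI manager"),
    Step("CI manager", "approves and publishes the summary", "portal"),
    Step("portal", "delivers the response", "management"),
]

for step in quick_response_thread:
    print(f"{step.actor} -> {step.receiver}: {step.action}")
```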
The development of the operational scenario also raises nonfunctional performance issues that are identified and defined, generally in parametric terms (and captured in the configuration sketch after the list), for example:
- Rate and volume of data ingested daily;
- Total storage capacity of the on-line and off-line archived holdings;
- Access time for on-line and off-line holdings;
- Number of concurrent analysts, searches, and portal users;
- Information assurance requirements (access, confidentiality, and attack rejection).
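These parameters can be captured in a single configuration record; the sketch below is illustrative only, and every value shown is a hypothetical placeholder rather than a stated requirement:

```python
# Sketch of the nonfunctional parameters above as a configuration record;
# all values are hypothetical placeholders.
from dataclasses import dataclass

@dataclass(frozen=True)
class PerformanceRequirements:
    daily_ingest_gb: float         # rate and volume of data ingested daily
    online_storage_tb: float       # on-line holdings capacity
    offline_archive_tb: float      # off-line archived holdings capacity
    online_access_seconds: float   # access time for on-line holdings
    offline_access_hours: float    # access time for off-line holdings
    concurrent_analysts: int
    concurrent_searches: int
    concurrent_portal_users: int
    assurance: str                 # access, confidentiality, attack rejection

ci_unit_requirements = PerformanceRequirements(
    daily_ingest_gb=5.0,
    online_storage_tb=2.0,
    offline_archive_tb=20.0,
    online_access_seconds=3.0,
    offline_access_hours=24.0,
    concurrent_analysts=6,
    concurrent_searches=20,
    concurrent_portal_users=50,
    assurance="role-based access, confidentiality, and attack rejection",
)
```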
9.3.6 CI System Abstraction
The purpose of use cases and narrative scenarios is to capture enterprise behavior and then to identify the classes of object-oriented design. The italicized text in the scenario identifies the actors, and the remaining nouns are candidates for objects (instantiated software classes). From these use cases, software designers can identify the objects of design, their attributes, and interactions. Based upon the use cases, object-oriented design proceeds to develop sequence diagrams that model messages passing between objects, state diagrams that model the dynamic behavior within each object, and object diagrams that model the static description of objects. The object encapsulates state attributes and provides services to manipulate the internal attributes.
Based on the scenario of the last section, the enterprise designer defines the class diagram (Figure 9.7) that relates objects that accept the input CI requirements through the entire CI process to a summary of finished intelligence. This diagram does not include all objects; the objects presented illustrate those that acquire data related to specific competitors, and these objects are only a subset of the classes required to meet the full enterprise requirements defined earlier. (The objects in this diagram are included in the analysis package described in the next section.) The requirement object accepts new CI requirements for a defined competitor; requirements are specified in terms of essential elements of information (EEI), financial data, SWOT characteristics, and organization structure. In this object, key intelligence topics may be selected from predefined templates to specify specific intelligence requirements for a competitor or for a marketplace event [24]. The analyst translates the requirements to tasks in the task object; the task object generates search and collect objects that specify the terms for automated search and human collection from primary sources, respectively. The results of these activities generate data objects that organize and present accumulated evidence related to the corresponding search and collect objects.
The analyst reviews the acquired data, creating text reports and completing analysis templates (SWOT, EEI, financial) in the analysis object. Analysis entries are linked to the appropriate competitor in the competitor list and to the supporting evidence in data objects. As results are accumulated in the templates, the status (e.g., percentage of required information in template completed) is computed and reported by the status object. Summary of current intelligence and status are rolled up in the summary object, which may be used to drive the CI portal.
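A skeletal code sketch of these relationships follows; the class names are taken from the description above, while the fields, methods, and the roll-up of status into the summary are assumptions made for illustration:

```python
# Skeletal sketch of the class relationships described above; fields and
# methods are assumed for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Data:
    """Accumulated evidence returned for a search or collection."""
    source: str
    content: str

@dataclass
class Search:
    """Terms for automated search of published sources."""
    terms: List[str]
    results: List[Data] = field(default_factory=list)

@dataclass
class Collect:
    """Tasking for human collection from primary sources."""
    instructions: str
    results: List[Data] = field(default_factory=list)

@dataclass
class Task:
    """Analyst translation of a requirement into search and collect objects."""
    description: str
    searches: List[Search] = field(default_factory=list)
    collections: List[Collect] = field(default_factory=list)

@dataclass
class Requirement:
    """A new CI requirement for a defined competitor (EEI, financial data,
    SWOT characteristics, organization structure)."""
    competitor: str
    topics: List[str]
    tasks: List[Task] = field(default_factory=list)

@dataclass
class Analysis:
    """Text reports and completed templates linked to supporting evidence."""
    competitor: str
    templates: Dict[str, str] = field(default_factory=dict)  # SWOT, EEI, financial
    evidence: List[Data] = field(default_factory=list)

    def status(self) -> float:
        """Fraction of required templates completed (the status roll-up)."""
        required = {"SWOT", "EEI", "financial"}
        return len(required & set(self.templates)) / len(required)

@dataclass
class Summary:
    """Roll-up of current intelligence and status that drives the CI portal."""
    competitor: str
    completion: float
    highlights: List[str] = field(default_factory=list)
```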
9.3.7 System and Technical Architecture Descriptions
The abstractions that describe functions and data form the basis for partitioning packages of software services and the system hardware configuration. The system architecture description includes a network hardware view (Figure 9.8, top) and a comparable view of the packaged software objects (Figure 9.8, bottom).
The enterprise technical architecture is described by the standards for commercial and custom software packages (e.g., the commercial and developed software components with versions, as illustrated in Table 9.4) to meet the requirements developed in the system model row of the matrix. Fuld & Company has published periodic reviews of software tools to support the CI process; these reviews provide a helpful evaluation of available commercial packages to support the CI enterprise [25]. The technical architecture is also described by the standards imposed on the implementing components, both software and hardware. These standards include general implementation standards [e.g., American National Standards Institute (ANSI), International Standards Organization (ISO), and Institute of Electrical and Electronics Engineers (IEEE)] and federal standards regulating workplace environments and protocols. The applicable standards are listed to identify their applicability to the various functions within the enterprise.
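As a purely structural sketch, the applicable standards can be listed against enterprise functions in a simple table; the function names and standard references below are hypothetical and do not reproduce Table 9.4:

```python
# Illustrative sketch: listing which classes of standards apply to which
# enterprise functions; the mappings are hypothetical, not those of Table 9.4.
applicable_standards = {
    "network and portal services": ["IEEE networking standards", "ISO protocol standards"],
    "document markup and metadata": ["ANSI/ISO markup and character standards"],
    "workplace environment": ["federal workplace regulations"],
}

def standards_for(function: str) -> list:
    """Return the standards recorded for a given enterprise function."""
    return applicable_standards.get(function, [])

print(standards_for("document markup and metadata"))
```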
A technology roadmap should also be developed to project future transitions as new components are scheduled to be integrated and old components are retired. It is particularly important to plan for integration of new software releases and products to assure sustained functionality and compatibility across the enterprise.
10 Knowledge Management Technologies
IT has enabled the growth of organizational KM in business and government; it will continue to be the predominant influence on the progress in creating knowledge and foreknowledge within intelligence organizations.
10.1 Role of IT in KM
When we refer to technology, the application of science by the use of engineering principles to solve a practical problem, it is essential to distinguish between three categories of technology that all contribute to our ability to create and disseminate knowledge (Table 10.1). We may view these as three technology layers, with the basic computing materials sciences providing the foundation for technology applications of increasing complexity and scale in communications and computing.
10.4.1 Explicit Knowledge Combination Technologies
Future explicit knowledge combination technologies include those that transform explicit knowledge into usable forms and those that perform combination processes to create new knowledge.
- Multimedia content-context tagged knowledge bases. Knowledge-base technology will support the storage of multimedia data (structured and unstructured) with tagging of both content and context to allow comprehensive searches for knowledge across heterogeneous sources.
- Multilingual natural language. Global natural language technologies will allow accurate indexing, tagging, search, linking, and reasoning about multilingual text (and recognized human speech) at both the content level and the concept level. This technology will allow analysts to conduct multilingual searches by topic and concept on a global scale.
- Integrated deductive-inductive reasoning. Data-fusion and data-mining technologies will become integrated to allow interactive deductive and inductive reasoning over structured and unstructured (text) data sources. Data-fusion technology will develop level 2 (situation) and level 3 (impact, or explanation) capabilities using simulations to represent complex and dynamic situations for comparison with observed situations.
- Purposeful deductive-inductive reasoning. Agent-based intelligence will coordinate inductive (learning and generalization) and deductive (decision and detection) reasoning processes (as well as abductive explanatory reasoning) across unstructured multilingual natural language, common sense, and structured knowledge bases. This reasoning will be goal-directed based upon agent awareness of purpose, values, goals, and beliefs.
- Automated ontology creation. Agent-based intelligence will learn the structure of content and context, automatically populating knowledge bases under configuration management by humans.
10.4.3 Knowledge-Based Organization Technologies
Technologies that support the socialization processes of tacit knowledge exchange will enhance the performance and effectiveness of organizations; these technologies will increasingly integrate intelligence agents into the organization as aids, mentors, and ultimately as collaborating peers.
- Tailored naturalistic collaboration. Collaboration technologies will provide environments with automated capabilities that will track the context of activities (speech, text, graphics) and manage the activity toward defined goals. These environments will also recognize and adapt to individual personality styles, tailoring the collaborative process (and the mix of agents and humans) to the diversity of the human-team composition.
- Intimate tacit simulations. Simulation and game technologies will enable human analysts to be immersed in the virtual physical, symbolic, and cognitive environments they are tasked to understand. These technologies will allow users to explore data, information, and complex situations in all three domains of reality to gain tacit experience and to be able to share the experience with others.
- Human-like agent partners. Multiagent system technologies will enable the formation of agent communities of practice and teams, and the creation of human-agent organizations. Such hybrid organizations will enable new analytic cultures and problem-solving communities.
- Combined human-agent learning. Personal agent tutors, mentors, and models will shadow their human partners, share experiences and observations, and show what they are learning. These agents will learn to monitor subtle human cues that support the capture and use of tacit knowledge in collaborative analytic processes.
- Direct brain tacit knowledge. Direct brain biological-to-machine connections will allow monitors to provide awareness, tracking, articulation, and capture of tacit experiences to augment human cognitive performance.
10.5 Summary
KM technologies are built upon the materials and information technologies that enable the complex social (organizational) and cognitive processes of collaborative knowledge creation and dissemination to occur across large organizations and massive scales of knowledge. Technologists, analysts, and developers of intelligence enterprises must monitor these fast-paced technology developments to continually reinvent the enterprise and remain competitive in the global competition for knowledge. This continual reinvention requires a wise application of technology in three modes. The first mode is the direct adoption of technologies through upgrade and integration of COTS and GOTS products; this process requires continual monitoring of industry standards, technologies, and the marketplace to project product lifecycles and forecast adoption transitions. The second mode is adaptation, in which a commercial product component is adapted for use by wrapping, modifying, and integrating it with commercial or custom components to achieve a desired capability. The final mode is custom development of a technology unique to the intelligence application. Often, such technologies may be classified to protect the unique investment in, the capability of, and in some cases even the existence of the technology.
Technology is enabling, but it is not sufficient; intelligence organizations must also have the vision to apply these technologies while transforming the intelligence business in a rapidly changing world.