The Power of Web-Based Updates for Small Businesses

Photo via Pexels

In today’s digital landscape, small businesses are constantly seeking ways to stay competitive and relevant. One of the most powerful tools at their disposal is the integration of web-based updates into their operations. In this article, we’ll explore how these updates can transform the way small businesses operate and propel them towards success.

Facilitate Seamless Transactions

Integrating secure online payment processing systems into your website can streamline transactions and enhance customer convenience. By offering multiple payment options and ensuring robust security measures, you instill trust in your customers, leading to increased sales and loyalty. With seamless payment processing, you remove barriers to purchase, ultimately boosting your bottom line. Additionally, real-time transaction tracking and reporting capabilities provided by many online payment platforms allow for better financial management and decision-making.
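
To make this concrete, here is a minimal sketch of a server-side checkout handler in TypeScript, assuming a Stripe-style hosted-checkout API via the stripe npm package; the product, amount, and URLs are placeholders, and a real integration should follow your payment provider’s documentation:

```typescript
// Minimal sketch: create a hosted checkout session with a Stripe-style API.
// Assumes the `stripe` npm package and a secret key in the environment.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY ?? "");

async function createCheckoutSession(): Promise<string> {
  const session = await stripe.checkout.sessions.create({
    mode: "payment",
    line_items: [
      {
        price_data: {
          currency: "usd",
          product_data: { name: "Sample product" }, // placeholder product
          unit_amount: 1999, // $19.99, expressed in cents
        },
        quantity: 1,
      },
    ],
    success_url: "https://example.com/success", // placeholder URLs
    cancel_url: "https://example.com/cancel",
  });
  // Redirect the customer to this URL to complete payment securely.
  return session.url ?? "";
}
```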

Empower Your Business with Cybersecurity Education

Investing in cybersecurity education is paramount in today’s digital age. Small businesses are prime targets for cyberattacks, making it essential to protect sensitive data and secure network systems. Pursuing an online degree or certification in cybersecurity equips you with the knowledge and skills needed to safeguard your business against potential threats. With flexible online learning options, you can enhance your cybersecurity posture without disrupting your day-to-day operations. Moreover, staying informed about the latest cybersecurity trends and best practices empowers you to proactively identify and mitigate potential vulnerabilities, reducing the risk of costly data breaches. When you’re ready to look for online programs, check this out.

Expand Reach through Strategic Online Advertising

Online advertising offers small businesses unparalleled opportunities to expand their reach and target specific demographics. By investing in targeted online ad space, you can reach potential customers at the right place and time, maximizing the effectiveness of your marketing efforts. Whether through social media ads, search engine marketing, or display advertising, strategic online advertising can drive traffic to your website and generate leads, ultimately fueling business growth. Additionally, the ability to track and analyze ad performance metrics in real time allows for continuous optimization and refinement of your advertising strategy, ensuring maximum return on investment.

Optimize for Mobile Accessibility

With the majority of internet users accessing content from mobile devices, optimizing your website for mobile accessibility is no longer optional — it’s imperative. A mobile-responsive website ensures that users have a seamless browsing experience across all devices, enhancing user satisfaction and engagement. By catering to the needs of mobile users, you can tap into a vast market segment and stay ahead of competitors who neglect mobile optimization. Additionally, mobile-friendly websites are preferred by search engines, resulting in higher rankings in mobile search results and increased organic traffic.
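
Responsive behavior is usually handled in CSS media queries, but as a minimal TypeScript illustration, the standard matchMedia browser API can toggle mobile-specific behavior at a breakpoint; the 600px threshold and the mobile-layout class name are placeholder choices:

```typescript
// Minimal sketch: react to viewport changes with the standard matchMedia API.
const mobileQuery = window.matchMedia("(max-width: 600px)");

function applyLayout(isMobile: boolean): void {
  // Toggle a CSS class that the stylesheet can target with mobile rules.
  document.body.classList.toggle("mobile-layout", isMobile);
}

applyLayout(mobileQuery.matches); // set the initial layout on page load
// Re-apply whenever the viewport crosses the breakpoint.
mobileQuery.addEventListener("change", (e) => applyLayout(e.matches));
```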

Promote Inclusivity with Accessible Web Design

Creating an inclusive online environment is not only the right thing to do — it’s also good for business. By prioritizing accessibility features in your web design, you ensure that users of all abilities can access and interact with your website. This inclusivity not only expands your customer base but also enhances your brand reputation as a socially responsible business. By making accessibility a priority, you demonstrate your commitment to serving all customers, regardless of their limitations. Moreover, accessible web design improves usability for all users, leading to higher engagement levels and increased conversion rates.
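
As a small illustration of what accessibility work looks like in code, the sketch below uses standard DOM APIs to label an icon-only control for screen readers; the element id and label text are hypothetical examples, not a complete accessibility checklist:

```typescript
// Minimal sketch: make an icon-only button usable with assistive technology.
const button = document.getElementById("search-btn"); // hypothetical element id

if (button) {
  // Screen readers announce this label where sighted users see only an icon.
  button.setAttribute("aria-label", "Search the site");
  // Announce the control's role explicitly if it is not a native <button>.
  button.setAttribute("role", "button");
}
```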

Drive Engagement with Fresh Website Content

Regularly updating your website with fresh, relevant content is key to driving engagement and attracting new visitors. Whether through blog posts, articles, or multimedia content, providing value-added resources keeps users coming back for more. Moreover, fresh content signals to search engines that your website is active and relevant, improving your visibility in search results. By consistently delivering valuable content, you establish your authority in your industry and foster trust with your audience. Additionally, interactive elements such as quizzes, polls, and user-generated content can further enhance engagement and encourage social sharing, amplifying your online presence and brand awareness.

Incorporating web-based updates is essential for small businesses striving to thrive in today’s digital landscape. From enhancing customer experiences to expanding market reach, these updates offer a myriad of opportunities for growth and success. By embracing these strategies, small businesses can stay ahead of the curve and achieve their goals effectively.

Ariel Sheen provides a variety of services, including copywriting and tutoring. Take a look at the blog today to learn more and subscribe.

Mastering Workflow Optimization: A Guide to Efficiency and Effectiveness

Image: Freepik

In today’s competitive business environment, operational excellence is not just a goal but a necessity. The cornerstone of such excellence lies in how effectively and efficiently a company manages its workflows. A well-optimized workflow can be the difference between a business that merely survives and one that truly thrives. In this article, we will explore innovative methods to optimize workflows for enhanced efficiency and effectiveness.

Analyze Existing Procedures and Workflows

Begin by conducting a thorough analysis of your current operational activities. This should include identifying bottlenecks that may be slowing down processes and recognizing areas where redundancies may exist. Think of this phase as an audit, one that will serve as a baseline for improvement. Tools such as business process management software can assist in this review, mapping out existing workflows and highlighting areas for improvement.

Engage Team Members for Insights

While managerial perspectives are important, sometimes the most invaluable insights come from those on the front lines. Establish an environment of open communication, urging team members to share their experiences and viewpoints. Employee feedback can be a treasure trove of information that reveals hidden challenges and opportunities. For instance, Slack or Microsoft Teams can be platforms for gathering such feedback effectively, making communication more seamless.

Streamline the New Employee Onboarding Process

Bringing new talent on board should not be a time-consuming or confusing process. Clear documentation, guided training modules, and mentorship programs can make this transition smoother. This not only reduces the time it takes for new hires to become productive but also enhances the overall employee experience. Technologies like learning management systems (LMS) can be employed to facilitate this process.

Implement Digital Invoices

Establishing a solid bookkeeping and digital invoicing system is crucial for the financial health and organization of any business. It ensures accurate tracking of income and expenses, aids in financial planning, and streamlines the billing process. Utilizing a free online invoice maker can significantly enhance this aspect of your business. These tools allow you to create custom invoices by choosing from dozens of templates, fonts, and design elements, making it easy to reflect your brand’s identity while maintaining professionalism. The convenience of an invoice maker not only saves time but also helps in managing cash flow more effectively, providing a clear and efficient way to bill clients and track payments.
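
Under the hood, an invoice is just structured data whose totals can be computed mechanically. The minimal TypeScript sketch below illustrates the idea; every name in it is illustrative and not tied to any particular invoicing tool:

```typescript
// Minimal sketch: an invoice as plain data, with the total derived from line items.
interface LineItem {
  description: string;
  quantity: number;
  unitPrice: number; // price per unit, in dollars
}

interface Invoice {
  invoiceNumber: string;
  client: string;
  issued: Date;
  items: LineItem[];
}

// Sum quantity * unit price across all line items.
function invoiceTotal(invoice: Invoice): number {
  return invoice.items.reduce((sum, item) => sum + item.quantity * item.unitPrice, 0);
}

const invoice: Invoice = {
  invoiceNumber: "INV-0001",
  client: "Acme Co.",
  issued: new Date(),
  items: [{ description: "Consulting (3 hrs)", quantity: 3, unitPrice: 80 }],
};

console.log(`Total due: $${invoiceTotal(invoice).toFixed(2)}`); // Total due: $240.00
```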

Minimize Multitasking to Enhance Focus

The myth of multitasking as an effective way to get more done has been debunked by numerous studies. Rather, encourage an environment where employees can focus on single tasks, thereby enhancing quality and productivity. Time management software like Pomodoro timers can help in setting specific periods for focused work.
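
As a toy model of the Pomodoro technique, the sketch below alternates the conventional 25-minute focus blocks with 5-minute breaks using plain timers; a real tool would add notifications and pause controls:

```typescript
// Minimal sketch: alternate focused work periods with short breaks.
const WORK_MINUTES = 25;
const BREAK_MINUTES = 5;
const MS_PER_MINUTE = 60 * 1000;

function startPomodoro(cycles: number): void {
  if (cycles === 0) return;
  console.log(`Focus for ${WORK_MINUTES} minutes...`);
  setTimeout(() => {
    console.log(`Break for ${BREAK_MINUTES} minutes.`);
    // Recurse into the next work/break cycle after the break ends.
    setTimeout(() => startPomodoro(cycles - 1), BREAK_MINUTES * MS_PER_MINUTE);
  }, WORK_MINUTES * MS_PER_MINUTE);
}

startPomodoro(4); // a classic four-cycle Pomodoro session
```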

Make Marketing Easy with Content Marketing

As an entrepreneur, harnessing the power of content marketing practices like SEO and keywording is pivotal in amplifying your business’s online reach. These strategies enhance your visibility on search engines, drawing more potential customers to your website by optimizing your content with relevant keywords. This process involves understanding what your target audience is searching for and integrating those terms into your web pages, blogs, and social media posts in a natural, engaging manner.

The beauty of this approach is the abundance of free online resources available, offering tips and best practices to guide you through optimizing your content. By leveraging these resources, you can effectively increase your digital footprint, attract a larger audience, and drive more business, all without the need for a significant financial investment.
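
As a first-pass illustration of keywording, the short TypeScript sketch below counts how often target terms actually appear in a page’s copy; it is a toy check, not a substitute for a real SEO tool:

```typescript
// Minimal sketch: count occurrences of target keywords in page text.
function keywordCounts(text: string, keywords: string[]): Map<string, number> {
  const words = text.toLowerCase().match(/[a-z0-9']+/g) ?? [];
  const counts = new Map<string, number>();
  for (const kw of keywords) counts.set(kw.toLowerCase(), 0);
  for (const word of words) {
    if (counts.has(word)) counts.set(word, (counts.get(word) ?? 0) + 1);
  }
  return counts;
}

const page =
  "Our invoice maker builds custom invoices fast. Try the invoice templates.";
console.log(keywordCounts(page, ["invoice", "templates"]));
// Map(2) { 'invoice' => 2, 'templates' => 1 }
```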

Reevaluate the Necessity of Meetings

Meetings should be purposeful and action-oriented. Overly frequent or extended meetings can often detract from productive time. Digital Kanban boards can be an alternative for status updates, making it easier to track project milestones without the need for time-consuming meetings.

Foster a Collaborative Environment

A culture that promotes collaboration often results in optimized workflows. Tools like collaborative software, or even something as simple as a communal whiteboard, can facilitate brainstorming and project planning. By pooling collective skills and insights, teams can troubleshoot problems more efficiently, formulate better strategies, and execute tasks more effectively.

Achieving optimized workflows is not a one-time activity but an ongoing pursuit. Embracing continuous improvement, using automation, engaging team members, and integrating technology like customer data platforms are all essential steps toward a more efficient and effective business landscape.

Socialist Affiliated Transnational Advocacy Networks: The Thread Linking New Curriculum Trends to Red Foreign Governments

February 22, 2021 speech by Nicolás Maduro

Key Takeaways

  1. Secessionist and socialist parties in the United States view control over school curriculum as a strategy to help them achieve political revolution.
  2. Revolutionary political parties advocating black nationalism, indigenous revanchism, and socialism that are affiliated with Venezuela and Cuba have formed a United Front to subvert American institutions.
  3. Elected officials and citizen groups in the U.S. must exercise greater vigilance and diligence over state employment.

***

The curriculum and teaching styles recently promoted in many American K-12 classrooms and colleges are remarkably similar to those implemented in Venezuela after socialist dictator Hugo Chávez came to power.

Controversies over the adoption of critical race theory (CRT), liberated ethnic studies (LES), and social justice education (SJE) have overlooked the affiliations of advocates for and practitioners of CRT, LES, and SJE with the United Socialist Party of Venezuela and the Cuban Communist Party.

In what follows, we show how Cuba and Venezuela’s goal of colonizing Latin America has led them not only to ally militarily and economically with Russia, China, and Iran, but also politically with Socialist Affiliated Transnational Advocacy Networks with members employed in the American education field.

The “Matrix of Afro-Indigenous-Socialism” and the Subversion of High School Curriculum

Former Venezuelan ambassador to New Orleans and professor Jesus “Chucho” Garcia explaining, in an interview at North Carolina Central University on August 27, 2015, how the São Paulo Forum functions as a place to plan and implement subversive activities by exploiting identity politics.

Evidence of Venezuelan influence in America’s education system can be seen in the recent controversy surrounding California’s ethnic studies program.

In the minutes of an August 24, 2021 Orange County Board of Education special meeting, Elina Kaplan of the Alliance for Constructive Ethnic Studies describes how the curriculum proposed by the Liberated Ethnic Studies Model Curriculum Consortium (LESMCC) and advocated for by the Association of Raza Educators (ARE) includes teaching about “154 role models of color of whom the preponderance are neo-Marxist and/or violent figures.” One such role model proposed for the student curriculum is Oscar Lopez Rivera, “the leader of the Marxist-Leninist organization that carried out over 130 bombings in the U.S.”

What Mrs. Kaplan, the Californians for Equal Rights Foundation (plaintiffs in a recent lawsuit against the passage of this curriculum in California), and other critics and commentators on the Liberated Ethnic Studies Model Curriculum and American schools have not addressed is that one of the principal advocates for the passage of such curriculum is affiliated with the Venezuelan government.

ARE is repeatedly described by Union del Barrio (UdB) – a self-avowed revolutionary organization whose political program’s first plank is “…to advance the liberation and reunification of Mexico under a revolutionary government, immediately accountable to the people.” – as the teachers’ wing of its organization. Extensive overlap in membership and activities confirms this claim.

The Liberated Ethnic Studies Model Curriculum Consortium, the group which proposed the LESMC to California, was founded by Lupe Carrasco Cardona, a UdB member and Praxis chair of ARE’s Los Angeles chapter; its advisor, Theresa Montano, is a board member of ARE.

CastroChavistas in the Classroom

Left to right: a Facebook post in which Union del Barrio says it “help[s] the Bolivarian Revolution and the combative people of Venezuela”; news coverage of Union del Barrio’s anti-policing activities; Benjamin Prado being interviewed by TeleSUR at a political conference in Caracas, Venezuela.

UdB describes Ernesto Bustillos, their former General Secretary, traveling to Havana and Caracas to discuss strategy and collaboration, and describes his work as a middle school teacher as follows: “Between teaching periods, compa Neto could often be found in his classroom talking politics… [he was] constantly pivoting back and forth between revolutionary teacher and teacher of revolution.”

A presentation at the 2003 National Association for Chicana and Chicano Studies conference raises the possibility that an Aztlán state – consisting of Texas, Arizona, New Mexico, Nevada and California – could be achieved through a functional political alliance with Venezuela and Cuba. In a 2008 speech upon receiving an award from ARE, Bustillos stated: “You can’t call yourself an educator if you are not part of some kind of revolutionary struggle!”

In July of 2010, Union del Barrio – along with American Federation of Teachers Local 3267, American Federation of Teachers Local 1936, and the California Faculty Association – sent representatives to Caracas, Venezuela to attend the third conference of Encuentro Sindical Nuestra América (ESNA), a transnational union network founded by Hugo Chavez to coordinate political activities. Since at least 2012, Union del Barrio has been a member of the Coordinating Group of ESNA.

On August 27, 2016, the Secretario General of Union del Barrio, Rommel Díaz, declared in a speech commemorating UdB’s “35th year of continuous struggle for raza liberation” that:

“The emancipatory project of the Bolivarian Alliance for the Peoples of Our America – Trade Treaty of the Peoples (ALBA-TCP), created in 2005, by Presidents Fidel Castro of Cuba and Hugo Chavez of Venezuela… represents one aspect of the future liberation of our peoples.”

On January 24, 2017, Union del Barrio signed an “eternal commitment to the legacy of the undefeated Commanders Fidel Castro and Hugo Chávez,” and in June of 2022 UdB declared its commitment to – amongst other action items – advocating for leftists convicted of money laundering, terrorism, narcotics trafficking, and murder to be released from prison.

CastroChavismo and American College Professors

UdB is not the only group with a revolutionary nationalist orientation that is affiliated with the Cuban Communist Party and the United Socialist Party of Venezuela while working within the professional ranks of the American education system.

Several months before Hugo Chávez launched ESNA, the Bolivarian government hosted the International Meeting of Left Parties in Caracas on November 19th-21st, 2009. This event brought together socialists from all over the world to develop strategic plans to be implemented in the countries in which they live.

Dr. Ann Robertson, a former professor of philosophy at San Francisco State University, a member of Workers Action – a revolutionary socialist organization – and a current member of the Democratic Socialists of America’s San Francisco Education Organizing Circle, published an article in December of 2009 on the website Venezuela Analysis titled Hugo Chavez’s Call for a Fifth Socialist International. In her commentary, Dr. Robertson states that by “joining such an international, socialist parties [in the United States] will be able to translate their aspirations for a better world into a framework that can realistically hope to achieve revolutionary change. It has the potential to forge the indispensable link between theory and practice.”

The theory and practice to which she refers, per her writings advocating for an “undistorted” Revolutionary Marxist Party, include the development of curricula that seek to indoctrinate youth with anti-capitalist and anti-Constitutional values – as critical race theory (CRT), liberated ethnic studies (LES), and social justice education (SJE) do. The DSA is open about its goals, stating on its website that “There is a growing national network of educators in DSA working to transform our schools, our unions, and our society. Being a member of DSA means there is a pre-existing network of fellow socialists you can tap for support as you undertake this work.”

Rounding out the political groups linked to a matrix of “black indigenous socialism” are the New Afrikan People’s Organization (NAPO) and the Malcolm X Grassroots Movement (MXGM), two groups that define themselves as a Nation engaged in struggle against the “occupation” of the U.S. government, that seek independence for a region including parts of Louisiana, Mississippi, Arkansas, and Tennessee, and that have participated extensively in the Black Lives Matter protests and related organizing activities.

One of MXGM’s founders, Nehanda Abiodun, fled to Cuba to avoid being charged with conspiracy and racketeering for her role in helping Assata Shakur escape from prison in 1979 and for her role in a 1981 Brink’s armored truck robbery, which involved the murder of three people. Another of the group’s co-founders is Dr. Akinyele Umoja, a professor of Africana studies at Georgia State University.

Dr. Umoja has traveled to Venezuela several times to speak with Nicolás Maduro and has invited Venezuela’s Jesus “Chucho” Garcia to speak at a 2020 MXGM panel discussion on International Solidarity: Connecting Struggles for Self-Determination, at an academic conference at Georgia State University, and at the 2015 memorial service for Chokwe Lumumba, the former legal representative of the Black Liberation Army.

“Chucho” – notably – has also networked with pro-secessionist activists at the 2016 Southern Human Rights Organizing Conference; spoken on a 2016 panel hosted by the Black Alliance for Just Immigration – a network which assists immigrants who entered the U.S. illegally – that was moderated by Black Lives Matter co-founder Opal Tometi and Phillip Agnew, a former senior advisor for Bernie Sanders’s 2020 presidential campaign; and given presentations in 2018 to the National Council for Black Studies’ International Committee on how Venezuela is “providing leadership for the organization of Black Left networks across the Americas, [and] linking and organizing Black radicals in the U.S. and throughout Latin America and the Caribbean”. Chucho is even an advisory board member of the Walter Rodney Foundation – one of over a dozen school-related programs which promote education directed at developing revolutionaries among college and K-12 students.

MXGM members operate daycares, schools, and youth programs, and give presentations at professional teachers’ conferences.

The International Context of Critical Race Theory

Parents and politicians are increasingly aware that something is rotten in the state of American education.

What is currently at stake, however, is not merely a conflict over ideas, but a geopolitical struggle led in the U.S. by socialist-affiliated transnational networks working in the education system.

The recent rise in advocacy for new critical race theory (CRT), liberated ethnic studies (LES), and social justice education (SJE) curricula is evidence of the success of the “matrix of afro-indigenous socialism” in the United States.

While this particular coalition is new, the practice is, notably, just a modern iteration of the historic United Front policies of the U.S. Communist Party when it was directed by the Soviet Union, and reflects what Hugo Chávez proposed at his meeting to form a Fifth International: a “united front that would include the supporters of the struggle against imperialism, from radical nationalism to revolutionary socialist currents”.

The cases described above, which represent a small fraction of such activities, show how organized cadres have sought to use the educational system to undermine the rule of law, the U.S. constitutional order, and America’s national security, sovereignty, and economic prosperity in order to further their agendas – agendas that are, notably, linked to the geopolitical goals of a designated state-sponsor of terrorism (Cuba) and a state whose officials assist narcotics trafficking groups to maintain control over that state (Venezuela).

Paths for Corrective Action

While Americans’ right to believe in and preach the benefits of adopting policies, even anti-Constitutional ones, is sacrosanct, it is folly for state and local governments to give those with subversive intentions employment and administrative authority over a captive audience.

The boldness with which political activists have used public institutions to forward their goals is a testament to the poor oversight exercised by school administrators and citizen organizations.

Many states have educator certification requirements which mandate that educators not use their positions as public servants to indoctrinate youth in a subversive manner.

Section 1012.56 of the Florida Statutes, for example, describes how each prospective teacher must: “File an affidavit that the applicant subscribes to and will uphold the principles incorporated in the Constitution of the United States and the Constitution of the State of Florida and that the information provided in the application is true, accurate, and complete.”

Participating in organized secessionist and revolutionary political groups affiliated with a foreign state does, in our non-expert legal opinion, violate that affidavit, and thus justifies the firing of education professionals who use their positions for political proselytization.

While the constitutionality of firing teachers for their political affiliations is open for debate, parents are not prohibited from submitting appeals to their State Department of Education for the decertification or disqualification of such employees, or from submitting appeals to the U.S. Department of Education to stop providing grants to academic institutions that employ professors affiliated with foreign governments.

Review of Psychology of Intelligence Analysis

Psychology of Intelligence Analysis by Richards J. Heuer, Jr.

Introduction

Improving Intelligence Analysis at CIA: Dick Heuer’s Contribution to Intelligence Analysis

By Jack Davis

Jack Davis served with the Directorate of Intelligence (DI), the National Intelligence Council, and the Office of Training during his CIA career.

Dick Heuer’s ideas on how to improve analysis focus on helping analysts compensate for the human mind’s limitations in dealing with complex problems that typically involve ambiguous information, multiple players, and fluid circumstances. Such multi-faceted estimative challenges have proliferated in the turbulent post-Cold War world.

Leading Contributors to Quality of Analysis

Intelligence analysts, in seeking to make sound judgments, are always under challenge from the complexities of the issues they address and from the demands made on them for timeliness and volume of production. Four Agency individuals over the decades stand out for having made major contributions on how to deal with these challenges to the quality of analysis.

My short list of the people who have had the greatest positive impact on CIA analysis consists of Sherman Kent, Robert Gates, Douglas MacEachin, and Richards Heuer.

Sherman Kent

Sherman Kent’s pathbreaking contributions to analysis cannot be done justice in a couple of paragraphs.

Kent’s greatest contribution to the quality of analysis was to define an honorable place for the analyst–the thoughtful individual “applying the instruments of reason and the scientific method”–in an intelligence world then as now dominated by collectors and operators.

In a second (1965) edition of Strategic Intelligence, Kent took account of the coming computer age as well as human and technical collectors in proclaiming the centrality of the analyst:

Whatever the complexities of the puzzles we strive to solve and whatever the sophisticated techniques we may use to collect the pieces and store them, there can never be a time when the thoughtful man can be supplanted as the intelligence device supreme.

More specifically, Kent advocated application of the techniques of “scientific” study of the past to analysis of complex ongoing situations and estimates of likely future events. Just as rigorous “impartial” analysis could cut through the gaps and ambiguities of information on events long past and point to the most probable explanation, he contended, the powers of the critical mind could turn to events that had not yet transpired to determine the most probable developments.

To this end, Kent developed the concept of the analytic pyramid, featuring a wide base of factual information and sides comprised of sound assumptions, which pointed to the most likely future scenario at the apex.

Robert Gates

Bob Gates served as Deputy Director of Central Intelligence (1986-1989) and as DCI (1991-1993). But his greatest impact on the quality of CIA analysis came during his 1982-1986 stint as Deputy Director for Intelligence (DDI).

Gates’s ideas for overcoming what he saw as insular, flabby, and incoherent argumentation featured the importance of distinguishing between what analysts know and what they believe–that is, to make clear what is “fact” (or reliably reported information) and what is the analyst’s opinion (which had to be persuasively supported with evidence). Among his other tenets were the need to seek the views of non-CIA experts, including academic specialists and policy officials, and to present alternate future scenarios.

Using his authority as DDI, he reviewed critically almost all in-depth assessments and current intelligence articles prior to publication. With help from his deputy and two rotating assistants from the ranks of rising junior managers, Gates raised the standards for DDI review dramatically–in essence, from “looks good to me” to “show me your evidence.”

As the many drafts Gates rejected were sent back to managers who had approved them–accompanied by the DDI’s comments about inconsistency, lack of clarity, substantive bias, and poorly supported judgments–the whole chain of review became much more rigorous. Analysts and their managers raised their standards to avoid the pain of DDI rejection. Both career advancement and ego were at stake.

The rapid and sharp increase in attention paid by analysts and managers to the underpinnings for their substantive judgments probably was without precedent in the Agency’s history. The longer term benefits of the intensified review process were more limited, however, because insufficient attention was given to clarifying tradecraft practices that would promote analytic soundness. More than one participant in the process observed that a lack of guidelines for meeting Gates’s standards led to a large amount of “wheel-spinning.”

Douglas MacEachin

Doug MacEachin, DDI from 1993 to 1996, sought to provide an essential ingredient for ensuring implementation of sound analytic standards: corporate tradecraft standards for analysts. This new tradecraft was aimed in particular at ensuring that sufficient attention would be paid to cognitive challenges in assessing complex issues.

MacEachin’s university major was economics, but he also showed great interest in philosophy. His Agency career–like Gates’–included an extended assignment to a policymaking office. He came away from this experience with new insights on what constitutes “value-added” intelligence usable by policymakers. Subsequently, as CIA’s senior manager on arms control issues, he dealt regularly with a cadre of tough-minded policy officials who let him know in blunt terms what worked as effective policy support and what did not.

MacEachin advocated an approach to structured argumentation called “linchpin analysis,” to which he contributed muscular terms designed to overcome many CIA professionals’ distaste for academic nomenclature. The standard academic term “key variables” became drivers. “Hypotheses” concerning drivers became linchpins–assumptions underlying the argument–and these had to be explicitly spelled out. MacEachin also urged that greater attention be paid to analytical processes for alerting policymakers to changes in circumstances that would increase the likelihood of alternative scenarios.

MacEachin thus worked to put in place systematic and transparent standards for determining whether analysts had met their responsibilities for critical thinking. To spread understanding and application of the standards, he mandated creation of workshops on linchpin analysis for managers and production of a series of notes on analytical tradecraft. He also directed that the DI’s performance on tradecraft standards be tracked and that recognition be given to exemplary assessments. Perhaps most ambitious, he saw to it that instruction on standards for analysis was incorporated into a new training course, “Tradecraft 2000.” Nearly all DI managers and analysts attended this course during 1996-97.

Richards Heuer

Dick Heuer was–and is–much less well known within the CIA than Kent, Gates, and MacEachin. He has not received the wide acclaim that Kent enjoyed as the father of professional analysis, and he has lacked the bureaucratic powers that Gates and MacEachin could wield as DDIs. But his impact on the quality of Agency analysis arguably has been at least as important as theirs.

Heuer’s Central Ideas

Dick Heuer’s writings make three fundamental points about the cognitive challenges intelligence analysts face:

  • The mind is poorly “wired” to deal effectively with both inherent uncertainty (the natural fog surrounding complex, indeterminate intelligence issues) and induced uncertainty (the man-made fog fabricated by denial and deception operations).
  • Even increased awareness of cognitive and other “unmotivated” biases, such as the tendency to see information confirming an already-held judgment more vividly than one sees “disconfirming” information, does little by itself to help analysts deal effectively with uncertainty.
  • Tools and techniques that gear the analyst’s mind to apply higher levels of critical thinking can substantially improve analysis on complex issues on which information is incomplete, ambiguous, and often deliberately distorted. Key examples of such intellectual devices include techniques for structuring information, challenging assumptions, and exploring alternative interpretations.

Given the difficulties inherent in the human processing of complex information, a prudent management system should:

  • Encourage products that (a) clearly delineate their assumptions and chains of inference and (b) specify the degree and source of the uncertainty involved in the conclusions.
  • Emphasize procedures that expose and elaborate alternative points of view–analytic debates, devil’s advocates, interdisciplinary brainstorming, competitive analysis, intra-office peer review of production, and elicitation of outside expertise.

Heuer emphasizes both the value and the dangers of mental models, or mind-sets. In the book’s opening chapter, entitled “Thinking About Thinking,” he notes that:

[Analysts] construct their own version of “reality” on the basis of information provided by the senses, but this sensory input is mediated by complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received.

This process may be visualized as perceiving the world through a lens or screen that channels and focuses and thereby may distort the images that are seen. To achieve the clearest possible image…analysts need more than information…They also need to understand the lenses through which this information passes. These lenses are known by many terms–mental models, mind-sets, biases, or analytic assumptions.

In essence, Heuer sees reliance on mental models to simplify and interpret reality as an unavoidable conceptual mechanism for intelligence analysts–often useful, but at times hazardous. What is required of analysts, in his view, is a commitment to challenge, refine, and challenge again their own working mental models, precisely because these steps are central to sound interpretation of complex and ambiguous issues.

Throughout the book, Heuer is critical of the orthodox prescription of “more and better information” to remedy unsatisfactory analytic performance. He urges that greater attention be paid instead to more intensive exploitation of information already on hand, and that in so doing, analysts continuously challenge and revise their mental models.

Heuer sees mirror-imaging as an example of an unavoidable cognitive trap. No matter how much expertise an analyst applies to interpreting the value systems of foreign entities, when the hard evidence runs out the tendency to project the analyst’s own mind-set takes over. In Chapter 4, Heuer observes:

To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often nothing more than partially informed speculation. Too frequently, foreign behavior appears “irrational” or “not in their own best interest.” Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

Recommendations

Heuer’s advice to Agency leaders, managers, and analysts is pointed: To ensure sustained improvement in assessing complex issues, analysis must be treated as more than a substantive and organizational process. Attention also must be paid to techniques and tools for coping with the inherent limitations on analysts’ mental machinery. He urges that Agency leaders take steps to:

  • Establish an organizational environment that promotes and rewards the kind of critical thinking he advocates–for example, analysis on difficult issues that considers in depth a series of plausible hypotheses rather than allowing the first credible hypothesis to suffice.
  • Expand funding for research on the role such mental processes play in shaping analytical judgments. An Agency that relies on sharp cognitive performance by its analysts must stay abreast of studies on how the mind works–i.e., on how analysts reach judgments.
  • Foster development of tools to assist analysts in assessing information. On tough issues, they need help in improving their mental models and in deriving incisive findings from information they already have; they need such help at least as much as they need more information.

I offer some concluding observations and recommendations, rooted in Heuer’s findings and taking into account the tough tradeoffs facing intelligence professionals:

  •  Commit to a uniform set of tradecraft standards based on the insights in this book. Leaders need to know if analysts have done their cognitive homework before taking corporate responsibility for their judgments. Although every analytical issue can be seen as one of a kind, I suspect that nearly all such topics fit into about a dozen recurring patterns of challenge based largely on variations in substantive uncertainty and policy sensitivity. Corporate standards need to be established for each such category. And the burden should be put on managers to explain why a given analytical assignment requires deviation from the standards. I am convinced that if tradecraft standards are made uniform and transparent, the time saved by curtailing personalistic review of quick-turnaround analysis (e.g., “It reads better to me this way”) could be “re-invested” in doing battle more effectively against cognitive pitfalls. (“Regarding point 3, let’s talk about your assumptions.”)
  •  Pay more honor to “doubt.” Intelligence leaders and policymakers should, in recognition of the cognitive impediments to sound analysis, establish ground rules that enable analysts, after doing their best to clarify an issue, to express doubts more openly. They should be encouraged to list gaps in information and other obstacles to confident judgment. Such conclusions as “We do not know” or “There are several potentially valid ways to assess this issue” should be regarded as badges of sound analysis, not as dereliction of analytic duty.

Find a couple of successors to Dick Heuer. Fund their research. Heed their findings.

PART ONE–OUR MENTAL MACHINERY

Chapter 1
Thinking About Thinking

Of the diverse problems that impede accurate intelligence analysis, those inherent in human mental processes are surely among the most important and most difficult to deal with. Intelligence analysis is fundamentally a mental process, but understanding this process is hindered by the lack of conscious awareness of the workings of our own minds.

A basic finding of cognitive psychology is that people have no conscious experience of most of what happens in the human mind. Many functions associated with perception, memory, and information processing are conducted prior to and independently of any conscious direction. What appears spontaneously in consciousness is the result of thinking, not the process of thinking.

Weaknesses and biases inherent in human thinking processes can be demonstrated through carefully designed experiments. They can be alleviated by conscious application of tools and techniques that should be in the analytical tradecraft toolkit of all intelligence analysts.

Thinking analytically is a skill like carpentry or driving a car. It can be taught, it can be learned, and it can improve with practice. But like many other skills, such as riding a bike, it is not learned by sitting in a classroom and being told how to do it. Analysts learn by doing. Most people achieve at least a minimally acceptable level of analytical performance with little conscious effort beyond completing their education. With much effort and hard work, however, analysts can achieve a level of excellence beyond what comes naturally.

Expert guidance may be required to modify long-established analytical habits to achieve an optimal level of analytical excellence. An analytical coaching staff to help young analysts hone their analytical tradecraft would be a valuable supplement to classroom instruction.

One key to successful learning is motivation. Some of CIA’s best analysts developed their skills as a consequence of experiencing analytical failure early in their careers. Failure motivated them to be more self-conscious about how they do analysis and to sharpen their thinking process.

Part I identifies some limitations inherent in human mental processes. Part II discusses analytical tradecraft–simple tools and approaches for overcoming these limitations and thinking more systematically. Chapter 8, “Analysis of Competing Hypotheses,” is arguably the most important single chapter. Part III presents information about cognitive biases–the technical term for predictable mental errors caused by simplified information processing strategies. A final chapter presents a checklist for analysts and recommendations for how managers of intelligence analysis can help create an environment in which analytical excellence flourishes.

Herbert Simon first advanced the concept of “bounded” or limited rationality.

Because of limits in human mental capacity, he argued, the mind cannot cope directly with the complexity of the world. Rather, we construct a simplified mental model of reality and then work with this model. We behave rationally within the confines of our mental model, but this model is not always well adapted to the requirements of the real world. The concept of bounded rationality has come to be recognized widely, though not universally, both as an accurate portrayal of human judgment and choice and as a sensible adjustment to the limitations inherent in how the human mind functions.

Much psychological research on perception, memory, attention span, and reasoning capacity documents the limitations in our “mental machinery” identified by Simon.

Many scholars have applied these psychological insights to the study of international political behavior. A similar psychological perspective underlies some writings on intelligence failure and strategic surprise.

This book differs from those works in two respects. It analyzes problems from the perspective of intelligence analysts rather than policymakers. And it documents the impact of mental processes largely through experiments in cognitive psychology rather than through examples from diplomatic and military history.

A central focus of this book is to illuminate the role of the observer in determining what is observed and how it is interpreted. People construct their own version of “reality” on the basis of information provided by the senses, but this sensory input is mediated by complex mental processes that determine which information is attended to, how it is organized, and the meaning attributed to it. What people perceive, how readily they perceive it, and how they process this information after receiving it are all strongly influenced by past experience, education, cultural values, role requirements, and organizational norms, as well as by the specifics of the information received.

In this book, the terms mental model and mind-set are used more or less interchangeably, although a mental model is likely to be better developed and articulated than a mind-set. An analytical assumption is one part of a mental model or mind-set. The biases discussed in this book result from how the mind works and are independent of any substantive mental model or mind-set.

Intelligence analysts must understand themselves before they can understand others. Training is needed to (a) increase self-awareness concerning generic problems in how people perceive and make analytical judgments concerning foreign events, and (b) provide guidance and practice in overcoming these problems.

The disadvantage of a mind-set is that it can color and control our perception to the extent that an experienced specialist may be among the last to see what is really happening when events take a new and unexpected turn. When faced with a major paradigm shift, analysts who know the most about a subject have the most to unlearn.

The advantage of mind-sets is that they help analysts get the production out on time and keep things going effectively between those watershed events that become chapter headings in the history books.

What analysts need is more truly useful information–mostly reliable HUMINT from knowledgeable insiders–to help them make good decisions. Or they need a more accurate mental model and better analytical tools to help them sort through, make sense of, and get the most out of the available ambiguous and conflicting information.

Psychological research also offers to intelligence analysts additional insights that are beyond the scope of this book. Problems are not limited to how analysts perceive and process information. Intelligence analysts often work in small groups and always within the context of a large, bureaucratic organization. Problems are inherent in the processes that occur at all three levels–individual, small group, and organization. This book focuses on problems inherent in analysts’ mental processes, inasmuch as these are probably the most insidious. Analysts can observe and get a feel for these problems in small-group and organizational processes, but it is very difficult, at best, to be self-conscious about the workings of one’s own mind.

Chapter 2

Perception: Why Can’t We See What Is There To Be Seen?

The process of perception links people to their environment and is critical to accurate understanding of the world about us. Accurate intelligence analysis obviously requires accurate perception. Yet research into human perception demonstrates that the process is beset by many pitfalls. Moreover, the circumstances under which intelligence analysis is conducted are precisely the circumstances in which accurate perception tends to be most difficult. This chapter discusses perception in general, then applies this information to illuminate some of the difficulties of intelligence analysis.

We tend to perceive what we expect to perceive.

A corollary of this principle is that it takes more information, and more unambiguous information, to recognize an unexpected phenomenon than an expected one.

Patterns of expectation become so deeply embedded that they continue to influence perceptions even when people are alerted to and try to take account of the existence of data that do not fit their preconceptions. Trying to be objective does not ensure accurate perception.

Patterns of expectations tell analysts, subconsciously, what to look for, what is important, and how to interpret what is seen. These patterns form a mind-set that predisposes analysts to think in certain ways. A mind-set is akin to a screen or lens through which one perceives the world.

There is a tendency to think of a mind-set as something bad, to be avoided. According to this line of argument, one should have an open mind and be influenced only by the facts rather than by preconceived notions! That is an unreachable ideal. There is no such thing as “the facts of the case.” There is only a very selective subset of the overall mass of data to which one has been subjected that one takes as facts and judges to be relevant to the question at issue.

Actually, mind-sets are neither good nor bad; they are unavoidable. People have no conceivable way of coping with the volume of stimuli that impinge upon their senses, or with the volume and complexity of the data they have to analyze, without some kind of simplifying preconceptions about what to expect, what is important, and what is related to what. “There is a grain of truth in the otherwise pernicious maxim that an open mind is an empty mind.” Analysts do not achieve objective analysis by avoiding preconceptions; that would be ignorance or self-delusion. Objectivity is achieved by making basic assumptions and reasoning as explicit as possible so that they can be challenged by others and analysts can, themselves, examine their validity.

One of the most important characteristics of mind-sets is: Mind-sets tend to be quick to form but resistant to change.

Once an observer has formed an image–that is, once he or she has developed a mind- set or expectation concerning the phenomenon being observed–this conditions future perceptions of that phenomenon.

This is the basis for another general principle of perception: New information is assimilated to existing images.

This principle explains why gradual, evolutionary change often goes unnoticed. It also explains the phenomenon that an intelligence analyst assigned to work on a topic or country for the first time may generate accurate insights that have been overlooked by experienced analysts who have worked on the same problem for 10 years. A fresh perspective is sometimes useful; past experience can handicap as well as aid analysis.

This tendency to assimilate new data into pre-existing images is greater “the more ambiguous the information, the more confident the actor is of the validity of his image, and the greater his commitment to the established view.”

One of the more difficult mental feats is to take a familiar body of data and reorganize it visually or mentally to perceive it from a different perspective. Yet this is what intelligence analysts are constantly required to do. In order to understand international interactions, analysts must understand the situation as it appears to each of the opposing forces, and constantly shift back and forth from one perspective to the other as they try to fathom how each side interprets an ongoing series of interactions. Trying to perceive an adversary’s interpretations of international events, as well as US interpretations of those same events, is comparable to seeing both the old and young woman in Figure 3.

A related point concerns the impact of substandard conditions of perception. The basic principle is:

Initial exposure to blurred or ambiguous stimuli interferes with accurate perception even after more and better information becomes available.

What happened in this experiment is what presumably happens in real life; despite ambiguous stimuli, people form some sort of tentative hypothesis about what they see. The longer they are exposed to this blurred image, the greater confidence they develop in this initial and perhaps erroneous impression, so the greater the impact this initial impression has on subsequent perceptions. For a time, as the picture becomes clearer, there is no obvious contradiction; the new data are assimilated into the previous image, and the initial interpretation is maintained until the contradiction becomes so obvious that it forces itself upon our consciousness.

The early but incorrect impression tends to persist because the amount of information necessary to invalidate a hypothesis is considerably greater than the amount of information required to make an initial interpretation. The problem is not that there is any inherent difficulty in grasping new perceptions or new ideas, but that established perceptions are so difficult to change. People form impressions on the basis of very little information, but once formed, they do not reject or change them unless they obtain rather solid evidence. Analysts might seek to limit the adverse impact of this tendency by suspending judgment for as long as possible as new information is being received.

Implications for Intelligence Analysis

Comprehending the nature of perception has significant implications for understanding the nature and limitations of intelligence analysis. The circumstances under which accurate perception is most difficult are exactly the circumstances under which intelligence analysis is generally conducted–dealing with highly ambiguous situations on the basis of information that is processed incrementally under pressure for early judgment. This is a recipe for inaccurate perception.

Intelligence seeks to illuminate the unknown. Almost by definition, intelligence analysis deals with highly ambiguous situations. As previously noted, the greater the ambiguity of the stimuli, the greater the impact of expectations and pre-existing images on the perception of that stimuli. Thus, despite maximum striving for objectivity, the intelligence analyst’s own preconceptions are likely to exert a greater impact on the analytical product than in other fields where an analyst is working with less ambiguous and less discordant information.

Moreover, the intelligence analyst is among the first to look at new problems at an early stage when the evidence is very fuzzy indeed. The analyst then follows a problem as additional increments of evidence are received and the picture gradually clarifies–as happened with test subjects in the experiment demonstrating that initial exposure to blurred stimuli interferes with accurate perception even after more and better information becomes available. If the results of this experiment can be generalized to apply to intelligence analysts, the experiment suggests that an analyst who starts observing a potential problem situation at an early and unclear stage is at a disadvantage as compared with others, such as policymakers, whose first exposure may come at a later stage when more and better information is available.

The receipt of information in small increments over time also facilitates assimilation of this information into the analyst’s existing views. No one item of information may be sufficient to prompt the analyst to change a previous view. The cumulative message inherent in many pieces of information may be significant but is attenuated when this information is not examined as a whole. The Intelligence Community’s review of its performance before the 1973 Arab-Israeli War noted:

The problem of incremental analysis–especially as it applies to the current intelligence process–was also at work in the period preceding hostilities. Analysts, according to their own accounts, were often proceeding on the basis of the day’s take, hastily comparing it with material received the previous day. They then produced in ‘assembly line fashion’ items which may have reflected perceptive intuition but which [did not] accrue from a systematic consideration of an accumulated body of integrated evidence.

And finally, the intelligence analyst operates in an environment that exerts strong pressures for what psychologists call premature closure. Customer demand for interpretive analysis is greatest within two or three days after an event occurs. The system requires the intelligence analyst to come up with an almost instant diagnosis before sufficient hard information, and the broader background information that may be needed to gain perspective, become available to make possible a well-grounded judgment. This diagnosis can only be based upon the analyst’s preconceptions concerning how and why events normally transpire in a given society.

The problems outlined here have implications for the management as well as the conduct of analysis. Given the difficulties inherent in the human processing of complex information, a prudent management system should:

  • Encourage products that clearly delineate their assumptions and chains of inference and that specify the degree and source of uncertainty involved in the conclusions.
  • Support analyses that periodically re-examine key problems from the ground up in order to avoid the pitfalls of the incremental approach.
  • Emphasize procedures that expose and elaborate alternative points of view.
  • Educate consumers about the limitations as well as the capabilities of intelligence analysis; define a set of realistic expectations as a standard against which to judge analytical performance.

Chapter 3
Memory: How Do We Remember What We Know?

Differences between stronger and weaker analytical performance are attributable in large measure to differences in the organization of data and experience in analysts’ long-term memory. The contents of memory form a continuous input into the analytical process, and anything that influences what information is remembered or retrieved from memory also influences the outcome of analysis.

This chapter discusses the capabilities and limitations of several components of the memory system. Sensory information storage and short-term memory are beset by severe limitations of capacity, while long-term memory, for all practical purposes, has a virtually infinite capacity. With long-term memory, the problems concern getting information into it and retrieving information once it is there, not physical limits on the amount of information that may be stored. Understanding how memory works provides insight into several analytical strengths and weaknesses.

Components of the Memory System

What is commonly called memory is not a single, simple function. It is an extraordinarily complex system of diverse components and processes. There are at least three, and very likely more, distinct memory processes. The most important from the standpoint of this discussion and best documented by scientific research are sensory information storage (SIS), short-term memory (STM), and long-term memory (LTM). Each differs with respect to function, the form of information held, the length of time information is retained, and the amount of information-handling capacity. Memory researchers also posit the existence of an interpretive mechanism and an overall memory monitor or control mechanism that guides interaction among various elements of the memory system.

Sensory Information Storage

Sensory information storage holds sensory images for several tenths of a second after they are received by the sensory organs. The functioning of SIS may be observed if you close your eyes, then open and close them again as rapidly as possible: the visual image persists for a moment after the eyes close, then fades.

Short-Term Memory

Information passes from SIS into short-term memory, where again it is held for only a short period of time–a few seconds or minutes. Whereas SIS holds the complete image, STM stores only the interpretation of the image. If a sentence is spoken, SIS retains the sounds, while STM holds the words formed by these sounds.

Retrieval of information from STM is direct and immediate because the information has never left the conscious mind. Information can be maintained in STM indefinitely by a process of “rehearsal”–repeating it over and over again. But while rehearsing some items to retain them in STM, people cannot simultaneously add new items.

Long-Term Memory

Some information retained in STM is processed into long-term memory. This information on past experiences is filed away in the recesses of the mind and must be retrieved before it can be used. In contrast to the immediate recall of current experience from STM, retrieval of information from LTM is indirect and sometimes laborious.

Loss of detail as sensory stimuli are interpreted and passed from SIS into STM and then into LTM is the basis for the phenomenon of selective perception discussed in the previous chapter. It imposes limits on subsequent stages of analysis, inasmuch as the lost data can never be retrieved. People can never take their mind back to what was actually there in sensory information storage or short-term memory. They can only retrieve their interpretation of what they thought was there as stored in LTM.

There are no practical limits to the amount of information that may be stored in LTM. The limitations of LTM are the difficulty of processing information into it and retrieving information from it.

Despite much research on memory, little agreement exists on many critical points. What is presented here is probably the lowest common denominator on which most researchers would agree.

Organization of Information in Long-Term Memory

Analysts' needs are best served by a very simple image of the structure of memory.

Imagine memory as a massive, multidimensional spider web. This image captures what is, for the purposes of this book, perhaps the most important property of information stored in memory–its interconnectedness. One thought leads to another. It is possible to start at any one point in memory and follow a perhaps labyrinthine path to reach any other point. Information is retrieved by tracing through the network of interconnections to the place where it is stored.

Retrievability is influenced by the number of locations in which information is stored and the number and strength of pathways from this information to other concepts that might be activated by incoming information. The more frequently a path is followed, the stronger that path becomes and the more readily available the information located along that path. If one has not thought of a subject for some time, it may be difficult to recall details. After thinking our way back into the appropriate context and finding the general location in our memory, the interconnections become more readily available. We begin to remember names, places, and events that had seemed to be forgotten.
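
To make the spider-web metaphor concrete, here is a minimal sketch in code: an illustrative toy, not a validated cognitive model. The node names, weights, and the greedy walk are all invented for illustration.

```python
# Memory as a weighted graph in which every traversal of a path
# strengthens the links it uses, making later recall easier.
from collections import defaultdict

class MemoryWeb:
    def __init__(self):
        # edge weight doubles as "path strength"
        self.links = defaultdict(lambda: defaultdict(float))

    def associate(self, a, b, strength=1.0):
        # connections run in both directions through the web
        self.links[a][b] += strength
        self.links[b][a] += strength

    def recall(self, start, target, strengthen=0.5):
        """Greedy walk from start toward target; every link used is
        reinforced, so the same recall gets easier next time."""
        path, node, seen = [start], start, {start}
        while node != target:
            options = {n: w for n, w in self.links[node].items() if n not in seen}
            if not options:
                return None  # no path found: the detail "seems forgotten"
            node = max(options, key=options.get)  # follow the strongest link
            seen.add(node)
            path.append(node)
        for a, b in zip(path, path[1:]):
            self.associate(a, b, strengthen)  # use strengthens the path
        return path

web = MemoryWeb()
web.associate("bar", "tavern")
web.associate("bar", "thirst", 2.0)        # a well-worn association
web.associate("thirst", "summer trip")
print(web.recall("bar", "summer trip"))    # ['bar', 'thirst', 'summer trip']
```

Each successful recall makes the same path easier to follow the next time, which is exactly the double-edged property described in the next paragraph.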

Once people have started thinking about a problem one way, the same mental circuits or pathways get activated and strengthened each time they think about it. This facilitates the retrieval of information. These same pathways, however, also become the mental ruts that make it difficult to reorganize the information mentally so as to see it from a different perspective.

One useful concept of memory organization is what some cognitive psychologists call a “schema.” A schema is any pattern of relationships among data stored in memory. It is any set of nodes and links between them in the spider web of memory that hang together so strongly that they can be retrieved and used more or less as a single unit.

For example, a person may have a schema for a bar that when activated immediately makes available in memory knowledge of the properties of a bar and what distinguishes a bar, say, from a tavern. It brings back memories of specific bars that may in turn stimulate memories of thirst, guilt, or other feelings or circumstances. People also have schemata (plural for schema) for abstract concepts such as a socialist economic system and what distinguishes it from a capitalist or communist system. Schemata for phenomena such as success or failure in making an accurate intelligence estimate will include links to those elements of memory that explain typical causes and implications of success or failure. There must also be schemata for processes that link memories of the various steps involved in long division, regression analysis, or simply making inferences from evidence and writing an intelligence report.

Any given point in memory may be connected to many different overlapping schemata. This system is highly complex and not well understood.

This conception nonetheless serves the purpose of emphasizing that memory does have structure. It also shows that how knowledge is connected in memory is critically important in determining what information is retrieved in response to any stimulus and how that information is used in reasoning.

Concepts and schemata stored in memory exercise a powerful influence on the formation of perceptions from sensory data.

If information does not fit into what people know, or think they know, they have great difficulty processing it.

The content of schemata in memory is a principal factor distinguishing stronger from weaker analytical ability. This is aptly illustrated by an experiment with chess players. When chess grandmasters and masters and ordinary chess players were given five to 10 seconds to note the position of 20 to 25 chess pieces placed randomly on a chess board, the masters and ordinary players were alike in being able to remember the places of only about six pieces. If the positions of the pieces were taken from an actual game (unknown to the test subjects), however, the grandmasters and masters were usually able to reproduce almost all the positions without error, while the ordinary players were still able to place correctly only a half-dozen pieces.

That the unique ability of the chess masters did not result from a pure feat of memory is indicated by the masters’ inability to perform better than ordinary players in remembering randomly placed positions. Their exceptional performance in remembering positions from actual games stems from their ability to immediately perceive patterns that enable them to process many bits of information together as a single chunk or schema. The chess master has available in long-term memory many schemata that connect individual positions together in coherent patterns. When the position of chess pieces on the board corresponds to a recognized schema, it is very easy for the master to remember not only the positions of the pieces, but the outcomes of previous games in which the pieces were in these positions. Similarly, the unique abilities of the master analyst are attributable to the schemata in long-term memory that enable the analyst to perceive patterns in data that pass undetected by the average observer.
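
The chunking effect can be expressed as a small computation. The sketch below is a hedged illustration: the seven-chunk limit is a rough convention, and the piece lists and pattern names are invented, but it shows how schemata change the unit of memory.

```python
STM_SPAN = 7  # rough short-term memory limit, measured in chunks

known_patterns = {  # the master's schemata: groupings seen in real games
    ("Ke1", "Rh1", "Rf1"): "castled kingside",
    ("Pd4", "Pe5", "Nf3"): "advanced center",
}

def chunks_needed(pieces, schemata):
    """Count memory units: a group matching a known schema costs one
    chunk; every leftover piece costs a chunk of its own."""
    remaining, count = set(pieces), 0
    for pattern in schemata:
        if set(pattern) <= remaining:
            remaining -= set(pattern)
            count += 1
    return count + len(remaining)

game_position = ["Ke1", "Rh1", "Rf1", "Pd4", "Pe5", "Nf3", "Qd8"]
print(chunks_needed(game_position, known_patterns))  # 3: well within span
print(chunks_needed(game_position, {}))              # 7: at the limit
```

For randomly placed pieces neither player's schemata match anything, so master and novice alike fall back to one chunk per piece, which is the experimental result described above.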

Getting Information Into and Out of Long-Term Memory. How well a person learned something was once thought to depend upon how long the information was held in short-term memory or how many times it was repeated. Research evidence now suggests that neither of these factors plays the critical role. Continuous repetition does not necessarily guarantee that something will be remembered. The key factor in transferring information from short-term to long-term memory is the development of associations between the new information and schemata already available in memory. This, in turn, depends upon two variables: the extent to which the information to be learned relates to an already existing schema, and the level of processing given to the new information.

Depth of processing is the second important variable in determining how well information is retained. Depth of processing refers to the amount of effort and cognitive capacity employed to process information, and the number and strength of associations that are thereby forged between the data to be learned and knowledge already in memory. In experiments to test how well people remember a list of words, test subjects might be asked to perform different tasks that reflect different levels of processing. The following illustrative tasks are listed in order of the depth of mental processing required: say how many letters there are in each word on the list, give a word that rhymes with each word, make a mental image of each word, make up a story that incorporates each word.

It turns out that the greater the depth of processing, the greater the ability to recall words on a list. This result holds true regardless of whether the test subjects are informed in advance that the purpose of the experiment is to test them on their memory. Advising test subjects to expect a test makes almost no difference in their performance, presumably because it only leads them to rehearse the information in short-term memory, which is ineffective as compared with other forms of processing.

There are three ways in which information may be learned or committed to memory: by rote, assimilation, or use of a mnemonic device.

By Rote. Material to be learned is repeated verbally with sufficient frequency that it can later be repeated from memory without use of any memory aids. When information is learned by rote, it forms a separate schema not closely interwoven with previously held knowledge. That is, the mental processing adds little by way of elaboration to the new information, and the new information adds little to the elaboration of existing schemata. Learning by rote is a brute force technique. It seems to be the least efficient way of remembering.

By Assimilation. Information is learned by assimilation when the structure or substance of the information fits into some memory schema already possessed by the learner. The new information is assimilated to or linked to the existing schema and can be retrieved readily by first accessing the existing schema and then reconstructing the new information. Assimilation involves learning by comprehension and is, therefore, a desirable method, but it can only be used to learn information that is somehow related to our previous experience.

By Using A Mnemonic Device. A mnemonic device is any means of organizing or encoding information for the purpose of making it easier to remember. A high school student cramming for a geography test might use the acronym “HOMES” as a device for remembering the first letter of each of the Great Lakes–Huron, Ontario, etc.
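
In code terms, a mnemonic like HOMES is simply a cheap index built over the material to be remembered: a compact encoding for storage and a set of hooks for retrieval. A minimal sketch:

```python
great_lakes = ["Huron", "Ontario", "Michigan", "Erie", "Superior"]

# Encoding: compress the list into one easily remembered unit.
acronym = "".join(name[0] for name in great_lakes)
print(acronym)  # HOMES

# Retrieval: each letter serves as a hook back to the stored item.
hooks = {name[0]: name for name in great_lakes}
print([hooks[letter] for letter in acronym])
```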

Memory and Intelligence Analysis

An analyst’s memory provides continuous input into the analytical process. This input is of two types–additional factual information on historical background and context, and schemata the analyst uses to determine the meaning of newly acquired information. Information from memory may force itself on the analyst’s awareness without any deliberate effort by the analyst to remember; or, recall of the information may require considerable time and strain. In either case, anything that influences what information is remembered or retrieved from memory also influences intelligence analysis.

Judgment is the joint product of the available information and what the analyst brings to the analysis of this information.

Substantive knowledge and analytical experience determine the store of memories and schemata the analyst draws upon to generate and evaluate hypotheses. The key is not a simple ability to recall facts, but the ability to recall patterns that relate facts to each other and to broader concepts–and to employ procedures that facilitate this process.

Stretching the Limits of Working Memory

Limited information is available on what is commonly thought of as “working memory”–the collection of information that an analyst holds in the forefront of the mind as he or she does analysis. The general concept of working memory seems clear from personal introspection.

In writing this chapter, I am very conscious of the constraints on my ability to keep many pieces of information in mind while experimenting with ways to organize this information and seeking words to express my thoughts. To help offset these limits on my working memory, I have accumulated a large number of written notes containing ideas and half-written paragraphs. Only by using such external memory aids am I able to cope with the volume and complexity of the information I want to use.

The recommended technique for coping with this limitation of working memory is called externalizing the problem–getting it out of one's head and down on paper in some simplified form that shows the main elements of the problem and how they relate to each other. This involves breaking down a problem into its component parts and then preparing a simple "model" that shows how the parts relate to the whole. When working on a small part of the problem, the model keeps one from losing sight of the whole.

A simple model of an analytical problem facilitates the assimilation of new information into long-term memory; it provides a structure to which bits and pieces of information can be related. The model defines the categories for filing information in memory and retrieving it on demand. In other words, it serves as a mnemonic device that provides the hooks on which to hang information so that it can be found when needed.

“Hardening of the Categories.” Memory processes tend to work with generalized categories. If people do not have an appropriate category for something, they are unlikely to perceive it, store it in memory, or be able to retrieve it from memory later. If categories are drawn incorrectly, people are likely to perceive and remember things inaccurately. When information about phenomena that are different in important respects nonetheless gets stored in memory under a single concept, errors of analysis may result.

“Hardening of the categories” is a common analytical weakness. Fine distinctions among categories and tolerance for ambiguity contribute to more effective analysis.

Things That Influence What Is Remembered. Factors that influence how information is stored in memory and that affect future retrievability include: being the first-stored information on a given topic, the amount of attention focused on the information, the credibility of the information, and the importance attributed to the information at the moment of storage. By influencing the content of memory, all of these factors also influence the outcome of intelligence analysis.

Memory Rarely Changes Retroactively. Analysts often receive new information that should, logically, cause them to reevaluate the credibility or significance of previous information. Ideally, the earlier information should then become either more salient and readily available in memory, or less so. But it does not work that way. Unfortunately, memories are seldom reassessed or reorganized retroactively in response to new information. For example, information that is dismissed as unimportant or irrelevant because it did not fit an analyst’s expectations does not become more memorable even if the analyst changes his or her thinking to the point where the same information, received today, would be recognized as very significant.

Memory Can Handicap as Well as Help

Understanding how memory works provides some insight into the nature of creativity, openness to new information, and breaking mind-sets. All involve spinning new links in the spider web of memory–links among facts, concepts, and schemata that previously were not connected or only weakly connected.

There is, however, a crucial difference between the chess master and the master intelligence analyst. Although the chess master faces a different opponent in each match, the environment in which each contest takes place remains stable and unchanging: the permissible moves of the diverse pieces are rigidly determined, and the rules cannot be changed without the master’s knowledge. Once the chess master develops an accurate schema, there is no need to change it. The intelligence analyst, however, must cope with a rapidly changing world. Many countries that previously were US adversaries are now our formal or de facto allies. The American and Russian governments and societies are not the same today as they were 20 or even 10 or five years ago. Schemata that were valid yesterday may no longer be functional tomorrow.

Learning new schemata often requires the unlearning of existing ones, and this is exceedingly difficult. It is always easier to learn a new habit than to unlearn an old one.

PART II–TOOLS FOR THINKING

Chapter 4

Strategies for Analytical Judgment: Transcending the Limits of Incomplete Information

When intelligence analysts make thoughtful analytical judgments, how do they do it? In seeking answers to this question, this chapter discusses the strengths and limitations of situational logic, theory, comparison, and simple immersion in the data as strategies for the generation and evaluation of hypotheses. The final section discusses alternative strategies for choosing among hypotheses. One strategy too often used by intelligence analysts is described as “satisficing”–choosing the first hypothesis that appears good enough rather than carefully identifying all possible hypotheses and determining which is most consistent with the evidence.

Intelligence analysts should be self-conscious about their reasoning process. They should think about how they make judgments and reach conclusions, not just about the judgments and conclusions themselves.

Judgment is what analysts use to fill gaps in their knowledge. It entails going beyond the available information and is the principal means of coping with uncertainty. It always involves an analytical leap, from the known into the uncertain.

Judgment is an integral part of all intelligence analysis. While the optimal goal of intelligence collection is complete knowledge, this goal is seldom reached in practice. Almost by definition of the intelligence mission, intelligence issues involve considerable uncertainty.

Analytical strategies are important because they influence the data one attends to. They determine where the analyst shines his or her searchlight, and this inevitably affects the outcome of the analytical process.

Strategies for Generating and Evaluating Hypotheses

The goal here is to understand the several kinds of careful, conscientious analysis one would hope and expect to find among a cadre of intelligence analysts dealing with highly complex issues.

Situational Logic

This is the most common operating mode for intelligence analysts. Generation and analysis of hypotheses start with consideration of concrete elements of the current situation, rather than with broad generalizations that encompass many similar cases. The situation is regarded as one-of-a-kind, so that it must be understood in terms of its own unique logic, rather than as one example of a broad class of comparable events.

Starting with the known facts of the current situation and an understanding of the unique forces at work at that particular time and place, the analyst seeks to identify the logical antecedents or consequences of this situation. A scenario is developed that hangs together as a plausible narrative. The analyst may work backwards to explain the origins or causes of the current situation or forward to estimate the future outcome.

Situational logic commonly focuses on tracing cause-effect relationships or, when dealing with purposive behavior, means-ends relationships. The analyst identifies the goals being pursued and explains why the foreign actor(s) believe certain means will achieve those goals.

Particular strengths of situational logic are its wide applicability and ability to integrate a large volume of relevant detail. Any situation, however unique, may be analyzed in this manner.

Situational logic as an analytical strategy also has two principal weaknesses. One is that it is so difficult to understand the mental and bureaucratic processes of foreign leaders and governments. To see the options faced by foreign leaders as these leaders see them, one must understand their values and assumptions and even their misperceptions and misunderstandings. Without such insight, interpreting foreign leaders’ decisions or forecasting future decisions is often little more than partially informed speculation. Too frequently, foreign behavior appears “irrational” or “not in their own best interest.” Such conclusions often indicate analysts have projected American values and conceptual frameworks onto the foreign leaders and societies, rather than understanding the logic of the situation as it appears to them.

The second weakness is that situational logic fails to exploit the theoretical knowledge derived from study of similar phenomena in other countries and other time periods. The subject of national separatist movements illustrates the point. Nationalism is a centuries-old problem, but most Western industrial democracies have been considered well-integrated national communities.

Analyzing many examples of a similar phenomenon, as discussed below, enables one to probe more fundamental causes than those normally considered in logic-of-the-situation analysis. The proximate causes identified by situational logic appear, from the broader perspective of theoretical analysis, to be but symptoms indicating the presence of more fundamental causal factors. A better understanding of these fundamental causes is critical to effective forecasting, especially over the longer range.

Applying Theory

Theory is an academic term not much in vogue in the Intelligence Community, but it is unavoidable in any discussion of analytical judgment. In one popular meaning of the term, “theoretical” is associated with the terms “impractical” and “unrealistic.” Needless to say, it is used here in a quite different sense.

A theory is a generalization based on the study of many examples of some phenomenon. It specifies that when a given set of conditions arises, certain other conditions will follow either with certainty or with some degree of probability. In other words, conclusions are judged to follow from a set of conditions and a finding that these conditions apply in the specific case being analyzed.

What academics refer to as theory is really only a more explicit version of what intelligence analysts think of as their basic understanding of how individuals, institutions, and political systems normally behave.

Theoretical propositions frequently fail to specify the time frame within which developments might be anticipated to occur.

Further elaboration of the theory relating economic development and foreign ideas to political instability in feudal societies would identify early warning indicators that analysts might look for. Such indicators would guide both intelligence collection and analysis of sociopolitical and socioeconomic data and lead to hypotheses concerning when or under what circumstances such an event might occur.

But if theory enables the analyst to transcend the limits of available data, it may also provide the basis for ignoring evidence that is truly indicative of future events.

Figure 4 below illustrates graphically the difference between theory and situational logic. Situational logic looks at the evidence within a single country on multiple interrelated issues, as shown by the column highlighted in gray. This is a typical area studies approach. Theoretical analysis looks at the evidence related to a single issue in multiple countries, as shown by the row highlighted in gray. This is a typical social science approach.

The distinction between theory and situational logic is not as clear as it may seem from this graphic, however. Logic-of-the-situation analysis also draws heavily on theoretical assumptions. How does the analyst select the most significant elements to describe the current situation, or identify the causes or consequences of these elements, without some implicit theory that relates the likelihood of certain outcomes to certain antecedent conditions?

Comparison with Historical Situations

A third approach for going beyond the available information is comparison. An analyst seeks understanding of current events by comparing them with historical precedents in the same country, or with similar events in other countries. Analogy is one form of comparison. When an historical situation is deemed comparable to current circumstances, analysts use their understanding of the historical precedent to fill gaps in their understanding of the current situation. Unknown elements of the present are assumed to be the same as known elements of the historical precedent. Thus, analysts reason that the same forces are at work, that the outcome of the present situation is likely to be similar to the outcome of the historical situation, or that a certain policy is required in order to avoid the same outcome as in the past.

Comparison differs from situational logic in that the present situation is interpreted in the light of a more or less explicit conceptual model that is created by looking at similar situations in other times or places. It differs from theoretical analysis in that this conceptual model is based on a single case or only a few cases, rather than on many similar cases. Comparison may also be used to generate theory, but this is a more narrow kind of theorizing that cannot be validated nearly as well as generalizations inferred from many comparable cases.

Reasoning by comparison is a convenient shortcut, one chosen when neither data nor theory are available for the other analytical strategies, or simply because it is easier and less time-consuming than a more detailed analysis. A careful comparative analysis starts by specifying key elements of the present situation. The analyst then seeks out one or more historical precedents that may shed light on the present. Frequently, however, a historical precedent may be so vivid and powerful that it imposes itself upon a person’s thinking from the outset, conditioning them to perceive the present primarily in terms of its similarity to the past. This is reasoning by analogy. As Robert Jervis noted, “historical analogies often precede, rather than follow, a careful analysis of a situation.”

The tendency to relate contemporary events to earlier events as a guide to understanding is a powerful one. Comparison helps achieve understanding by reducing the unfamiliar to the familiar. In the absence of data required for a full understanding of the current situation, reasoning by comparison may be the only alternative. Anyone taking this approach, however, should be aware of the significant potential for error. This course is an implicit admission of the lack of sufficient information to understand the present situation in its own right, and lack of relevant theory to relate the present situation to many other comparable situations.

In a short book that ought to be familiar to all intelligence analysts, Ernest May traced the impact of historical analogies on US foreign policy. He found that because of reasoning by analogy, US policymakers tend to be one generation behind, determined to avoid the mistakes of the previous generation. They pursue the policies that would have been most appropriate in the historical situation but are not necessarily well adapted to the current one.

Communist aggression after World War II was seen as analogous to Nazi aggression, leading to a policy of containment; applied a generation earlier, containment might have prevented World War II.

May argues that policymakers often perceive problems in terms of analogies with the past, but that they ordinarily use history badly:

When resorting to an analogy, they tend to seize upon the first that comes to mind. They do not research more widely. Nor do they pause to analyze the case, test its fitness, or even ask in what ways it might be misleading.

As compared with policymakers, intelligence analysts have more time available to “analyze rather than analogize.” Intelligence analysts tend to be good historians, with a large number of historical precedents available for recall. The greater the number of potential analogues an analyst has at his or her disposal, the greater the likelihood of selecting an appropriate one. The greater the depth of an analyst’s knowledge, the greater the chances the analyst will perceive the differences as well as the similarities between two situations. Even under the best of circumstances, however, inferences based on comparison with a single analogous situation probably are more prone to error than most other forms of inference.

The most productive uses of comparative analysis are to suggest hypotheses and to highlight differences, not to draw conclusions. Comparison can suggest the presence or the influence of variables that are not readily apparent in the current situation, or stimulate the imagination to conceive explanations or possible outcomes that might not otherwise occur to the analyst. In short, comparison can generate hypotheses that then guide the search for additional information to confirm or refute these hypotheses. It should not, however, form the basis for conclusions unless thorough analysis of both situations has confirmed they are indeed comparable.

Data Immersion

Analysts sometimes describe their work procedure as immersing themselves in the data without fitting the data into any preconceived pattern. At some point an apparent pattern (or answer or explanation) emerges spontaneously, and the analyst then goes back to the data to check how well the data support this judgment. According to this view, objectivity requires the analyst to suppress any personal opinions or preconceptions, so as to be guided only by the “facts” of the case.

To think of analysis in this way overlooks the fact that information cannot speak for itself. The significance of information is always a joint function of the nature of the information and the context in which it is interpreted. The context is provided by the analyst in the form of a set of assumptions and expectations concerning human and organizational behavior. These preconceptions are critical determinants of which information is considered relevant and how it is interpreted.

Analysis begins when the analyst consciously inserts himself or herself into the process to select, sort, and organize information. This selection and organization can only be accomplished according to conscious or subconscious assumptions and preconceptions.

In research to determine how physicians make medical diagnoses, the doctors who comprised the test subjects were asked to describe their analytical strategies. Those who stressed thorough collection of data as their principal analytical method were significantly less accurate in their diagnoses than those who described themselves as following other analytical strategies such as identifying and testing hypotheses.

Relationships Among Strategies

No one strategy is necessarily better than the others. In order to generate all relevant hypotheses and make maximum use of all potentially relevant information, it would be desirable to employ all three strategies at the early hypothesis generation phase of a research project. Unfortunately, analysts commonly lack the inclination or time to do so.

Differences in analytical strategy may cause fundamental differences in perspective between intelligence analysts and some of the policymakers for whom they write. Higher level officials who are not experts on the subject at issue use far more theory and comparison and less situational logic than intelligence analysts. Any policymaker or other senior manager who lacks the knowledge base of the specialist and does not have time for detail must, of necessity, deal with broad generalizations. Many decisions must be made, with much less time to consider each of them than is available to the intelligence analyst. This requires the policymaker to take a more conceptual approach, to think in terms of theories, models, or analogies that summarize large amounts of detail. Whether this represents sophistication or oversimplification depends upon the individual case and, perhaps, whether one agrees or disagrees with the judgments made. In any event, intelligence analysts would do well to take this phenomenon into account when writing for their consumers.

Strategies for Choice Among Hypotheses

A systematic analytical process requires selection among alternative hypotheses, and it is here that analytical practice often diverges significantly from the ideal and from the canons of scientific method. The ideal is to generate a full set of hypotheses, systematically evaluate each hypothesis, and then identify the hypothesis that provides the best fit to the data.

In practice, other strategies are commonly employed. Alexander George has identified a number of less-than-optimal strategies for making decisions in the face of incomplete information and multiple, competing values and goals. While George conceived of these strategies as applicable to how decisionmakers choose among alternative policies, most also apply to how intelligence analysts might decide among alternative analytical hypotheses.

The relevant strategies George identified are:

  • “Satisficing”–selecting the first identified alternative that appears “good enough” rather than examining all alternatives to determine which is “best.”
  • Incrementalism–focusing on a narrow range of alternatives representing marginal change, without considering the need for dramatic change from an existing position.
  • Consensus–opting for the alternative that will elicit the greatest agreement and support. Simply telling the boss what he or she wants to hear is one version of this.
  • Reasoning by analogy–choosing the alternative that appears most likely to avoid some previous error or to duplicate a previous success.
  • Relying on a set of principles or maxims that distinguish a “good” from a “bad” alternative.

“Satisficing”

I would suggest, based on personal experience and discussions with analysts, that most analysis is conducted in a manner very similar to the satisficing mode (selecting the first identified alternative that appears “good enough”). The analyst identifies what appears to be the most likely hypothesis–that is, the tentative estimate, explanation, or description of the situation that appears most accurate.

This approach has three weaknesses: the selective perception that results from focus on a single hypothesis, failure to generate a complete set of competing hypotheses, and a focus on evidence that confirms rather than disconfirms hypotheses. Each of these is discussed below.

Selective Perception. Tentative hypotheses serve a useful function in helping analysts select, organize, and manage information. They narrow the scope of the problem so that the analyst can focus efficiently on data that are most relevant and important. The hypotheses serve as organizing frameworks in working memory and thus facilitate retrieval of information from memory. In short, they are essential elements of the analytical process. But their functional utility also entails some cost, because a hypothesis functions as a perceptual filter. Analysts, like people in general, tend to see what they are looking for and to overlook that which is not specifically included in their search strategy. They tend to limit the processed information to that which is relevant to the current hypothesis. If the hypothesis is incorrect, information may be lost that would suggest a new or modified hypothesis.

This difficulty can be overcome by the simultaneous consideration of multiple hypotheses. That approach has the advantage of focusing attention on those few items of evidence that have the greatest diagnostic value in distinguishing among the validity of competing hypotheses. Most evidence is consistent with several different hypotheses, and this fact is easily overlooked when analysts focus on only one hypothesis at a time–especially if their focus is on seeking to confirm rather than disprove what appears to be the most likely answer.

Failure To Generate Appropriate Hypotheses. If tentative hypotheses determine the criteria for searching for information and judging its relevance, it follows that one may overlook the proper answer if it is not encompassed within the several hypotheses being considered. Research on hypothesis generation suggests that performance on this task is woefully inadequate.

Analysts need to take more time to develop a full set of competing hypotheses, using all three of the previously discussed strategies–theory, situational logic, and comparison.

Failure To Consider Diagnosticity of Evidence. In the absence of a complete set of alternative hypotheses, it is not possible to evaluate the “diagnosticity” of evidence. Unfortunately, many analysts are unfamiliar with the concept of diagnosticity of evidence. It refers to the extent to which any item of evidence helps the analyst determine the relative likelihood of alternative hypotheses.

Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of the various hypotheses. If an item of evidence seems consistent with all the hypotheses, it may have no diagnostic value at all. It is a common experience to discover that most available evidence really is not very helpful, as it can be reconciled with all the hypotheses.
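
Diagnosticity can be made precise with a small Bayesian calculation. The hypotheses and probabilities below are invented purely for illustration; the point is that evidence roughly equally likely under every hypothesis leaves the relative likelihoods unchanged.

```python
def update(priors, likelihoods):
    """Posterior probabilities across hypotheses after one item of evidence."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: round(p / total, 2) for h, p in joint.items()}

priors = {"attack": 0.5, "bluff": 0.5}

# Non-diagnostic item: troop movements expected under either hypothesis.
print(update(priors, {"attack": 0.9, "bluff": 0.9}))
# {'attack': 0.5, 'bluff': 0.5} -- no change, zero diagnostic value

# Diagnostic item: far more likely if an attack is actually coming.
print(update(priors, {"attack": 0.8, "bluff": 0.2}))
# {'attack': 0.8, 'bluff': 0.2} -- shifts the relative likelihoods
```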

Failure To Reject Hypotheses

Scientific method is based on the principle of rejecting hypotheses, while tentatively accepting only those hypotheses that cannot be refuted. Intuitive analysis, by comparison, generally concentrates on confirming a hypothesis and commonly accords more weight to evidence supporting a hypothesis than to evidence that weakens it. Ideally, the reverse would be true. While analysts usually cannot apply the statistical procedures of scientific methodology to test their hypotheses, they can and should adopt the conceptual strategy of seeking to refute rather than confirm hypotheses.

There are two aspects to this problem: people do not naturally seek disconfirming evidence, and when such evidence is received it tends to be discounted. If there is any question about the former, consider how often people test their political and religious beliefs by reading newspapers and books representing an opposing viewpoint.

Apart from the psychological pitfalls involved in seeking confirmatory evidence, an important logical point also needs to be considered. The logical reasoning underlying the scientific method of rejecting hypotheses is that “…no confirming instance of a law is a verifying instance, but that any disconfirming instance is a falsifying instance.”

In other words, a hypothesis can never be proved by the enumeration of even a large body of evidence consistent with that hypothesis, because the same body of evidence may also be consistent with other hypotheses. A hypothesis may be disproved, however, by citing a single item of evidence that is incompatible with it.

Thus the validity of a hypothesis can be tested only by seeking to disprove rather than to confirm it.
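
The number-sequence experiment referred to below illustrates this asymmetry, and its logic is easy to demonstrate in code. The specific rules here are assumptions chosen for illustration: the true rule is broader than the analyst's favored hypothesis, so confirming tests always succeed and settle nothing.

```python
def true_rule(seq):          # unknown to the analyst: any ascending numbers
    return list(seq) == sorted(seq)

def hypothesis(seq):         # the favored guess: ascending EVEN numbers
    return true_rule(seq) and all(n % 2 == 0 for n in seq)

# Confirmation strategy: every test fits the hypothesis, every test "works".
for seq in [(2, 4, 6), (10, 20, 30), (4, 8, 12)]:
    assert hypothesis(seq) and true_rule(seq)   # agreement teaches nothing

# Falsification strategy: one test chosen to violate the hypothesis.
probe = (1, 3, 5)
print(true_rule(probe), hypothesis(probe))      # True False
# The rule accepts what the hypothesis rejects: the hypothesis was too narrow,
# and no amount of confirming evidence would ever have revealed that.
```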

Consider lists of early warning indicators, for example. They are designed to be indicative of an impending attack. Very many of them, however, are also consistent with the hypothesis that military movements are a bluff to exert diplomatic pressure and that no military action will be forthcoming. When analysts seize upon only one of these hypotheses and seek evidence to confirm it, they will often be led astray.

The evidence available to the intelligence analyst is in one important sense different from the evidence available to test subjects asked to infer the number sequence rule. The intelligence analyst commonly deals with problems in which the evidence has only a probabilistic relationship to the hypotheses being considered. Thus it is seldom possible to eliminate any hypothesis entirely, because the most one can say is that a given hypothesis is unlikely given the nature of the evidence, not that it is impossible.

This weakens the conclusions that can be drawn from a strategy aimed at eliminating hypotheses, but it does not in any way justify a strategy aimed at confirming them.

Conclusion

There are many detailed assessments of intelligence failures, but few comparable descriptions of intelligence successes. In reviewing the literature on intelligence successes, Frank Stech found many examples of success but only three accounts that provide sufficient methodological details to shed light on the intellectual processes and methods that contributed to the successes.

Chapter 5
Do You Really Need More Information?

The difficulties associated with intelligence analysis are often attributed to the inadequacy of available information. Thus the US Intelligence Community invests heavily in improved intelligence collection systems while managers of analysis lament the comparatively small sums devoted to enhancing analytical resources, improving analytical methods, or gaining better understanding of the cognitive processes involved in making analytical judgments. This chapter questions the often-implicit assumption that lack of information is the principal obstacle to accurate intelligence judgments.

Using experts in a variety of fields as test subjects, experimental psychologists have examined the relationship between the amount of information available to the experts, the accuracy of judgments they make based on this information, and the experts’ confidence in the accuracy of these judgments. The word “information,” as used in this context, refers to the totality of material an analyst has available to work with in making a judgment.

Key findings from this research are:

  • Once an experienced analyst has the minimum information necessary to make an informed judgment, obtaining additional information generally does not improve the accuracy of his or her estimates. Additional information does, however, lead the analyst to become more confident in the judgment, to the point of overconfidence.
  • Experienced analysts have an imperfect understanding of what information they actually use in making judgments. They are unaware of the extent to which their judgments are determined by a few dominant factors, rather than by the systematic integration of all available information. Analysts actually use much less of the available information than they think they do.

To interpret the disturbing but not surprising findings from these experiments, it is necessary to consider four different types of information and discuss their relative value in contributing to the accuracy of analytical judgments. It is also helpful to distinguish analysis in which results are driven by the data from analysis that is driven by the conceptual framework employed to interpret the data.

Understanding the complex relationship between amount of information and accuracy of judgment has implications for both the management and conduct of intelligence analysis. Such an understanding suggests analytical procedures and management initiatives that may indeed contribute to more accurate analytical judgments. It also suggests that resources needed to attain a better understanding of the entire analytical process might profitably be diverted from some of the more costly intelligence collection programs.

Modeling Expert Judgment

A significant question concerns the extent to which analysts possess an accurate understanding of their own mental processes. How good is their insight into how they actually weight evidence in making judgments? For each situation to be analyzed, they have an implicit “mental model” consisting of beliefs and assumptions as to which variables are most important and how they are related to each other. If analysts have good insight into their own mental model, they should be able to identify and describe the variables they have considered most important in making judgments.

There is strong experimental evidence, however, that such self-insight is usually faulty. The expert perceives his or her own judgmental process, including the number of different kinds of information taken into account, as being considerably more complex than is in fact the case. Experts overestimate the importance of factors that have only a minor impact on their judgment and underestimate the extent to which their decisions are based on a few major variables. In short, people’s mental models are simpler than they think, and the analyst is typically unaware not only of which variables should have the greatest influence, but also which variables actually are having the greatest influence.
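
One way such studies proceed, in rough outline, is to regress an expert's judgments on the cues available and compare the recovered weights with the expert's own account of what mattered. The sketch below uses synthetic data invented solely to show the mechanics.

```python
import numpy as np

rng = np.random.default_rng(0)
cues = rng.normal(size=(50, 6))   # 50 cases, each described by 6 variables

# Simulate an "expert" who believes all six variables matter but whose
# ratings effectively track only the first two, plus a little noise.
implicit_weights = np.array([1.0, 0.8, 0.0, 0.0, 0.0, 0.0])
judgments = cues @ implicit_weights + rng.normal(scale=0.1, size=50)

# Recover the weights the judgments actually imply.
fitted, *_ = np.linalg.lstsq(cues, judgments, rcond=None)
print(np.round(fitted, 2))
# Only two coefficients come out large: the implicit model is far simpler
# than the expert's own description of it.
```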

When Does New Information Affect Our Judgment?

To evaluate the relevance and significance of these experimental findings in the context of intelligence analysts’ experiences, it is necessary to distinguish four types of additional information that an analyst might receive:

  • Additional detail about variables already included in the analysis: Much raw intelligence reporting falls into this category. One would not expect such supplementary information to affect the overall accuracy of the analyst’s judgment, and it is readily understandable that further detail that is consistent with previous information increases the analyst’s confidence. Analyses for which considerable depth of detail is available to support the conclusions tend to be more persuasive to their authors as well as to their readers.
  • Identification of additional variables: Information on additional variables permits the analyst to take into account other factors that may affect the situation. This is the kind of additional information used in the horserace handicapper experiment. Other experiments have employed some combination of additional variables and additional detail on the same variables. The finding that judgments are based on a few critical variables rather than on the entire spectrum of evidence helps to explain why information on additional variables does not normally improve predictive accuracy. Occasionally, in situations when there are known gaps in an analyst’s understanding, a single report concerning some new and previously unconsidered factor–for example, an authoritative report on a policy decision or planned coup d’etat–will have a major impact on the analyst’s judgment. Such a report would fall into one of the next two categories of new information.
  • Information concerning the value attributed to variables already included in the analysis: An example of such information would be the horserace handicapper learning that a horse he thought would carry 110 pounds will actually carry only 106. Current intelligence reporting tends to deal with this kind of information; for example, an analyst may learn that a dissident group is stronger than had been anticipated. New facts affect the accuracy of judgments when they deal with changes in variables that are critical to the estimates. Analysts’ confidence in judgments based on such information is influenced by their confidence in the accuracy of the information as well as by the amount of information.
  • Information concerning which variables are most important and how they relate to each other: Knowledge and assumptions as to which variables are most important and how they are interrelated comprise the mental model that tells the analyst how to analyze the data received. Explicit investigation of such relationships is one factor that distinguishes systematic research from current intelligence reporting and raw intelligence. In the context of the horserace handicapper experiment, for example, handicappers had to select which variables to include in their analysis. Is weight carried by a horse more, or less, important than several other variables that affect a horse’s performance? Any information that affects this judgment influences how the handicapper analyzes the available data; that is, it affects his mental model.

The accuracy of an analyst’s judgment depends upon both the accuracy of the mental model (the fourth type of information discussed above) and the accuracy of the values attributed to the key variables in the model (the third type of information discussed above). Additional detail on variables already in the analyst’s mental model and information on other variables that do not in fact have a significant influence on the judgment (the first and second types of information) have a negligible impact on accuracy, but form the bulk of the raw material analysts work with. These kinds of information increase confidence because the conclusions seem to be supported by such a large body of data.

This discussion of types of new information is the basis for distinguishing two types of analysis: data-driven analysis and conceptually driven analysis.

Data-Driven Analysis

In this type of analysis, accuracy depends primarily upon the accuracy and completeness of the available data. If one makes the reasonable assumption that the analytical model is correct and the further assumption that the analyst properly applies this model to the data, then the accuracy of the analytical judgment depends entirely upon the accuracy and completeness of the data.

Analyzing the combat readiness of a military division is an example of data-driven analysis. In analyzing combat readiness, the rules and procedures to be followed are relatively well established.

Most elements of the mental model can be made explicit so that other analysts may be taught to understand and follow the same analytical procedures and arrive at the same or similar results. There is broad, though not necessarily universal, agreement on what the appropriate model is. There are relatively objective standards for judging the quality of analysis, inasmuch as the conclusions follow logically from the application of the agreed-upon model to the available data.
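
A hypothetical sketch of what this means in practice: once the model is explicit, applying it is mechanical, and different analysts reach the same answer from the same data. The checklist variables and thresholds below are invented for illustration, not an actual readiness standard.

```python
# Invented thresholds standing in for an agreed-upon readiness model.
THRESHOLDS = {
    "personnel_fill": 0.90,
    "equipment_operational": 0.85,
    "training_current": 0.80,
}

def readiness(report):
    """Apply the explicit model to the data; the data alone decide."""
    shortfalls = [k for k, floor in THRESHOLDS.items() if report[k] < floor]
    return "combat ready" if not shortfalls else "not ready: " + ", ".join(shortfalls)

division = {"personnel_fill": 0.93, "equipment_operational": 0.81, "training_current": 0.88}
print(readiness(division))  # not ready: equipment_operational
```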

Conceptually Driven Analysis

Conceptually driven analysis is at the opposite end of the spectrum from data-driven analysis. The questions to be answered do not have neat boundaries, and there are many unknowns. The number of potentially relevant variables and the diverse and imperfectly understood relationships among these variables involve the analyst in enormous complexity and uncertainty.

In the absence of any agreed-upon analytical schema, analysts are left to their own devices. They interpret information with the aid of mental models that are largely implicit rather than explicit. Assumptions concerning political forces and processes in the subject country may not be apparent even to the analyst. Such models are not representative of an analytical consensus. Other analysts examining the same data may well reach different conclusions, or reach the same conclusions but for different reasons. This analysis is conceptually driven, because the outcome depends at least as much upon the conceptual framework employed to analyze the data as it does upon the data itself.

To illustrate further the distinction between data-driven and conceptually driven analysis, it is useful to consider the function of the analyst responsible for current intelligence, especially current political intelligence as distinct from longer term research. The daily routine is driven by the incoming wire service news, embassy cables, and clandestine-source reporting from overseas that must be interpreted for dissemination to consumers throughout the Intelligence Community. Although current intelligence reporting is driven by incoming information, this is not what is meant by data-driven analysis. On the contrary, the current intelligence analyst’s task is often extremely concept-driven. The analyst must provide immediate interpretation of the latest, often unexpected events. Apart from his or her store of background information, the analyst may have no data other than the initial, usually incomplete report. Under these circumstances, interpretation is based upon an implicit mental model of how and why events normally transpire in the country for which the analyst is responsible. Accuracy of judgment depends almost exclusively upon accuracy of the mental model, for there is little other basis for judgment.

Partly because of the nature of human perception and information-processing, beliefs of all types tend to resist change. This is especially true of the implicit assumptions and supposedly self-evident truths that play an important role in forming mental models. Analysts are often surprised to learn that what are to them self-evident truths are by no means self-evident to others, or that self-evident truth at one point in time may be commonly regarded as uninformed assumption 10 years later.

Information that is consistent with an existing mind-set is perceived and processed easily and reinforces existing beliefs. Because the mind strives instinctively for consistency, information that is inconsistent with an existing mental image tends to be overlooked, perceived in a distorted manner, or rationalized to fit existing assumptions and beliefs.

Mosaic Theory of Analysis

Understanding of the analytic process has been distorted by the mosaic metaphor commonly used to describe it. According to the mosaic theory of intelligence, small pieces of information are collected that, when put together like a mosaic or jigsaw puzzle, eventually enable analysts to perceive a clear picture of reality. The analogy suggests that accurate estimates depend primarily upon having all the pieces, that is, upon accurate and relatively complete information. It is important to collect and store the small pieces of information, as these are the raw material from which the picture is made; one never knows when it will be possible for an astute analyst to fit a piece into the puzzle. Part of the rationale for large technical intelligence collection systems is rooted in this mosaic theory.

Insights from cognitive psychology suggest that intelligence analysts do not work this way and that the most difficult analytical tasks cannot be approached in this manner. Analysts commonly find pieces that appear to fit many different pictures. Instead of a picture emerging from putting all the pieces together, analysts typically form a picture first and then select the pieces to fit. Accurate estimates depend at least as much upon the mental model used in forming the picture as upon the number of pieces of the puzzle that have been collected.

A more accurate analogy for how analysis should work is medical diagnosis: the physician observes the symptoms, draws on knowledge of how the body works to form hypotheses that might explain them, collects additional information to test those hypotheses, and only then makes a diagnosis. While analysis and collection are both important, the medical analogy attributes more value to analysis and less to collection than the mosaic metaphor.

Conclusions

To the leaders and managers of intelligence who seek an improved intelligence product, these findings offer a reminder that this goal can be achieved by improving analysis as well as collection. There appear to be inherent practical limits on how much can be gained by efforts to improve collection. By contrast, an open and fertile field exists for imaginative efforts to improve analysis.

These efforts should focus on improving the mental models employed by analysts to interpret information and the analytical processes used to evaluate it. While this will be difficult to achieve, it is so critical to effective intelligence analysis that even small improvements could have large benefits. Specific recommendations are included in the next three chapters and in Chapter 14, “Improving Intelligence Analysis.”

Chapter 6

Keeping an Open Mind

Minds are like parachutes. They only function when they are open. After reviewing how and why thinking gets channeled into mental ruts, this chapter looks at mental tools to help analysts keep an open mind, question assumptions, see different perspectives, develop new ideas, and recognize when it is time to change their minds.

A new idea is the beginning, not the end, of the creative process. It must jump over many hurdles before being embraced as an organizational product or solution. The organizational climate plays a crucial role in determining whether new ideas bubble to the surface or are suppressed.

                         *******************

Major intelligence failures are usually caused by failures of analysis, not failures of collection. Relevant information is discounted, misinterpreted, ignored, rejected, or overlooked because it fails to fit a prevailing mental model or mind-set. The “signals” are lost in the “noise.”

A mind-set is neither good nor bad. It is unavoidable. It is, in essence, a distillation of all that analysts think they know about a subject. It forms a lens through which they perceive the world, and once formed, it resists change.

Understanding Mental Ruts

Chapter 3 on memory suggested thinking of information in memory as somehow interconnected like a massive, multidimensional spider web. It is possible to connect any point within this web to any other point. When analysts connect the same points frequently, they form a path that makes it easier to take that route in the future. Once they start thinking along certain channels, they tend to continue thinking the same way and the path may become a rut.

Talking about breaking mind-sets, or creativity, or even just openness to new information is really talking about spinning new links and new paths through the web of memory. These are links among facts and concepts, or between schemata for organizing facts or concepts, that were not directly connected or only weakly connected before.

Problem-Solving Exercise

Intelligence analysis is too often limited by unconscious, self-imposed constraints or “cages of the mind”: boundaries the analyst assumes are part of the problem when, in fact, nothing in the problem requires them.

You do not need to be constrained by conventional wisdom. It is often wrong. You do not necessarily need to be constrained by existing policies. They can sometimes be changed if you show a good reason for doing so. You do not necessarily need to be constrained by the specific analytical requirement you were given. The policymaker who originated the requirement may not have thought through his or her needs or the requirement may be somewhat garbled as it passes down through several echelons to you to do the work. You may have a better understanding than the policymaker of what he or she needs, or should have, or what is possible to do. You should not hesitate to go back up the chain of command with a suggestion for doing something a little different than what was asked for.

Mental Tools

People use various physical tools such as a hammer and saw to enhance their capacity to perform various physical tasks. People can also use simple mental tools to enhance their ability to perform mental tasks. These tools help overcome limitations in human mental machinery for perception, memory, and inference.

Questioning Assumptions

It is a truism that analysts need to question their assumptions. Experience tells us that when analytical judgments turn out to be wrong, it usually was not because the information was wrong. It was because an analyst made one or more faulty assumptions that went unchallenged.

Sensitivity Analysis. One approach is to do an informal sensitivity analysis. How sensitive is the ultimate judgment to changes in any of the major variables or driving forces in the analysis? Those linchpin assumptions that drive the analysis are the ones that need to be questioned. Analysts should ask themselves what could happen to make any of these assumptions out of date, and how they can know this has not already happened. They should try to disprove their assumptions rather than confirm them. If an analyst cannot think of anything that would cause a change of mind, his or her mind-set may be so deeply entrenched that the analyst cannot see the conflicting evidence. One advantage of the competing hypotheses approach discussed in Chapter 8 is that it helps identify the linchpin assumptions that swing a conclusion in one direction or another.
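
A rough way to make this concrete is to treat a judgment as a weighted combination of assumptions and then flip each assumption to see which one swings the conclusion most. The Python sketch below is a toy illustration only; every assumption name, confidence value, and weight is invented, and the point is simply that the assumption producing the largest swing is the linchpin most worth trying to disprove.

    # Toy sensitivity analysis: which assumption, if wrong, would most
    # change the overall judgment? All names and numbers are invented.
    assumptions = {
        "regime_is_stable": 0.8,         # confidence in each assumption (0..1)
        "economy_is_growing": 0.6,
        "military_backs_leader": 0.9,
    }
    weights = {
        "regime_is_stable": 0.5,         # how strongly each drives the judgment
        "economy_is_growing": 0.2,
        "military_backs_leader": 0.3,
    }

    def judgment(a):
        """Overall likelihood of the favored conclusion as a weighted sum."""
        return sum(weights[k] * v for k, v in a.items())

    baseline = judgment(assumptions)
    for name in assumptions:
        # Flip one assumption at a time and measure the swing it causes.
        flipped = dict(assumptions, **{name: 1.0 - assumptions[name]})
        swing = abs(judgment(flipped) - baseline)
        print(f"{name}: swing of {swing:.2f}")   # largest swing = linchpin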

Identify Alternative Models. Analysts should try to identify alternative models, conceptual frameworks, or interpretations of the data by seeking out individuals who disagree with them rather than those who agree. Most people do not do that very often. It is much more comfortable to talk with people in one’s own office who share the same basic mind-set. There are a few things that can be done as a matter of policy, and that have been done in some offices in the past, to help overcome this tendency.

At least one Directorate of Intelligence component, for example, has had a peer review process in which none of the reviewers was from the branch that produced the report. The rationale for this was that an analyst’s immediate colleagues and supervisor(s) are likely to share a common mind-set. Hence these are the individuals least likely to raise fundamental issues challenging the validity of the analysis. To avoid this mind-set problem, each research report was reviewed by a committee of three analysts from other branches handling other countries or issues. None of them had specialized knowledge of the subject. They were, however, highly accomplished analysts. Precisely because they had not been immersed in the issue in question, they were better able to identify hidden assumptions and other alternatives, and to judge whether the analysis adequately supported the conclusions.

Be Wary of Mirror Images. One kind of assumption an analyst should always recognize and question is mirror-imaging–filling gaps in the analyst’s own knowledge by assuming that the other side is likely to act in a certain way because that is how the US would act under similar circumstances. To say, “if I were a Russian intelligence officer …” or “if I were running the Indian Government …” is mirror-imaging. Analysts may have to do that when they do not know how the Russian intelligence officer or the Indian Government is really thinking. But mirror-imaging leads to dangerous assumptions, because people in other cultures do not think the way we do.

Failure to understand that others perceive their national interests differently from the way we perceive those interests is a constant source of problems in intelligence analysis.

Seeing Different Perspectives

Another problem area is looking at familiar data from a different perspective. If you play chess, you know you can see your own options pretty well. It is much more difficult to see all the pieces on the board as your opponent sees them, and to anticipate how your opponent will react to your move. That is the situation analysts are in when they try to see how the US Government’s actions look from another country’s perspective. Analysts constantly have to move back and forth, first seeing the situation from the US perspective and then from the other country’s perspective. This is difficult to do.

Thinking Backwards. One technique for exploring new ground is thinking backwards. As an intellectual exercise, start with an assumption that some event you did not expect has actually occurred. Then, put yourself into the future, looking back to explain how this could have happened. Think what must have happened six months or a year earlier to set the stage for that outcome, what must have happened six months or a year before that to prepare the way, and so on back to the present.

Crystal Ball. The crystal ball approach works in much the same way as thinking backwards. Imagine that a “perfect” intelligence source (such as a crystal ball) has told you a certain assumption is wrong. You must then develop a scenario to explain how this could be true. If you can develop a plausible scenario, this suggests your assumption is open to some question.

Role playing. Role playing is commonly used to overcome constraints and inhibitions that limit the range of one’s thinking. Playing a role changes “where you sit.” It also gives one license to think and act differently. Simply trying to imagine how another leader or country will think and react, which analysts do frequently, is not role playing. One must actually act out the role and become, in a sense, the person whose role is assumed. It is only “living” the role that breaks an analyst’s normal mental set and permits him or her to relate facts and ideas to each other in ways that differ from habitual patterns. An analyst cannot be expected to do this alone; some group interaction is required, with different analysts playing different roles, usually in the context of an organized simulation or game.

Just one notional intelligence report is sufficient to start the action in the game. In my experience, it is possible to have a useful political game in just one day with almost no investment in preparatory work.

Devil’s Advocate. A devil’s advocate is someone who defends a minority point of view. He or she may not necessarily agree with that view, but may choose or be assigned to represent it as strenuously as possible. The goal is to expose conflicting interpretations and show how alternative assumptions and images make the world look different. It often requires time, energy, and commitment to see how the world looks from a different perspective.

Imagine that you are the boss at a US facility overseas and are worried about the possibility of a terrorist attack. A standard staff response would be to review existing measures and judge their adequacy. There might well be pressure–subtle or otherwise–from those responsible for such arrangements to find them satisfactory. An alternative or supplementary approach would be to name an individual or small group as a devil’s advocate assigned to develop actual plans for launching such an attack. The assignment to think like a terrorist liberates the designated person(s) to think unconventionally and be less inhibited about finding weaknesses in the system that might embarrass colleagues, because uncovering any such weaknesses is the assigned task.

Recognizing When To Change Your Mind

As a general rule, people are too slow to change an established view, as opposed to being too willing to change. The human mind is conservative. It resists change. Assumptions that worked well in the past continue to be applied to new situations long after they have become outmoded.

Learning from Surprise. A study of senior managers in industry identified how some successful managers counteract this conservative bent. They do it, according to the study:

By paying attention to their feelings of surprise when a particular fact does not fit their prior understanding, and then by highlighting rather than denying the novelty. Although surprise made them feel uncomfortable, it made them take the cause [of the surprise] seriously and inquire into it…. Rather than deny, downplay, or ignore disconfirmation [of their prior view], successful senior managers often treat it as friendly and in a way cherish the discomfort surprise creates. As a result, these managers often perceive novel situations early on and in a frame of mind relatively undistorted by hidebound notions.

Analysts should keep a record of unexpected events and think hard about what they might mean, not disregard them or explain them away. It is important to consider whether these surprises, however small, are consistent with some alternative hypothesis. One unexpected event may be easy to disregard, but a pattern of surprises may be the first clue that your understanding of what is happening requires some adjustment, is at best incomplete, and may be quite wrong.

Strategic Assumptions vs. Tactical Indicators. Abraham Ben-Zvi analyzed five cases of intelligence failure to foresee a surprise attack. He made a useful distinction between estimates based on strategic assumptions and estimates based on tactical indications.

Tactical indicators are specific reports of preparations or intent to initiate hostile action or, in the recent Indian case, reports of preparations for a nuclear test.

Ben-Zvi concluded that tactical indicators should be given increased weight in the decisionmaking process.

At a minimum, the emergence of tactical indicators that contradict our strategic assumption should trigger a higher level of intelligence alert. It may indicate that a bigger surprise is on the way.

Stimulating Creative Thinking

Imagination and creativity play important roles in intelligence analysis as in most other human endeavors. Intelligence judgments require the ability to imagine possible causes and outcomes of a current situation. All possible outcomes are not given. The analyst must think of them by imagining scenarios that explicate how they might come about. Similarly, imagination as well as knowledge is required to reconstruct how a problem appears from the viewpoint of a foreign government. Creativity is required to question things that have long been taken for granted. The fact that apples fall from trees was well known to everyone. Newton’s creative genius was to ask “why?” Intelligence analysts, too, are expected to raise new questions that lead to the identification of previously unrecognized relationships or to possible outcomes that had not previously been foreseen.

A creative analytical product shows a flair for devising imaginative or innovative–but also accurate and effective–ways to fulfill any of the major requirements of analysis: gathering information, analyzing information, documenting evidence, and/or presenting conclusions. Tapping unusual sources of data, asking new questions, applying unusual analytic methods, and developing new types of products or new ways of fitting analysis to the needs of consumers are all examples of creative activity.

The old view that creativity is something one is born with, and that it cannot be taught or developed, is largely untrue. While native talent, per se, is important and may be immutable, it is possible to learn to employ one’s innate talents more productively. With understanding, practice, and conscious effort, analysts can learn to produce more imaginative, innovative, creative work.

There is a large body of literature on creativity and how to stimulate it. At least a half-dozen different methods have been developed for teaching, facilitating, or liberating creative thinking. All the methods for teaching or facilitating creativity are based on the assumption that the process of thinking can be separated from the content of thought. One learns mental strategies that can be applied to any subject.

It is not our purpose here to review commercially available programs for enhancing creativity. Such programmatic approaches can be applied more meaningfully to problems of new product development, advertising, or management than to intelligence analysis. It is relevant, however, to discuss several key principles and techniques that these programs have in common, and that individual intelligence analysts or groups of analysts can apply in their work.

Intelligence analysts must generate ideas concerning potential causes or explanations of events, policies that might be pursued or actions taken by a foreign government, possible outcomes of an existing situation, and variables that will influence which outcome actually comes to pass. Analysts also need help to jog them out of mental ruts, to stimulate their memories and imaginations, and to perceive familiar events from a new perspective.

Deferred Judgment. The principle of deferred judgment is undoubtedly the most important. The idea-generation phase of analysis should be separated from the idea-evaluation phase, with evaluation deferred until all possible ideas have been brought out. This approach runs contrary to the normal procedure of thinking of ideas and evaluating them concurrently. Stimulating the imagination and critical thinking are both important, but they do not mix well. A judgmental attitude dampens the imagination, whether it manifests itself as self-censorship of one’s own ideas or fear of critical evaluation by colleagues or supervisors. Idea generation should be a freewheeling, unconstrained, uncritical process.

New ideas are, by definition, unconventional, and therefore likely to be suppressed, either consciously or unconsciously, unless they are born in a secure and protected environment. Critical judgment should be suspended until after the idea-generation stage of analysis has been completed. A series of ideas should be written down and then evaluated later. This applies to idea searching by individuals as well as brainstorming in a group. Get all the ideas out on the table before evaluating any of them.

Quantity Leads to Quality. A second principle is that quantity of ideas eventually leads to quality. This is based on the assumption that the first ideas that come to mind will be those that are most common or usual. It is necessary to run through these conventional ideas before arriving at original or different ones. People have habitual ways of thinking, ways that they continue to use because they have seemed successful in the past. It may well be that these habitual responses, the ones that come first to mind, are the best responses and that further search is unnecessary. In looking for usable new ideas, however, one should seek to generate as many ideas as possible before evaluating any of them.

No Self-Imposed Constraints. A third principle is that thinking should be allowed–indeed encouraged–to range as freely as possible. It is necessary to free oneself from self-imposed constraints, whether they stem from analytical habit, limited perspective, social norms, emotional blocks, or whatever.

Cross-Fertilization of Ideas. A fourth principle of creative problem-solving is that cross-fertilization of ideas is important and necessary. Ideas should be combined with each other to form more and even better ideas. If creative thinking involves forging new links between previously unrelated or weakly related concepts, then creativity will be stimulated by any activity that brings more concepts into juxtaposition with each other in fresh ways. Interaction with other analysts is one basic mechanism for this. As a general rule, people generate more creative ideas when teamed up with others; they help to build and develop each other’s ideas. Personal interaction stimulates new associations between ideas. It also induces greater effort and helps maintain concentration on the task.

These favorable comments on group processes are not meant to encompass standard committee meetings or coordination processes that force consensus based on the lowest common denominator of agreement. My positive words about group interaction apply primarily to brainstorming sessions aimed at generating new ideas and in which, according to the first principle discussed above, all criticism and evaluation are deferred until after the idea generation stage is completed.

Thinking things out alone also has its advantages: individual thought tends to be more structured and systematic than interaction within a group. Optimal results come from alternating between individual thinking and team effort, using group interaction to generate ideas that supplement individual thought. A diverse group is clearly preferable to a homogeneous one. Some group participants should be analysts who are not close to the problem, inasmuch as their ideas are more likely to reflect different insights.

Idea Evaluation. All creativity techniques are concerned with stimulating the flow of ideas. There are no comparable techniques for determining which ideas are best. The procedures are, therefore, aimed at idea generation rather than idea evaluation. The same procedures do aid in evaluation, however, in the sense that ability to generate more alternatives helps one see more potential consequences, repercussions, and effects that any single idea or action might entail.

Organizational Environment

A new idea is not the end product of the creative process. Rather, it is the beginning of what is sometimes a long and tortuous process of translating an idea into an innovative product. The idea must be developed, evaluated, and communicated to others, and this process is influenced by the organizational setting in which it transpires. The potentially useful new idea must pass over a number of hurdles before it is embraced as an organizational product.

The role of organizational environment has been examined empirically. In one notable study, psychologist Frank Andrews investigated the relationship between the creative ability of scientists and the innovativeness of the research they directed, using data on 115 research projects in the field of medical sociology together with questionnaires about each scientist’s work environment. A panel of judges composed of the leading scientists in the field of medical sociology was asked to evaluate the principal published results from each of the 115 research projects. Judges evaluated the research results on the basis of productivity and innovation.

Productivity was defined as the “extent to which the research represents an addition to knowledge along established lines of research or as extensions of previous theory.”

Innovativeness was defined as “additions to knowledge through new lines of research or the development of new theoretical statements of findings that were not explicit in previous theory.” Innovation, in other words, involved raising new questions and developing new approaches to the acquisition of knowledge, as distinct from working productively within an already established framework. This same definition applies to innovation in intelligence analysis.

Andrews found virtually no relationship between the scientists’ creative ability and the innovativeness of their research. (There was also no relationship between level of intelligence and innovativeness.) Those who scored high on tests of creative ability did not necessarily receive high ratings from the judges evaluating the innovativeness of their work. A possible explanation is that either creative ability or innovation, or both, were not measured accurately, but Andrews argues persuasively for another view. Various social and psychological factors have so great an effect on the steps needed to translate creative ability into an innovative research product that there is no measurable effect traceable to creative ability alone. In order to document this conclusion, Andrews analyzed data from the questionnaires in which the scientists described their work environment.

Andrews found that scientists possessing more creative ability produced more innovative work only under the following favorable conditions:

  • When the scientist perceived himself or herself as responsible for initiating new activities. The opportunity for innovation, and the encouragement of it, are–not surprisingly–important variables.
  • When the scientist had considerable control over decisionmaking concerning his or her research program–in other words, the freedom to set goals, hire research assistants, and expend funds. Under these circumstances, a new idea is less likely to be snuffed out before it can be developed into a creative and useful product.
  • When the scientist felt secure and comfortable in his or her professional role. New ideas are often disruptive, and pursuing them carries the risk of failure. People are more likely to advance new ideas if they feel secure in their positions.
  • When the scientist’s administrative superior “stayed out of the way.” Research is likely to be more innovative when the superior limits himself or herself to support and facilitation rather than direct involvement.
  • When the project was relatively small with respect to the number of people involved, budget, and duration. Small size promotes flexibility, and this in turn is more conducive to creativity.
  • When the scientist engaged in other activities, such as teaching or administration, in addition to the research project. Other work may provide useful stimulation or help one identify opportunities for developing or implementing new ideas. Some time away from the task, or an incubation period, is generally recognized as part of the creative process.

The importance of any one of these factors was not very great, but their impact was cumulative. The presence of all or most of these conditions exerted a strongly favorable influence on the creative process. Conversely, the absence of these conditions made it quite unlikely that even highly creative scientists could develop their new ideas into innovative research results. Under unfavorable conditions, the most creatively inclined scientists produced even less innovative work than their less imaginative colleagues, presumably because they experienced greater frustration with their work environment.

There are, of course, exceptions to the rule. Some creativity occurs even in the face of intense opposition. A hostile environment can be stimulating, enlivening, and challenging. Some people gain satisfaction from viewing themselves as lonely fighters in the wilderness, but when it comes to conflict between a large organization and a creative individual within it, the organization generally wins.

Recognizing the role of organizational environment in stimulating or suppressing creativity points the way to one obvious set of measures to enhance creative organizational performance. Managers of analysis, from first-echelon supervisors to the Director of Central Intelligence, should take steps to strengthen and broaden the perception among analysts that new ideas are welcome. This is not easy; creativity implies criticism of that which already exists. It is, therefore, inherently disruptive of established ideas and organizational practices.

Particularly within his or her own office, an analyst needs to enjoy a sense of security, so that partially developed ideas may be expressed and bounced off others as sounding boards with minimal fear of criticism or ridicule for deviating from established orthodoxy. At its inception, a new idea is frail and vulnerable. It needs to be nurtured, developed, and tested in a protected environment before being exposed to the harsh reality of public criticism. It is the responsibility of an analyst’s immediate supervisor and office colleagues to provide this sheltered environment.

Conclusions

Creativity, in the sense of new and useful ideas, is at least as important in intelligence analysis as in any other human endeavor. Procedures to enhance innovative thinking are not new. Creative thinkers have employed them successfully for centuries. The only new elements–and even they may not be new anymore–are the grounding of these procedures in psychological theory to explain how and why they work, and their formalization in systematic creativity programs.

A questioning attitude is one prerequisite to creativity; another is sufficient strength of character to suggest new ideas to others, possibly at the expense of being rejected or even ridiculed on occasion. “The ideas of creative people often lead them into direct conflict with the trends of their time, and they need the courage to be able to stand alone.”

Chapter 7

Structuring Analytical Problems

This chapter discusses various structures for decomposing and externalizing complex analytical problems when we cannot keep all the relevant factors in the forefront of our consciousness at the same time.

Decomposition means breaking a problem down into its component parts. Externalization means getting the problem out of our heads and into some visible form that we can work with.

There are two basic tools for dealing with complexity in analysis–decomposition and externalization.

Decomposition means breaking a problem down into its component parts. That is, indeed, the essence of analysis. Webster’s Dictionary defines analysis as division of a complex whole into its parts or elements.

The spirit of decision analysis is to divide and conquer: Decompose a complex problem into simpler problems, get one’s thinking straight in these simpler problems, paste these analyses together with a logical glue …

Externalization means getting the decomposed problem out of one’s head and down on paper or on a computer screen in some simplified form that shows the main variables, parameters, or elements of the problem and how they relate to each other.

Putting ideas into visible form ensures that they will last. They will lie around for days goading you into having further thoughts. Lists are effective because they exploit people’s tendency to be a bit compulsive–we want to keep adding to them. They let us get the obvious and habitual answers out of the way, so that we can add to the list by thinking of other ideas beyond those that came first to mind. One specialist in creativity has observed that “for the purpose of moving our minds, pencils can serve as crowbars”–just by writing things down and making lists that stimulate new associations.

Problem Structure

Anything that has parts also has a structure that relates these parts to each other. One of the first steps in doing analysis is to determine an appropriate structure for the analytical problem, so that one can then identify the various parts and begin assembling information on them. Because there are many different kinds of analytical problems, there are also many different ways to structure analysis.

Lists such as the pro-and-con lists Benjamin Franklin famously used to weigh decisions are one of the simplest structures. An intelligence analyst might make lists of relevant variables, early warning indicators, alternative explanations, possible outcomes, factors a foreign leader will need to take into account when making a decision, or arguments for and against a given explanation or outcome.

Other tools for structuring a problem include outlines, tables, diagrams, trees, and matrices, with many sub-species of each. For example, trees include decision trees and fault trees. Diagrams include causal diagrams, influence diagrams, flow charts, and cognitive maps.
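
Any of these structures can also be externalized on a computer screen rather than on paper. As a minimal illustration, the Python sketch below represents a decomposed problem as a tree and prints it as an indented outline; the question and factors shown are hypothetical placeholders, not a recommended checklist.

    # A decomposed problem externalized as a simple tree (nested dict).
    # The question and factors below are hypothetical placeholders.
    problem = {
        "Will country X devalue its currency?": {
            "Economic pressure": ["reserves falling?", "trade deficit widening?"],
            "Political will": ["election timing", "leadership statements"],
            "External constraints": ["creditor conditions", "neighbors' policies"],
        }
    }

    def outline(node, depth=0):
        """Print the tree as an indented outline, one line per component."""
        if isinstance(node, dict):
            for key, sub in node.items():
                print("  " * depth + "- " + key)
                outline(sub, depth + 1)
        else:
            for leaf in node:
                print("  " * depth + "- " + leaf)

    outline(problem)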

Chapter 8
Analysis of Competing Hypotheses

Analysis of competing hypotheses, sometimes abbreviated ACH, is a tool to aid judgment on important issues requiring careful weighing of alternative explanations or conclusions. It helps an analyst overcome, or at least minimize, some of the cognitive limitations that make prescient intelligence analysis so difficult to achieve.

ACH is an eight-step procedure grounded in basic insights from cognitive psychology, decision analysis, and the scientific method. It is a surprisingly effective, proven process that helps analysts avoid common analytic pitfalls. Because of its thoroughness, it is particularly appropriate for controversial issues when analysts want to leave an audit trail to show what they considered and how they arrived at their judgment.

When working on difficult intelligence issues, analysts are, in effect, choosing among several alternative hypotheses. Which of several possible explanations is the correct one? Which of several possible outcomes is the most likely one? As previously noted, this book uses the term “hypothesis” in its broadest sense as a potential explanation or conclusion that is to be tested by collecting and presenting evidence.

Analysis of competing hypotheses (ACH) requires an analyst to explicitly identify all the reasonable alternatives and have them compete against each other for the analyst’s favor, rather than evaluating their plausibility one at a time.

The way most analysts go about their business is to pick out what they suspect intuitively is the most likely answer, then look at the available information from the point of view of whether or not it supports this answer. If the evidence seems to support the favorite hypothesis, analysts pat themselves on the back (“See, I knew it all along!”) and look no further. If it does not, they either reject the evidence as misleading or develop another hypothesis and go through the same procedure again.

Simultaneous evaluation of multiple, competing hypotheses is very difficult to do. To retain three to five or even seven hypotheses in working memory and note how each item of information fits into each hypothesis is beyond the mental capabilities of most people. It takes far greater mental agility than listing evidence supporting a single hypothesis that was pre-judged as the most likely answer. It can be accomplished, though, with the help of the simple procedures discussed here. The box below contains a step-by-step outline of the ACH process.

Step 1

Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.

Step-by-Step Outline of Analysis of Competing Hypotheses

  1. Identify the possible hypotheses to be considered. Use a group of analysts with different perspectives to brainstorm the possibilities.
  2. Make a list of significant evidence and arguments for and against each hypothesis.
  3. Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments–that is, identify which items are most helpful in judging the relative likelihood of the hypotheses.
  4. Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.
  5. Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove the hypotheses rather than prove them.
  6. Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.
  7. Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.
  8. Identify milestones for future observation that may indicate events are taking a different course than expected.

It is useful to make a clear distinction between the hypothesis generation and hypothesis evaluation stages of analysis. Step 1 of the recommended analytical process is to identify all hypotheses that merit detailed examination. At this early hypothesis generation stage, it is very useful to bring together a group of analysts with different backgrounds and perspectives. Brainstorming in a group stimulates the imagination and may bring out possibilities that individual members of the group had not thought of. Initial discussion in the group should elicit every possibility, no matter how remote, before judging likelihood or feasibility. Only when all the possibilities are on the table should you then focus on judging them and selecting the hypotheses to be examined in greater detail in subsequent analysis.

Early rejection of unproven, but not disproved, hypotheses biases the subsequent analysis, because one does not then look for the evidence that might support them. Unproven hypotheses should be kept alive until they can be disproved.

Step 2

Make a list of significant evidence and arguments for and against each hypothesis.

In assembling the list of relevant evidence and arguments, these terms should be interpreted very broadly. They refer to all the factors that have an impact on your judgments about the hypotheses. Do not limit yourself to concrete evidence in the current intelligence reporting. Also include your own assumptions or logical deductions about another person’s or group’s or country’s intentions, goals, or standard procedures. These assumptions may generate strong preconceptions as to which hypothesis is most likely. Such assumptions often drive your final judgment, so it is important to include them in the list of “evidence.”

First, list the general evidence that applies to all the hypotheses. Then consider each hypothesis individually, listing factors that tend to support or contradict each one. You will commonly find that each hypothesis leads you to ask different questions and, therefore, to seek out somewhat different evidence.

Step 3

Prepare a matrix with hypotheses across the top and evidence down the side. Analyze the “diagnosticity” of the evidence and arguments–that is, identify which items are most helpful in judging the relative likelihood of alternative hypotheses.

Step 3 is perhaps the most important element of this analytical procedure. It is also the step that differs most from the natural, intuitive approach to analysis, and, therefore, the step you are most likely to overlook or misunderstand.

The procedure for Step 3 is to take the hypotheses from Step 1 and the evidence and arguments from Step 2 and put this information into a matrix format, with the hypotheses across the top and evidence and arguments down the side. This gives an overview of all the significant components of your analytical problem.

Then analyze how each piece of evidence relates to each hypothesis. This differs from the normal procedure, which is to look at one hypothesis at a time in order to consider how well the evidence supports that hypothesis. That will be done later, in Step 5. At this point, in Step 3, take one item of evidence at a time, then consider how consistent that evidence is with each of the hypotheses. Here is how to remember this distinction. In Step 3, you work across the rows of the matrix, examining one item of evidence at a time to see how consistent that item of evidence is with each of the hypotheses. In Step 5, you work down the columns of the matrix, examining one hypothesis at a time, to see how consistent that hypothesis is with all the evidence.

To fill in the matrix, take the first item of evidence and ask whether it is consistent with, inconsistent with, or irrelevant to each hypothesis. Then make a notation accordingly in the appropriate cell under each hypothesis in the matrix. The form of these notations in the matrix is a matter of personal preference. It may be pluses, minuses, and question marks. It may be C, I, and N/A standing for consistent, inconsistent, or not applicable. Or it may be some textual notation. In any event, it will be a simplification, a shorthand representation of the complex reasoning that went on as you thought about how the evidence relates to each hypothesis.

After doing this for the first item of evidence, go on to the next item and repeat the process until all cells in the matrix are filled.

The matrix format helps you weigh the diagnosticity of each item of evidence, which is a key difference between analysis of competing hypotheses and traditional analysis.

Evidence is diagnostic when it influences your judgment on the relative likelihood of the various hypotheses identified in Step 1. If an item of evidence seems consistent with all the hypotheses, it may have no diagnostic value. A common experience is to discover that most of the evidence supporting what you believe is the most likely hypothesis really is not very helpful, because that same evidence is also consistent with other hypotheses. When you do identify items that are highly diagnostic, these should drive your judgment.
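
A minimal sketch may help fix the idea. The Python fragment below builds a small matrix, with hypotheses across the top and evidence down the side, and flags items that lack diagnostic value because they are rated the same under every hypothesis. The hypotheses, evidence items, and ratings are invented for illustration.

    # Minimal ACH matrix (Step 3): hypotheses across the top, evidence
    # down the side. "C" = consistent, "I" = inconsistent. All invented.
    hypotheses = ["H1: test planned", "H2: bluff", "H3: no activity"]
    matrix = {
        "E1: activity at test site": ["C", "C", "I"],
        "E2: public denial issued":  ["C", "C", "C"],   # fits every hypothesis
        "E3: scientists recalled":   ["C", "I", "I"],
    }

    # Working across the rows: an item rated the same under every
    # hypothesis cannot help distinguish among them.
    for evidence, ratings in matrix.items():
        verdict = "diagnostic" if len(set(ratings)) > 1 else "no diagnostic value"
        print(f"{evidence}: {verdict}")

In this toy example, E2 is consistent with every hypothesis and so, as the text notes, contributes nothing to judging their relative likelihood.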

Step 4

Refine the matrix. Reconsider the hypotheses and delete evidence and arguments that have no diagnostic value.

The exact wording of the hypotheses is obviously critical to the conclusions one can draw from the analysis. By this point, you will have seen how the evidence breaks out under each hypothesis, and it will often be appropriate to reconsider and reword the hypotheses. Are there hypotheses that need to be added, or finer distinctions that need to be made in order to consider all the significant alternatives? If there is little or no evidence that helps distinguish between two hypotheses, should they be combined into one?

Also reconsider the evidence. Is your thinking about which hypotheses are most likely and least likely influenced by factors that are not included in the listing of evidence? If so, put them in. Delete from the matrix items of evidence or assumptions

that now seem unimportant or have no diagnostic value. Save these items in a separate list as a record of information that was considered.

Step 5

Draw tentative conclusions about the relative likelihood of each hypothesis. Proceed by trying to disprove hypotheses rather than prove them.

In Step 3, you worked across the matrix, focusing on a single item of evidence or argument and examining how it relates to each hypothesis. Now, work down the matrix, looking at each hypothesis as a whole. The matrix format gives an overview of all the evidence for and against all the hypotheses, so that you can examine all the hypotheses together and have them compete against each other for your favor.

In evaluating the relative likelihood of alternative hypotheses, start by looking for evidence or logical deductions that enable you to reject hypotheses, or at least to determine that they are unlikely. A fundamental precept of the scientific method is to proceed by rejecting or eliminating hypotheses, while tentatively accepting only those hypotheses that cannot be refuted. The scientific method obviously cannot be applied in toto to intuitive judgment, but the principle of seeking to disprove hypotheses, rather than confirm them, is useful.

No matter how much information is consistent with a given hypothesis, one cannot prove that hypothesis is true, because the same information may also be consistent with one or more other hypotheses. On the other hand, a single item of evidence that is inconsistent with a hypothesis may be sufficient grounds for rejecting that hypothesis.

People have a natural tendency to concentrate on confirming hypotheses they already believe to be true, and they commonly give more weight to information that supports a hypothesis than to information that weakens it. This is wrong; we should do just the opposite. Step 5 again requires doing the opposite of what comes naturally.

In examining the matrix, look at the minuses, or whatever other notation you used to indicate evidence that may be inconsistent with a hypothesis. The hypothesis with the fewest minuses is probably the most likely one. The hypothesis with the most minuses is probably the least likely one. The fact that a hypothesis is inconsistent with the evidence is certainly a sound basis for rejecting it. The pluses, indicating evidence that is consistent with a hypothesis, are far less significant. It does not follow that the hypothesis with the most pluses is the most likely one, because a long list of evidence that is consistent with almost any reasonable hypothesis can easily be made. What is difficult to find, and is most significant when found, is hard evidence that is clearly inconsistent with a reasonable hypothesis.
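
Continuing the hypothetical matrix from the Step 3 sketch (with the non-diagnostic item E2 deleted, as Step 4 recommends), the fragment below works down the columns and counts the inconsistencies against each hypothesis.

    # Step 5 on the same invented matrix, after Step 4 removed the
    # non-diagnostic item E2. Count inconsistencies down each column.
    hypotheses = ["H1: test planned", "H2: bluff", "H3: no activity"]
    matrix = {
        "E1: activity at test site": ["C", "C", "I"],
        "E3: scientists recalled":   ["C", "I", "I"],
    }

    for col, hypothesis in enumerate(hypotheses):
        minuses = sum(1 for ratings in matrix.values() if ratings[col] == "I")
        print(f"{hypothesis}: {minuses} item(s) of inconsistent evidence")

The hypothesis with the fewest inconsistencies, H1 in this toy example, is tentatively the most likely; as the text stresses, the matrix informs the conclusion but must not dictate it.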

The matrix should not dictate the conclusion to you. Rather, it should accurately reflect your judgment of what is important and how these important factors relate to the probability of each hypothesis. You, not the matrix, must make the decision. The matrix serves only as an aid to thinking and analysis, to ensure consideration of all the possible interrelationships between evidence and hypotheses and identification of those few items that really swing your judgment on the issue.

If following this procedure has caused you to consider things you might otherwise have overlooked, or has caused you to revise your earlier estimate of the relative probabilities of the hypotheses, then the procedure has served a useful purpose. When you are done, the matrix serves as a shorthand record of your thinking and as an audit trail showing how you arrived at your conclusion.

This procedure forces you to spend more analytical time than you otherwise would on what you had thought were the less likely hypotheses. This is desirable. The seemingly less likely hypotheses usually involve plowing new ground and, therefore, require more work. What you started out thinking was the most likely hypothesis tends to be based on a continuation of your own past thinking. A principal advantage of the analysis of competing hypotheses is that it forces you to give a fairer shake to all the alternatives.

Step 6

Analyze how sensitive your conclusion is to a few critical items of evidence. Consider the consequences for your analysis if that evidence were wrong, misleading, or subject to a different interpretation.

If there is any concern at all about denial and deception, this is an appropriate place to consider that possibility. Look at the sources of your key evidence. Are any of the sources known to the authorities in the foreign country? Could the information have been manipulated? Put yourself in the shoes of a foreign deception planner to evaluate motive, opportunity, means, costs, and benefits of deception as they might appear to the foreign country.

When analysis turns out to be wrong, it is often because of key assumptions that went unchallenged and proved invalid. It is a truism that analysts should identify and question assumptions, but this is much easier said than done. The problem is to determine which assumptions merit questioning. One advantage of the ACH procedure is that it tells you what needs to be rechecked.

In Step 6 you may decide that additional research is needed to check key judgments. For example, it may be appropriate to go back to check original source materials rather than relying on someone else’s interpretation. In writing your report, it is desirable to identify critical assumptions that went into your interpretation and to note that your conclusion is dependent upon the validity of these assumptions.

Step 7

Report conclusions. Discuss the relative likelihood of all the hypotheses, not just the most likely one.

If your report is to be used as the basis for decisionmaking, it will be helpful for the decisionmaker to know the relative likelihood of all the alternative possibilities. Analytical judgments are never certain. There is always a good possibility of their being wrong. Decisionmakers need to make decisions on the basis of a full set of alternative possibilities, not just the single most likely alternative. Contingency or fallback plans may be needed in case one of the less likely alternatives turns out to be true.

When one recognizes the importance of proceeding by eliminating rather than confirming hypotheses, it becomes apparent that any written argument for a certain judgment is incomplete unless it also discusses alternative judgments that were considered and why they were rejected. In the past, at least, this was seldom done.

Step 8

Identify milestones for future observation that may indicate events are taking a different course than expected.

Analytical conclusions should always be regarded as tentative. The situation may change, or it may remain unchanged while you receive new information that alters your appraisal. It is always helpful to specify in advance things one should look for or be alert to that, if observed, would suggest a significant change in the probabilities. This is useful for intelligence consumers who are following the situation on a continuing basis. Specifying in advance what would cause you to change your mind will also make it more difficult for you to rationalize such developments, if they occur, as not really requiring any modification of your judgment.
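
One way to make such milestones concrete is to record them as a simple watch-list checked against incoming reporting. The sketch below is purely illustrative; the indicator phrases and hypothesis labels are invented, not drawn from any actual case.

    # Hypothetical milestone watch-list (Step 8): pre-specified observations
    # that, if seen, should trigger reconsideration of the judgment.
    milestones = {
        "shift toward H1 (test planned)": ["site activity resumes",
                                           "export controls tightened"],
        "shift toward H2 (bluff)": ["negotiators recalled",
                                    "state media softens tone"],
    }
    observed = {"site activity resumes"}   # stand-in for incoming reporting

    for shift, indicators in milestones.items():
        hits = [i for i in indicators if i in observed]
        if hits:
            print(f"Re-examine judgment: {shift} (observed: {', '.join(hits)})")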

Summary and Conclusion

Three key elements distinguish analysis of competing hypotheses from conventional intuitive analysis.

  • Analysis starts with a full set of alternative possibilities, rather than with a most likely alternative for which the analyst seeks confirmation. This ensures that alternative hypotheses receive equal treatment and a fair shake.
  • Analysis identifies and emphasizes the few items of evidence or assumptions that have the greatest diagnostic value in judging the relative likelihood of the alternative hypotheses. In conventional intuitive analysis, the fact that key evidence may also be consistent with alternative hypotheses is rarely considered explicitly and often ignored.
  • Analysis of competing hypotheses involves seeking evidence to refute hypotheses. The most probable hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it. Conventional analysis generally entails looking for evidence to confirm a favored hypothesis.

A principal lesson is this. Whenever an intelligence analyst is tempted to write the phrase “there is no evidence that …,” the analyst should ask this question: If this hypothesis is true, can I realistically expect to see evidence of it? In other words, if India were planning nuclear tests while deliberately concealing its intentions, could the analyst realistically expect to see evidence of test planning? The ACH procedure leads the analyst to identify and face these kinds of questions.

There is no guarantee that ACH or any other procedure will produce a correct answer. The result, after all, still depends on fallible intuitive judgment applied to incomplete and ambiguous information. Analysis of competing hypotheses does, however, guarantee an appropriate process of analysis. This procedure leads you through a rational, systematic process that avoids some common analytical pitfalls. It increases the odds of getting the right answer, and it leaves an audit trail showing the evidence used in your analysis and how this evidence was interpreted. If others disagree with your judgment, the matrix can be used to highlight the precise area of disagreement. Subsequent discussion can then focus productively on the ultimate source of the differences.

Following this procedure may well leave you less, rather than more, certain: by laying out all the alternatives and the evidence bearing on them, it exposes how much remains unresolved. The ACH procedure has the offsetting advantage of focusing attention on the few items of critical evidence that cause the uncertainty or which, if they were available, would alleviate it. This can guide future collection, research, and analysis to resolve the uncertainty and produce a more accurate judgment.

PART THREE–COGNITIVE BIASES

Chapter 9
What Are Cognitive Biases?

This mini-chapter discusses the nature of cognitive biases in general. The four chapters that follow it describe specific cognitive biases in the evaluation of evidence, perception of cause and effect, estimation of probabilities, and evaluation of intelligence reporting.

Cognitive biases are mental errors caused by our simplified information processing strategies. It is important to distinguish cognitive biases from other forms of bias, such as cultural bias, organizational bias, or bias that results from one’s own self-interest. In other words, a cognitive bias does not result from any emotional or intellectual predisposition toward a certain judgment, but rather from subconscious mental procedures for processing information. A cognitive bias is a mental error that is consistent and predictable.

Cognitive biases are similar to optical illusions in that the error remains compelling even when one is fully aware of its nature. Awareness of the bias, by itself, does not produce a more accurate perception. Cognitive biases, therefore, are exceedingly difficult to overcome.

Chapter 10
Biases in Evaluation of Evidence

Evaluation of evidence is a crucial step in analysis, but what evidence people rely on and how they interpret it are influenced by a variety of extraneous factors. Information presented in vivid and concrete detail often has unwarranted impact, and people tend to disregard abstract or statistical information that may have greater evidential value. We seldom take the absence of evidence into account. The human mind is also oversensitive to the consistency of the evidence, and insufficiently sensitive to the reliability of the evidence. Finally, impressions often remain even after the evidence on which they are based has been totally discredited.

The intelligence analyst works in a somewhat unique informational environment. Evidence comes from an unusually diverse set of sources: newspapers and wire services, observations by American Embassy officers, reports from controlled agents and casual informants, information exchanges with foreign governments, photo reconnaissance, and communications intelligence. Each source has its own unique strengths, weaknesses, potential or actual biases, and vulnerability to manipulation and deception. The most salient characteristic of the information environment is its diversity–multiple sources, each with varying degrees of reliability, and each commonly reporting information which by itself is incomplete and sometimes inconsistent or even incompatible with reporting from other sources. Conflicting information of uncertain reliability is endemic to intelligence analysis, as is the need to make rapid judgments on current events even before all the evidence is in.

The Vividness Criterion

The impact of information on the human mind is only imperfectly related to its true value as evidence. Specifically, information that is vivid, concrete, and personal has a greater impact on our thinking than pallid, abstract information that may actually have substantially greater value as evidence. For example:

  • Information that people perceive directly, that they hear with their own ears or see with their own eyes, is likely to have greater impact than information received secondhand that may have greater evidential value.
  • Case histories and anecdotes will have greater impact than more informative but abstract aggregate or statistical data.

Events that people experience personally are more memorable than those they only read about. Concrete words are easier to remember than abstract words, and words of all types are easier to recall than numbers. In short, information having the qualities cited in the preceding paragraph is more likely to attract and hold our attention. It is more likely to be stored and remembered than abstract reasoning or statistical summaries, and therefore can be expected to have a greater immediate effect as well as a continuing impact on our thinking in the future.

Personal observations by intelligence analysts and agents can be as deceptive as secondhand accounts. Most individuals visiting foreign countries become familiar with only a small sample of people representing a narrow segment of the total society. Incomplete and distorted perceptions are a common result.

A related pitfall is the single vivid case that outweighs a larger body of statistical evidence: a “man-who” example (“I know a man who smoked three packs a day and lived to be 99”) seldom merits the evidential weight intended by the person citing the example, or the weight often accorded to it by the recipient.

The most serious implication of vividness as a criterion that determines the impact of evidence is that certain kinds of very valuable evidence will have little influence simply because they are abstract. Statistical data, in particular, lack the rich and concrete detail to evoke vivid images, and they are often overlooked, ignored, or minimized.

For example, the Surgeon General’s report linking cigarette smoking to cancer should have, logically, caused a decline in per-capita cigarette consumption. No such decline occurred for more than 20 years. The reaction of physicians was particularly informative. All doctors were aware of the statistical evidence and were more exposed than the general population to the health problems caused by smoking. How they reacted to this evidence depended upon their medical specialty. Twenty years after the Surgeon General’s report, radiologists who examine lung x-rays every day had the lowest rate of smoking. Physicians who diagnosed and treated lung cancer victims were also quite unlikely to smoke. Many other types of physicians continued to smoke. The probability that a physician continued to smoke was directly related to the distance of the physician’s specialty from the lungs. In other words, even physicians, who were well qualified to understand and appreciate the statistical data, were more influenced by their vivid personal experiences than by valid statistical data.

Absence of Evidence

A principal characteristic of intelligence analysis is that key information is often lacking. Analytical problems are selected on the basis of their importance and the perceived needs of the consumers, without much regard for availability of information. Analysts have to do the best they can with what they have, somehow taking into account the fact that much relevant information is known to be missing.

Ideally, intelligence analysts should be able to recognize what relevant evidence is lacking and factor this into their calculations. They should also be able to estimate the potential impact of the missing data and to adjust confidence in their judgment accordingly. Unfortunately, this ideal does not appear to be the norm. Experiments suggest that “out of sight, out of mind” is a better description of the impact of gaps in the evidence.

This problem has been demonstrated using fault trees, which are schematic drawings showing all the things that might go wrong with any endeavor. Fault trees are often used to study the fallibility of complex systems such as a nuclear reactor or space capsule. In one such experiment, experienced mechanics were shown a fault tree diagramming the reasons why a car might fail to start. When major branches of the tree were deleted, the mechanics largely failed to notice what was missing and did not increase their probability estimates for the residual “all other problems” category nearly enough to compensate.

Missing data is normal in intelligence problems, but it is probably more difficult to recognize that important information is absent and to incorporate this fact into judgments on intelligence questions than in the more concrete “car won’t start” experiment.

Oversensitivity to Consistency

The internal consistency in a pattern of evidence helps determine our confidence in judgments based on that evidence. In one sense, consistency is clearly an appropriate guideline for evaluating evidence. People formulate alternative explanations or estimates and select the one that encompasses the greatest amount of evidence within a logically consistent scenario. Under some circumstances, however, consistency can be deceptive. Information may be consistent only because it is highly correlated or redundant, in which case many related reports may be no more informative than a single report. Or it may be consistent only because information is drawn from a very small sample or a biased sample.

If the available evidence is consistent, analysts will often overlook the fact that it represents a very small and hence unreliable sample taken from a large and heterogeneous group. This is not simply a matter of necessity–of having to work with the information on hand, however imperfect it may be. Rather, there is an illusion of validity caused by the consistency of the information.

The tendency to place too much reliance on small samples has been dubbed the “law of small numbers.” This is a parody of the law of large numbers, the basic statistical principle that very large samples will be highly representative of the population from which they are drawn. This is the principle that underlies opinion polling, but most people are not good intuitive statisticians. People do not have much intuitive feel for how large a sample has to be before they can draw valid conclusions from it. The so-called law of small numbers means that, intuitively, we make the mistake of treating small samples as though they were large ones.
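To make the point concrete, here is a minimal sketch, not from the text, that simulates drawing repeated samples from a population in which 60 percent of reports are accurate; the population rate, sample sizes, and number of trials are all assumed purely for illustration.

    # Illustrative simulation of the "law of small numbers": small samples
    # scatter widely around the true population rate; large samples do not.
    import random

    random.seed(1)
    TRUE_RATE = 0.60   # assumed share of accurate reports in the population

    def sample_rate(n):
        """Proportion of accurate reports observed in one sample of size n."""
        return sum(random.random() < TRUE_RATE for _ in range(n)) / n

    for n in (5, 50, 500):
        rates = [sample_rate(n) for _ in range(1000)]
        print(f"n={n:3d}: observed rates span {min(rates):.2f} to {max(rates):.2f}")

A sample of five reports can easily show anywhere from zero to 100 percent accuracy; treating such a sample as though it revealed the underlying rate is precisely the mistake the law of small numbers describes.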

Coping with Evidence of Uncertain Accuracy

There are many reasons why information often is less than perfectly accurate: misunderstanding, misperception, or having only part of the story; bias on the part of the ultimate source; distortion in the reporting chain from subsource through source, case officer, reports officer, to analyst; or misunderstanding and misperception by the analyst. Further, much of the evidence analysts bring to bear in conducting analysis is retrieved from memory, but analysts often cannot remember even the source of information they have in memory, let alone the degree of certainty they attributed to the accuracy of that information when it was first received.

The human mind has difficulty coping with complicated probabilistic relationships, so people tend to employ simple rules of thumb that reduce the burden of processing such information. In processing information of uncertain accuracy or reliability, analysts tend to make a simple yes or no decision. If they reject the evidence, they tend to reject it fully, so it plays no further role in their mental calculations. If they accept the evidence, they tend to accept it wholly, ignoring the probabilistic nature of the accuracy or reliability judgment. This is called a “best guess” strategy.

A more sophisticated strategy is to make a judgment based on an assumption that the available evidence is perfectly accurate and reliable, then reduce the confidence in this judgment by a factor determined by the assessed validity of the information. For example, available evidence may indicate that an event probably (75 percent) will occur, but the analyst cannot be certain that the evidence on which this judgment is based is wholly accurate or reliable.
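As a worked illustration of this discounting strategy, a minimal sketch follows; the 75-percent judgment comes from the paragraph above, while the 80-percent validity figure is an assumed number for demonstration. Note that the sketch deliberately mirrors the heuristic described in the text rather than a full Bayesian treatment, which would also have to consider how likely the event is if the evidence turns out to be wrong.

    # Discounting a judgment by the assessed validity of the evidence.
    p_event_given_accurate = 0.75   # judgment if the evidence is wholly accurate
    p_evidence_accurate = 0.80      # assessed validity of the report (assumed)

    adjusted = p_event_given_accurate * p_evidence_accurate
    print(f"Adjusted probability of the event: {adjusted:.0%}")   # -> 60%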

The same processes may also affect our reaction to information that is plausible but known from the beginning to be of questionable authenticity. Ostensibly private statements by foreign officials are often reported through intelligence channels. In many instances it is not clear whether such a private statement by a foreign ambassador, cabinet member, or other official is an actual statement of private views, an indiscretion, part of a deliberate attempt to deceive the US Government, or part of an approved plan to convey a truthful message that the foreign government believes is best transmitted through informal channels.

Knowing that the information comes from an uncontrolled source who may be trying to manipulate us does not necessarily reduce the impact of the information.

Persistence of Impressions Based on Discredited Evidence

Impressions tend to persist even after the evidence that created those impressions has been fully discredited. Psychologists have become interested in this phenomenon because many of their experiments require that the test subjects be deceived. For example, test subjects may be made to believe they were successful or unsuccessful in performing some task, or that they possess certain abilities or personality traits, when this is not in fact the case. Professional ethics require that test subjects be disabused of these false impressions at the end of the experiment, but this has proved surprisingly difficult to achieve.

Test subjects’ erroneous impressions concerning their logical problem-solving abilities persevered even after they were informed that manipulation of good or poor teaching performance had virtually guaranteed their success or failure.

An interesting but speculative explanation is based on the strong tendency to seek causal explanations, as discussed in the next chapter. When evidence is first received, people postulate a set of causal connections that explains this evidence. The stronger the perceived causal linkage, the stronger the impression created by the evidence.

Colloquially, one might say that once information rings a bell, the bell cannot be unrung.

The ambiguity of most real-world situations contributes to the operation of this perseverance phenomenon. Rarely in the real world is evidence so thoroughly discredited as is possible in the experimental laboratory. Imagine, for example, that you are told that a clandestine source who has been providing information for some time is actually under hostile control. Even then, the impressions formed on the basis of that source’s past reporting are unlikely to be erased, because the causal explanations built around the reporting continue to seem plausible on their own.

Chapter 11
Biases in Perception of Cause and Effect

Judgments about cause and effect are necessary to explain the past, understand the present, and estimate the future. These judgments are often biased by factors over which people exercise little conscious control, and this can influence many types of judgments made by intelligence analysts. Because of a need to impose order on our environment, we seek and often believe we find causes for what are actually accidental or random phenomena. People overestimate the extent to which other countries are pursuing a coherent, coordinated, rational plan, and thus also overestimate their own ability to predict future events in those nations. People also tend to assume that causes are similar to their effects, in the sense that important or large effects must have large causes.

When inferring the causes of behavior, too much weight is accorded to personal qualities and dispositions of the actor and not enough to situational determinants of the actor’s behavior. People also overestimate their own importance as both a cause and a target of the behavior of others. Finally, people often perceive relationships that do not in fact exist, because they do not have an intuitive understanding of the kinds and amount of information needed to prove a relationship.

There are several modes of analysis by which one might infer cause and effect. In more formal analysis, inferences are made through procedures that collectively comprise the scientific method. The scientist advances a hypothesis, then tests this hypothesis by the collection and statistical analysis of data on many instances of the phenomenon in question. Even then, causality cannot be proved beyond all possible doubt. The scientist seeks to disprove a hypothesis, not to confirm it. A hypothesis is accepted only when it cannot be rejected.

Collection of data on many comparable cases to test hypotheses about cause and effect is not feasible for most questions of interest to the Intelligence Community, especially questions of broad political or strategic import relating to another country’s intentions. To be sure, it is feasible more often than it is done, and increased use of scientific procedures in political, economic, and strategic research is much to be encouraged. But the fact remains that the dominant approach to intelligence analysis is necessarily quite different. It is the approach of the historian rather than the scientist, and this approach presents obstacles to accurate inferences about causality.

The key ideas here are coherence and narrative. These are the principles that guide the organization of observations into meaningful structures and patterns. The historian commonly observes only a single case, not a pattern of covariation (when two things are related so that change in one is associated with change in the other) in many comparable cases. Moreover, the historian observes simultaneous changes in so many variables that the principle of covariation generally is not helpful in sorting out the complex relationships among them. The narrative story, on the other hand, offers a means of organizing the rich complexity of the historian’s observations. The historian uses imagination to construct a coherent story out of fragments of data.

The intelligence analyst employing the historical mode of analysis is essentially a storyteller. He or she constructs a plot from the previous events, and this plot then dictates the possible endings of the incomplete story. The plot is formed of the “dominant concepts or leading ideas” that the analyst uses to postulate patterns of relationships among the available data. The analyst is not, of course, preparing a work of fiction. There are constraints on the analyst’s imagination, but imagination is nonetheless involved because there is an almost unlimited variety of ways in which the available data might be organized to tell a meaningful story. The constraints are the available evidence and the principle of coherence. The story must form a logical and coherent whole and be internally consistent as well as consistent with the available evidence.

Recognizing that the historical or narrative mode of analysis involves telling a coherent story helps explain the many disagreements among analysts, inasmuch as coherence is a subjective concept. It assumes some prior beliefs or mental model about what goes with what. More relevant to this discussion, the use of coherence rather than scientific observation as the criterion for judging truth leads to biases that presumably influence all analysts to some degree. Judgments of coherence may be influenced by many extraneous factors, and if analysts tend to favor certain types of explanations as more coherent than others, they will be biased in favor of those explanations.

Bias in Favor of Causal Explanations

One bias attributable to the search for coherence is a tendency to favor causal explanations. Coherence implies order, so people naturally arrange observations into regular patterns and relationships. If no pattern is apparent, our first thought is that we lack understanding, not that we are dealing with random phenomena that have no purpose or reason.

This suggests that in military and foreign affairs, where the patterns are at best difficult to fathom, there may be many events for which there are no valid causal explanations. This certainly affects the predictability of events and suggests limitations on what might logically be expected of intelligence analysts.

Bias Favoring Perception of Centralized Direction

Very similar to the bias toward causal explanations is a tendency to see the actions of other governments (or groups of any type) as the intentional result of centralized direction and planning. “…most people are slow to perceive accidents, unintended consequences, coincidences, and small causes leading to large effects. Instead, coordinated actions, plans and conspiracies are seen.” Analysts overestimate the extent to which other countries are pursuing coherent, rational, goal-maximizing policies, because this makes for more coherent, logical, rational explanations. This bias also leads analysts and policymakers alike to overestimate the predictability of future events in other countries.

But a focus on such causes implies a disorderly world in which outcomes are determined more by chance than purpose. It is especially difficult to incorporate these random and usually unpredictable elements into a coherent narrative, because evidence is seldom available to document them on a timely basis. It is only in historical perspective, after memoirs are written and government documents released, that the full story becomes available.

This bias has important consequences. Assuming that a foreign government’s actions result from a logical and centrally directed plan leads an analyst to:

  • Have expectations regarding that government’s actions that may not be fulfilled if the behavior is actually the product of shifting or inconsistent values, bureaucratic bargaining, or sheer confusion and blunder.
  • Draw far-reaching but possibly unwarranted inferences from isolated statements or actions by government officials who may be acting on their own rather than on central direction.
  • Overestimate the United States’ ability to influence the other government’s actions.
  • Perceive inconsistent policies as the result of duplicity and Machiavellian maneuvers, rather than as the product of weak leadership, vacillation, or bargaining among diverse bureaucratic or political interests.

Similarity of Cause and Effect

When systematic analysis of covariation is not feasible and several alternative causal explanations seem possible, one rule of thumb people use to make judgments of cause and effect is to consider the similarity between attributes of the cause and attributes of the effect. Properties of the cause are “…inferred on the basis of being correspondent with or similar to properties of the effect.” Heavy things make heavy noises; dainty things move daintily; large animals leave large tracks. When dealing with physical properties, such inferences are generally correct.

The tendency to reason according to similarity of cause and effect is frequently found in conjunction with the previously noted bias toward inferring centralized direction. Together, they explain the persuasiveness of conspiracy theories. Such theories are invoked to explain large effects for which there do not otherwise appear to be correspondingly large causes.

Intelligence analysts are more exposed than most people to hard evidence of real plots, coups, and conspiracies in the international arena. Despite this–or perhaps because of it–most intelligence analysts are not especially prone to what are generally regarded as conspiracy theories. Although analysts may not exhibit this bias in such extreme form, the bias presumably does influence analytical judgments in myriad little ways. In examining causal relationships, analysts generally construct causal explanations that are somehow commensurate with the magnitude of their effects and that attribute events to human purposes or predictable forces rather than to human weakness, confusion, or unintended consequences.

Internal vs. External Causes of Behavior

Much research into how people assess the causes of behavior employs a basic dichotomy between internal determinants and external determinants of human actions. Internal causes of behavior include a person’s attitudes, beliefs, and personality. External causes include incentives and constraints, role requirements, social pressures, or other forces over which the individual has little control. The research examines the circumstances under which people attribute behavior either to stable dispositions of the actor or to characteristics of the situation to which the actor responds.

Differences in judgments about what causes another person’s or government’s behavior affect how people respond to that behavior. How people respond to friendly or unfriendly actions by others may be quite different if they attribute the behavior to the nature of the person or government than if they see the behavior as resulting from situational constraints over which the person or government has little control.

When observing another’s behavior, people are too inclined to infer that the behavior was caused by broad personal qualities or dispositions of the actor; not enough weight is assigned to external circumstances that may have influenced the other person’s choice of behavior. This pervasive tendency has been demonstrated in many experiments under quite diverse circumstances and has often been observed in diplomatic and military interactions.

Susceptibility to this biased attribution of causality depends upon whether people are examining their own behavior or observing that of others. It is the behavior of others that people tend to attribute to the nature of the actor, whereas they see their own behavior as conditioned almost entirely by the situation in which they find themselves. This difference is explained largely by differences in information available to actors and observers. People know a lot more about themselves.

The actor has a detailed awareness of the history of his or her own actions under similar circumstances. In assessing the causes of our own behavior, we are likely to consider our previous behavior and focus on how it has been influenced by different situations. Thus situational variables become the basis for explaining our own behavior. This contrasts with the observer, who typically lacks this detailed knowledge of the other person’s past behavior. The observer is inclined to focus on how the other person’s behavior compares with the behavior of others under similar circumstances.

This difference in the type and amount of information available to actors and observers applies to governments as well as people. An actor’s personal involvement with the actions being observed enhances the likelihood of bias. “Where the observer is also an actor, he is likely to exaggerate the uniqueness and emphasize the dispositional origins of the responses of others to his own actions.”

The persistent tendency to attribute cause and effect in this manner is not simply the consequence of self-interest or propaganda by the opposing sides. Rather, it is the readily understandable and predictable result of how people normally attribute causality under many different circumstances.

As a general rule, biased attribution of causality helps sow the seeds of mistrust and misunderstanding between people and between governments. We tend to have quite different perceptions of the causes of each other’s behavior.

Overestimating Our Own Importance

Individuals and governments tend to overestimate the extent to which they successfully influence the behavior of others. This is an exception to the previously noted generalization that observers attribute the behavior of others to the nature of the actor. It occurs largely because a person is so familiar with his or her own efforts to influence another, but much less well informed about other factors that may have influenced the other’s decision.

In estimating the influence of US policy on the actions of another government, analysts more often than not will be knowledgeable of US actions and what they are intended to achieve, but in many instances they will be less well informed concerning the internal processes, political pressures, policy conflicts, and other influences on the decision of the target government.

Illusory Correlation

At the start of this chapter, covariation was cited as one basis for inferring causality. It was noted that covariation may either be observed intuitively or measured statistically. This section examines the extent to which the intuitive perception of covariation deviates from the statistical measurement of covariation.

Statistical measurement of covariation is known as correlation. Two events are correlated when the occurrence of one makes the occurrence of the other more (or less) likely; two variables are correlated when a change in one implies a similar degree of change in the other. Correlation alone does not establish causation. For example, two events might co-occur because they have a common cause, rather than because one causes the other. But when two events or changes do co-occur, and the time sequence is such that one always follows the other, people often infer that the first caused the second. Thus, inaccurate perception of correlation leads to inaccurate perception of cause and effect.

Judgments about correlation are fundamental to all intelligence analysis. For example, assumptions that worsening economic conditions lead to increased political support for an opposition party, that domestic problems may lead to foreign adventurism, that military government leads to unraveling of democratic institutions, or that negotiations are more successful when conducted from a position of strength are all based on intuitive judgments of correlation between these variables. In many cases these assumptions are correct, but they are seldom tested by systematic observation and statistical analysis.

Much intelligence analysis is based on common-sense assumptions about how people and governments normally behave. The problem is that people possess a great facility for invoking contradictory “laws” of behavior to explain, predict, or justify different actions occurring under similar circumstances. “Haste makes waste” and “He who hesitates is lost” are examples of inconsistent explanations and admonitions. They make great sense when used alone and leave us looking foolish when presented together. “Appeasement invites aggression” and “Agreement is based upon compromise” are similarly contradictory expressions.

When confronted with such apparent contradictions, the natural defense is that “it all depends on….” Recognizing the need for such qualifying statements is one of the differences between subconscious information processing and systematic, self-conscious analysis. Knowledgeable analysis might be identified by the ability to fill in the qualification; careful analysis, by the frequency with which one remembers to do so.

In one experiment, test subjects were given data on a series of cases and asked to judge whether a relationship existed between a symptom and the presence of a disease–data that could be sorted into the four cells of a 2 x 2 table (symptom present or absent, disease present or absent). Of the 86 test subjects involved in several runnings of this experiment, not a single one showed any intuitive understanding of the concept of correlation. That is, no one understood that to make a proper judgment about the existence of a relationship, one must have information on all four cells of the table.

Let us now consider a similar question of correlation on a topic of interest to intelligence analysts. What are the characteristics of strategic deception and how can analysts detect it? In studying deception, one of the important questions is: what are the correlates of deception? Historically, when analysts study instances of deception, what else do they see that goes along with it, that is somehow related to deception, and that might be interpreted as an indicator of deception? Are there certain practices relating to deception, or circumstances under which deception is most likely to occur, that permit one to say that, because we have seen x or y or z, a deception plan is most likely under way? This would be comparable to a doctor observing certain symptoms and concluding that a given disease may be present. This is essentially a problem of correlation. If one could identify several correlates of deception, this would significantly aid efforts to detect it.

The lesson to be learned is not that analysts should do a statistical analysis of every relationship. They usually will not have the data, time, or interest for that. But analysts should have a general understanding of what it takes to know whether a relationship exists. This understanding is definitely not a part of people’s intuitive knowledge. It does not come naturally. It has to be learned. When dealing with such issues, analysts have to force themselves to think about all four cells of the table and the data that would be required to fill each cell.
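The following sketch shows what “thinking about all four cells” amounts to in practice. The counts are invented for illustration, so nothing here should be read as an empirical claim about deception; the point is only the structure of the computation.

    # A hypothetical 2 x 2 table: rows are whether practice x was observed,
    # columns are whether deception turned out to be under way.
    cases = {
        ("x_observed", "deception"): 8,
        ("x_observed", "no_deception"): 4,
        ("x_absent",   "deception"): 20,
        ("x_absent",   "no_deception"): 10,
    }

    def deception_rate(row):
        """P(deception | row), computed from both cells in that row."""
        d, nd = cases[(row, "deception")], cases[(row, "no_deception")]
        return d / (d + nd)

    print(f"P(deception | x observed) = {deception_rate('x_observed'):.2f}")  # 0.67
    print(f"P(deception | x absent)   = {deception_rate('x_absent'):.2f}")    # 0.67

Here the two rates are identical, so observing x tells the analyst nothing about deception, even though x accompanied deception in eight remembered cases. An analyst who attends only to the upper-left cell, the vivid co-occurrences, would wrongly conclude that x is an indicator.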

Even if analysts follow these admonitions, there are several factors that distort judgment when one does not follow rigorous scientific procedures in making and recording observations. These are factors that influence a person’s ability to recall examples that fit into the four cells. For example, people remember occurrences more readily than non-occurrences. “History is, by and large, a record of what people did, not what they failed to do.”

Many erroneous theories are perpetuated because they seem plausible and because people record their experience in a way that supports rather than refutes them.

Chapter 12
Biases in Estimating Probabilities

In making rough probability judgments, people commonly depend upon one of several simplified rules of thumb that greatly ease the burden of decision. Using the “availability” rule, people judge the probability of an event by the ease with which they can imagine relevant instances of similar events or the number of such events that they can easily remember. With the “anchoring” strategy, people pick some natural starting point for a first approximation and then adjust this figure based on the results of additional information or analysis. Typically, they do not adjust the initial judgment enough.

Expressions of probability, such as “possible” and “probable,” are a common source of ambiguity that makes it easier for a reader to interpret a report as consistent with the reader’s own preconceptions. The probability of a scenario is often miscalculated. Data on “prior probabilities” are commonly ignored unless they illuminate causal relationships.

Availability Rule

One simplified rule of thumb commonly used in making probability estimates is known as the availability rule. In this context, “availability” refers to imaginability or retrievability from memory. Psychologists have shown that two cues people use unconsciously in judging the probability of an event are the ease with which they can imagine relevant instances of the event and the number or frequency of such events that they can easily remember. People are using the availability rule of thumb whenever they estimate frequency or probability on the basis of how easily they can recall or imagine instances of whatever it is they are trying to estimate.

People are frequently led astray when the ease with which things come to mind is influenced by factors unrelated to their probability. The ability to recall instances of an event is influenced by how recently the event occurred, whether we were personally involved, whether there were vivid and memorable details associated with the event, and how important it seemed at the time. These and other factors that influence judgment are unrelated to the true probability of an event.

Intelligence analysts may be less influenced than others by the availability bias. Analysts are evaluating all available information, not making quick and easy inferences. On the other hand, policymakers and journalists who lack the time or access to evidence to go into details must necessarily take shortcuts. The obvious shortcut is to use the availability rule of thumb for making inferences about probability.

Many events of concern to intelligence analysts are perceived as so unique that past history does not seem relevant to the evaluation of their likelihood. In thinking of such events we often construct scenarios, i.e., stories that lead from the present situation to the target event. The plausibility of the scenarios that come to mind, or the difficulty of producing them, serve as clues to the likelihood of the event. If no reasonable scenario comes to mind, the event is deemed impossible or highly unlikely. If several scenarios come easily to mind, or if one scenario is particularly compelling, the event in question appears probable.

Many extraneous factors influence the imaginability of scenarios for future events, just as they influence the retrievability of events from memory. Curiously, one of these is the act of analysis itself. The act of constructing a detailed scenario for a possible future event makes that event more readily imaginable and, therefore, increases its perceived probability. This is the experience of CIA analysts who have used various tradecraft tools that require, or are especially suited to, the analysis of unlikely but nonetheless possible and important hypotheses.

In sum, the availability rule of thumb is often used to make judgments about likelihood or frequency. People would be hard put to do otherwise, inasmuch as it is such a timesaver in the many instances when more detailed analysis is not warranted or not feasible. Intelligence analysts, however, need to be aware when they are taking shortcuts. They must know the strengths and weaknesses of these procedures…

For intelligence analysts, recognition that they are employing the availability rule should raise a caution flag. Serious analysis of probability requires identification and assessment of the strength and interaction of the many variables that will determine the outcome of a situation.

Anchoring

Another strategy people seem to use intuitively and unconsciously to simplify the task of making judgments is called anchoring. Some natural starting point, perhaps from a previous analysis of the same subject or from some partial calculation, is used as a first approximation to the desired judgment. This starting point is then adjusted, based on the results of additional information or analysis. Typically, however, the starting point serves as an anchor or drag that reduces the amount of adjustment, so the final estimate remains closer to the starting point than it ought to be.

Whenever analysts move into a new analytical area and take over responsibility for updating a series of judgments or estimates made by their predecessors, the previous judgments may have such an anchoring effect. Even when analysts make their own initial judgment, and then attempt to revise this judgment on the basis of new information or further analysis, there is much evidence to suggest that they usually do not change the judgment enough.

Anchoring provides a partial explanation of experiments showing that analysts tend to be overly sure of themselves in setting confidence ranges. A military analyst who estimates future missile or tank production is often unable to give a specific figure as a point estimate. The analyst may instead set a range from high to low and estimate that there is, say, a 75-percent chance that the actual figure will fall within this range. If the range is anchored to an initial calculation, however, it tends to be set too narrowly, and the true figure falls outside the estimated range far more often than the stated confidence would imply.

Reasons for the anchoring phenomenon are not well understood. The initial estimate serves as a hook on which people hang their first impressions or the results of earlier calculations. In recalculating, they take this as a starting point rather than starting over from scratch, but why this should limit the range of subsequent reasoning is not clear.

There is some evidence that awareness of the anchoring problem is not an adequate antidote. This is a common finding in experiments dealing with cognitive biases. The biases persist even after test subjects are informed of them and instructed to try to avoid them or compensate for them.

One technique for avoiding the anchoring bias–to weigh anchor, so to speak–may be to ignore one’s own or others’ earlier judgments and rethink a problem from scratch. In other words, consciously avoid any prior judgment as a starting point. There is no experimental evidence to show that this is possible or that it will work, but it seems worth trying. Alternatively, it is sometimes possible to avoid human error by employing formal statistical procedures.

Expression of Uncertainty

Probabilities may be expressed in two ways. Statistical probabilities are based on empirical evidence concerning relative frequencies. Most intelligence judgments deal with one-of-a-kind situations for which it is impossible to assign a statistical probability. Another approach commonly used in intelligence analysis is to make a “subjective probability” or “personal probability” judgment. Such a judgment is an expression of the analyst’s personal belief that a certain explanation or estimate is correct. It is comparable to a judgment that a horse has a three-to-one chance of winning a race.

When intelligence conclusions are couched in ambiguous terms, a reader’s interpretation of the conclusions will be biased in favor of consistency with what the reader already believes.

The main point is that an intelligence report may have no impact on the reader if it is couched in such ambiguous language that the reader can easily interpret it as consistent with his or her own preconceptions. This ambiguity can be especially troubling when dealing with low-probability, high-impact dangers against which policymakers may wish to make contingency plans.

How can analysts express uncertainty without being unclear about how certain they are? Putting a numerical qualifier in parentheses after the phrase expressing degree of uncertainty is an appropriate means of avoiding misinterpretation. This may be an odds ratio (less than a one-in-four chance) or a percentage range (5 to 20 percent, or simply less than 20 percent). Odds ratios are often preferable, as most people have a better intuitive understanding of odds than of percentages.
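As a trivial sketch of the practice, the helper below attaches a numerical qualifier to a verbal expression; both the helper and the sample sentence are hypothetical illustrations, not drawn from the text.

    # Hypothetical drafting aid: append an odds qualifier, with its rough
    # percentage equivalent, to a probabilistic statement.
    def with_odds(statement, n, m):
        """E.g. with_odds(s, 1, 4) appends 'a 1-in-4 chance (25 percent)'."""
        return f"{statement} (about a {n}-in-{m} chance, or {100 * n / m:.0f} percent)"

    print(with_odds("The cease-fire is unlikely to hold", 1, 4))
    # -> The cease-fire is unlikely to hold (about a 1-in-4 chance, or 25 percent)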

Assessing Probability of a Scenario

Intelligence analysts sometimes present judgments in the form of a scenario–a series of events leading to an anticipated outcome. There is evidence that judgments concerning the probability of a scenario are influenced by the amount and nature of detail in the scenario in a way that is unrelated to the actual likelihood of the scenario.

A scenario consists of several events linked together in a narrative description. To calculate mathematically the probability of a scenario, the proper procedure is to multiply the probabilities of each individual event. Thus, for a scenario with three events, each of which will probably (70 percent certainty) occur, the probability of the scenario is .70 x .70 x .70 or slightly over 34 percent. Adding a fourth probable (70 percent) event to the scenario would reduce its probability to 24 percent.

Most people do not have a good intuitive grasp of probabilistic reasoning. One approach to simplifying such problems is to assume (or think as though) one or more probable events have already occurred. This eliminates some of the uncertainty from the judgment. Another is to base the judgment on a rough average of the probabilities of the individual events.

When the averaging strategy is employed, highly probable events in the scenario tend to offset less probable events. This violates the principle that a chain cannot be stronger than its weakest link. Mathematically, the least probable event in a scenario sets the upper limit on the probability of the scenario as a whole. If the averaging strategy is employed, additional details may be added to the scenario that are so plausible they increase the perceived probability of the scenario, while, mathematically, additional events must necessarily reduce its probability.
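The contrast between the two strategies is easy to see in a short sketch; the three 70-percent events come from the example above, and a fourth 20-percent event is assumed in order to expose the weak link.

    # Correct chain rule (multiplication) vs. the intuitive averaging strategy.
    from math import prod

    scenario = [0.70, 0.70, 0.70, 0.20]       # probabilities of each event

    multiplied = prod(scenario)               # proper probability of the chain
    averaged = sum(scenario) / len(scenario)  # intuitive "averaging" judgment

    print(f"multiplied: {multiplied:.2f}")    # 0.07, below the 0.20 weak link
    print(f"averaged:   {averaged:.2f}")      # 0.57, weak link masked by strong ones

The multiplication rule keeps the scenario’s probability at or below that of its least probable event; averaging lets three strong links hide the one weak link, inflating the judgment by roughly a factor of eight in this illustration.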

Base-Rate Fallacy

In assessing a situation, an analyst sometimes has two kinds of evidence available– specific evidence about the individual case at hand, and numerical data that summarize information about many similar cases. This type of numerical information is called a base rate or prior probability. The base-rate fallacy is that the numerical data are commonly ignored unless they illuminate a causal relationship.

Consider, for example, a case in which a pilot identifies an attacking fighter as Cambodian in an area where 85 percent of the jet fighters are Vietnamese and 15 percent are Cambodian. Most people do not incorporate the prior probability–the 85-15 split–into their reasoning because it does not seem relevant. It does not seem relevant because there is no causal relationship between the background information on the percentages of jet fighters in the area and the pilot’s observation.
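The sketch below shows how the prior probability should enter the judgment, via Bayes’ rule. The 85-15 split comes from the example above; the assumption that the pilot’s identifications are correct 80 percent of the time is an illustrative figure.

    # Bayes' rule applied to the aircraft-identification example.
    p_cambodian = 0.15          # prior: share of Cambodian jets in the area
    p_vietnamese = 0.85
    p_correct_id = 0.80         # assumed accuracy of the pilot's identification

    # P(identified as Cambodian) = correct IDs of Cambodian jets
    #                            + mistaken IDs of Vietnamese jets
    p_id_cambodian = p_cambodian * p_correct_id + p_vietnamese * (1 - p_correct_id)

    posterior = p_cambodian * p_correct_id / p_id_cambodian
    print(f"P(Cambodian | identified as Cambodian) = {posterior:.2f}")   # 0.41

Despite the seemingly reliable identification, the aircraft is more likely Vietnamese than Cambodian, because Vietnamese jets so heavily outnumber Cambodian ones; ignoring the base rate reverses the correct conclusion.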

The so-called planning fallacy, to which I personally plead guilty, is an example of a problem in which base rates are not given in numerical terms but must be abstracted from experience. In planning a research project, I may estimate being able to complete it in four weeks. This estimate is based on relevant case-specific evidence: desired length of report, availability of source materials, difficulty of the subject matter, allowance for both predictable and unforeseeable interruptions, and so on. I also possess a body of experience with similar estimates I have made in the past. Like many others, I almost never complete a research project within the initially estimated time frame! But I am seduced by the immediacy and persuasiveness of the case-specific evidence. All the causally relevant evidence about the project indicates I should be able to complete the work in the time allotted for it. Even though I know from experience that this never happens, I do not learn from this experience. I continue to ignore the non-causal, probabilistic evidence based on many similar projects in the past, and to estimate completion dates that I hardly ever meet. (Preparation of this book took twice as long as I had anticipated. These biases are, indeed, difficult to avoid!)

Chapter 13

Hindsight Biases in Evaluation of Intelligence Reporting

Evaluations of intelligence analysis–analysts’ own evaluations of their judgments as well as others’ evaluations of intelligence products–are distorted by systematic biases. As a result, analysts overestimate the quality of their analytical performance, and others underestimate the value and quality of their efforts. These biases are not simply the product of self-interest and lack of objectivity. They stem from the nature of human mental processes and are difficult and perhaps impossible to overcome.

Hindsight biases influence the evaluation of intelligence reporting in three ways:

  • Analysts normally overestimate the accuracy of their past judgments.
  • Intelligence consumers normally underestimate how much they learned from intelligence reports.
  • Overseers of intelligence production who conduct postmortem analyses of an intelligence failure normally judge that events were more readily foreseeable than was in fact the case.

The analyst, consumer, and overseer evaluating analytical performance all have one thing in common. They are exercising hindsight. They take their current state of knowledge and compare it with what they or others did or could or should have known before the current knowledge was received. This is in sharp contrast with intelligence estimation, which is an exercise in foresight, and it is the difference between these two modes of thought–hindsight and foresight–that seems to be a source of bias.

An analyst’s intelligence judgments are not as good as analysts think they are, or as bad as others seem to believe. Because the biases generally cannot be overcome, they would appear to be facts of life that analysts need to take into account in evaluating their own performance and in determining what evaluations to expect from others. This suggests the need for a more systematic effort to:

  • Define what should be expected from intelligence analysts.
  • Develop an institutionalized procedure for comparing intelligence judgments and estimates with actual outcomes.
  • Measure how well analysts live up to the defined expectations.

The discussion now turns to the experimental evidence demonstrating these biases from the perspective of the analyst, consumer, and overseer of intelligence.

The Analyst’s Perspective

Analysts interested in improving their own performance need to evaluate their past estimates in the light of subsequent developments. To do this, analysts must either remember (or be able to refer to) their past estimates or must reconstruct their past estimates on the basis of what they remember having known about the situation at the time the estimates were made.

Experimental evidence suggests a systematic tendency toward faulty memory of past estimates. That is, when events occur, people tend to overestimate the extent to which they had previously expected them to occur. And conversely, when events do not occur, people tend to underestimate the probability they had previously assigned to their occurrence. In short, events generally seem less surprising than they should on the basis of past estimates. This experimental evidence accords with analysts’ intuitive experience. Analysts rarely appear–or allow themselves to appear–very surprised by the course of events they are following.

The Consumer’s Perspective

When consumers of intelligence reports evaluate the quality of the intelligence product, they ask themselves the question: “How much did I learn from these reports that I did not already know?” In answering this question, there is a consistent tendency for most people to underestimate the contribution made by new information.

People tend to underestimate both how much they learn from new information and the extent to which new information permits them to make correct judgments with greater confidence. To the extent that intelligence consumers manifest these same biases, they will tend to underrate the value to them of intelligence reporting.

The Overseer’s Perspective

An overseer, as the term is used here, is one who investigates intelligence performance by conducting a postmortem examination of a high-profile intelligence failure.

Such investigations are carried out by Congress, the Intelligence Community staff, and CIA or DI management. For those outside the executive branch who do not regularly read the intelligence product, this sort of retrospective evaluation of known intelligence failures is a principal basis for judgments about the quality of intelligence analysis.

A fundamental question posed in any postmortem investigation of intelligence failure is this: Given the information that was available at the time, should analysts have been able to foresee what was going to happen? Unbiased evaluation of intelligence performance depends upon the ability to provide an unbiased answer to this question.

The experiments reported in the following paragraphs tested the hypotheses that knowledge of an outcome increases the perceived inevitability of that outcome, and that people who are informed of the outcome are largely unaware that this information has changed their perceptions in this manner.

An average of all estimated outcomes in six sub-experiments (a total of 2,188 estimates by 547 subjects) indicates that the knowledge or belief that one of four possible outcomes has occurred approximately doubles the perceived probability of that outcome as judged with hindsight as compared with foresight.

The fact that outcome knowledge automatically restructures a person’s judgments about the relevance of available data is probably one reason it is so difficult to reconstruct how our thought processes were or would have been without this outcome knowledge.

These results indicate that overseers conducting postmortem evaluations of what analysts should have been able to foresee, given the available information, will tend to perceive the outcome of that situation as having been more predictable than was, in fact, the case. Because they are unable to reconstruct a state of mind that views the situation only with foresight, not hindsight, overseers will tend to be more critical of intelligence performance than is warranted.

Discussion of Experiments

Experiments that demonstrated these biases and their resistance to corrective action were conducted as part of a research program in decision analysis funded by the Defense Advanced Research Projects Agency. Unfortunately, the experimental subjects were students, not members of the Intelligence Community. There is, nonetheless, reason to believe the results can be generalized to apply to the Intelligence Community. The experiments deal with basic human mental processes, and the results do seem consistent with personal experience in the Intelligence Community. In similar kinds of psychological tests, in which experts, including intelligence analysts, were used as test subjects, the experts showed the same pattern of responses as students.

One would expect the biases to be even greater in foreign affairs professionals whose careers and self-esteem depend upon the presumed accuracy of their judgments.

Can We Overcome These Biases?

Analysts tend to blame biased evaluations of intelligence performance at best on ignorance and at worst on self-interest and lack of objectivity. Both these factors may also be at work, but the experiments suggest the nature of human mental processes is also a principal culprit. This is a more intractable cause than either ignorance or lack of objectivity.

In these experimental situations the biases were highly resistant to efforts to overcome them. Subjects were instructed to make estimates as if they did not already know the answer, but they were unable to do so. One set of test subjects was briefed specifically on the bias, citing the results of previous experiments. This group was instructed to try to compensate for the bias, but it was unable to do so. Despite maximum information and the best of intentions, the bias persisted.

This intractability suggests the bias does indeed have its roots in the nature of our mental processes. Analysts who try to recall a previous estimate after learning the actual outcome of events, consumers who think about how much a report has added to their knowledge, and overseers who judge whether analysts should have been able to avoid an intelligence failure, all have one thing in common. They are engaged in a mental process involving hindsight. They are trying to erase the impact of knowledge, so as to remember, reconstruct, or imagine the uncertainties they had or would have had about a subject prior to receipt of more or less definitive information.

There is one procedure that may help to overcome these biases: posing such questions as the following. Analysts should ask themselves, “If the opposite outcome had occurred, would I have been surprised?” Consumers should ask, “If this report had told me the opposite, would I have believed it?” And overseers should ask, “If the opposite outcome had occurred, would it have been predictable given the information that was available?” These questions may help one recall or reconstruct the uncertainty that existed prior to learning the content of a report or the outcome of a situation.

PART IV—CONCLUSIONS

Chapter 14

Improving Intelligence Analysis

This chapter offers a checklist for analysts–a summary of tips on how to navigate the minefield of problems identified in previous chapters. It also identifies steps that managers of intelligence analysis can take to help create an environment in which analytical excellence can flourish.

Checklist for Analysts

This checklist for analysts summarizes guidelines for maneuvering through the minefields encountered while proceeding through the analytical process. Following the guidelines will help analysts protect themselves from avoidable error and improve their chances of making the right calls. The discussion is organized around six key steps in the analytical process: defining the problem, generating hypotheses, collecting information, evaluating hypotheses, selecting the most likely hypothesis, and the ongoing monitoring of new information.

Defining the Problem

Start out by making certain you are asking–or being asked–the right questions. Do not hesitate to go back up the chain of command with a suggestion for doing something a little different from what was asked for. The policymaker who originated the requirement may not have thought through his or her needs, or the requirement may be somewhat garbled as it passes down through several echelons of management.

Generating Hypotheses

Identify all the plausible hypotheses that need to be considered. Make a list of as many ideas as possible by consulting colleagues and outside experts. Do this in a brainstorming mode, suspending judgment for as long as possible until all the ideas are out on the table.

At this stage, do not screen out reasonable hypotheses only because there is no evidence to support them. This applies in particular to the deception hypothesis. If another country is concealing its intent through denial and deception, you should probably not expect to see evidence of it without completing a very careful analysis of this possibility. The deception hypothesis and other plausible hypotheses for which there may be no immediate evidence should be carried forward to the next stage of analysis until they can be carefully considered and, if appropriate, rejected with good cause.

Collecting Information

Relying only on information that is automatically delivered to you will probably not solve all your analytical problems. To do the job right, it will probably be necessary to look elsewhere and dig for more information. Contact with the collectors, other Directorate of Operations personnel, or first-cut analysts often yields additional information. Also check academic specialists, foreign newspapers, and specialized journals.

Collect information to evaluate all the reasonable hypotheses, not just the one that seems most likely. Exploring alternative hypotheses that have not been seriously considered before often leads an analyst into unexpected and unfamiliar territory. For example, evaluating the possibility of deception requires evaluating another country’s or group’s motives, opportunities, and means for denial and deception. This, in turn, may require understanding the strengths and weaknesses of US human and technical collection capabilities.

It is important to suspend judgment while information is being assembled on each of the hypotheses. It is easy to form impressions about a hypothesis on the basis of very little information, but hard to change an impression once it has taken root. If you find yourself thinking you already know the answer, ask yourself what would cause you to change your mind; then look for that information.

Try to develop alternative hypotheses in order to determine whether some alternative, when given a fair chance, might be as compelling as your own preconceived view. Systematic development of an alternative hypothesis usually increases the perceived likelihood of that hypothesis. “A willingness to play with material from different angles and in the context of unpopular as well as popular hypotheses is an essential ingredient of a good detective, whether the end is the solution of a crime or an intelligence estimate.”

Evaluating Hypotheses

Do not be misled by the fact that so much evidence supports your preconceived idea of which is the most likely hypothesis. That same evidence may be consistent with several different hypotheses. Focus on developing arguments against each hypothesis rather than trying to confirm hypotheses. In other words, pay particular attention to evidence or assumptions that suggest one or more hypotheses are less likely than the others.

Assumptions are fine as long as they are made explicit in your analysis and you analyze the sensitivity of your conclusions to those assumptions. Ask yourself, would different assumptions lead to a different interpretation of the evidence and different conclusions?

Do not assume that every foreign government action is based on a rational decision in pursuit of identified goals. Recognize that government actions are sometimes best explained as a product of bargaining among semi-independent bureaucratic entities, following standard operating procedures under inappropriate circumstances, unintended consequences, failure to follow orders, confusion, accident, or coincidence.

Selecting the Most Likely Hypothesis

Proceed by trying to reject hypotheses rather than confirm them. The most likely hypothesis is usually the one with the least evidence against it, not the one with the most evidence for it.

In presenting your conclusions, note all the reasonable hypotheses that were considered.

Ongoing Monitoring

In a rapidly changing, probabilistic world, analytical conclusions are always tentative. The situation may change, or it may remain unchanged while you receive new information that alters your understanding of it. Specify things to look for that, if observed, would suggest a significant change in the probabilities.

Pay particular attention to any feeling of surprise when new information does not fit your prior understanding. Consider whether this surprising information is consistent with an alternative hypothesis. A surprise or two, however small, may be the first clue that your understanding of what is happening requires some adjustment, is at best incomplete, or may be quite wrong.

Management of Analysis

The cognitive problems described in this book have implications for the management as well as the conduct of intelligence analysis. This concluding section looks at what managers of intelligence analysis can do to help create an organizational environment in which analytical excellence flourishes. These measures fall into four general categories: research, training, exposure to alternative mind-sets, and guiding analytical products.

Support for Research

Management should support research to gain a better understanding of the cognitive processes involved in making intelligence judgments. There is a need for better understanding of the thinking skills involved in intelligence analysis, how to test job applicants for these skills, and how to train analysts to improve these skills. Analysts also need a fuller understanding of how cognitive limitations affect intelligence analysis and how to minimize their impact. They need simple tools and techniques to help protect themselves from avoidable error. There is so much research to be done that it is difficult to know where to start.

Training

Most training of intelligence analysts is focused on organizational procedures, writing style, and methodological techniques. Analysts who write clearly are assumed to be thinking clearly. Yet it is quite possible to follow a faulty analytical process and write a clear and persuasive argument in support of an erroneous judgment.

More training time should be devoted to the thinking and reasoning processes involved in making intelligence judgments, and to the tools of the trade that are available to alleviate or compensate for the known cognitive problems encountered in analysis. This book is intended to support such training.

It would be worthwhile to consider how an analytical coaching staff might be formed to mentor new analysts or consult with analysts working particularly difficult issues. One possible model is the SCORE organization that exists in many communities. SCORE stands for Service Corps of Retired Executives. It is a national organization of retired executives who volunteer their time to counsel young entrepreneurs starting their own businesses.

New analysts could be required to read a specified set of books or articles relating to analysis, and to attend a half-day meeting once a month to discuss the reading and other experiences related to their development as analysts. A comparable voluntary program could be conducted for experienced analysts. This would help make analysts more conscious of the procedures they use in doing analysis. In addition to their educational value, the required readings and discussion would give analysts a common experience and vocabulary for communicating with each other, and with management, about the problems of doing analysis.

My suggestions for writings that would qualify for a mandatory reading program include: Robert Jervis’ Perception and Misperception in International Politics (Princeton University Press, 1977); Graham Allison’s Essence of Decision: Explaining the Cuban Missile Crisis (Little, Brown, 1971); Ernest May’s “Lessons” of the Past: The Use and Misuse of History in American Foreign Policy (Oxford University Press, 1973); Ephraim Kam’s Surprise Attack (Harvard University Press, 1988); Richard Betts’ “Analysis, War and Decision: Why Intelligence Failures Are Inevitable,” World Politics, Vol. 31, No. 1 (October 1978); Thomas Kuhn’s The Structure of Scientific Revolutions (University of Chicago Press, 1970); and Robin Hogarth’s Judgement and Choice (John Wiley, 1980). Although these were all written many years ago, they are classics of permanent value. Current analysts will doubtless have other works to recommend. CIA and Intelligence Community postmortem analyses of intelligence failure should also be part of the reading program.

To encourage learning from experience, even in the absence of a high-profile failure, management should require more frequent and systematic retrospective evaluation of analytical performance. One ought not generalize from any single instance of a correct or incorrect judgment, but a series of related judgments that are, or are not, borne out by subsequent events can reveal the accuracy or inaccuracy of the analyst’s mental model. Obtaining systematic feedback on the accuracy of past judgments is frequently difficult or impossible, especially in the political intelligence field. Political judgments are normally couched in imprecise terms and are generally conditional upon other developments. Even in retrospect, there are no objective criteria for evaluating the accuracy of most political intelligence judgments as they are presently written.

In the economic and military fields, however, where estimates are frequently concerned with numerical quantities, systematic feedback on analytical performance is feasible. Retrospective evaluation should be standard procedure in those fields in which estimates are routinely updated at periodic intervals. The goal of learning from retrospective evaluation is achieved, however, only if it is accomplished as part of an objective search for improved understanding, not to identify scapegoats or assess blame. This requirement suggests that retrospective evaluation should be done routinely within the organizational unit that prepared the report, even at the cost of some loss of objectivity.
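Where estimates are numerical, this kind of feedback can be made concrete with very little machinery. The sketch below (in Python) is illustrative only; the estimate series, field names, and metrics are invented rather than drawn from any actual Community procedure, but it shows how a series of related judgments can be scored for persistent bias as well as average error, so that a skew in the analyst’s mental model becomes visible.

```python
# Minimal sketch of retrospective evaluation for periodically updated
# numerical estimates (all figures hypothetical).

def evaluate_estimates(records):
    """records: list of (period, estimated, actual) tuples."""
    errors = [est - act for _, est, act in records]
    n = len(errors)
    bias = sum(errors) / n                 # persistent over/underestimation
    mae = sum(abs(e) for e in errors) / n  # average size of the miss
    return bias, mae

# A series of related judgments, e.g. quarterly output estimates.
history = [
    ("2023Q1", 110, 100),
    ("2023Q2", 118, 104),
    ("2023Q3", 121, 109),
    ("2023Q4", 125, 112),
]

bias, mae = evaluate_estimates(history)
print(f"mean error (bias): {bias:+.1f}")  # consistently positive -> overestimation
print(f"mean absolute error: {mae:.1f}")
```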

Exposure to Alternative Mind-Sets

The realities of bureaucratic life produce strong pressures for conformity. Management needs to make conscious efforts to ensure that well-reasoned competing views have the opportunity to surface within the Intelligence Community. Analysts need to enjoy a sense of security, so that partially developed new ideas may be expressed and bounced off others as sounding boards with minimal fear of criticism for deviating from established orthodoxy.

Intelligence analysts have often spent less time living in and absorbing the culture of the countries they are working on than outside experts on those countries. If analysts fail to understand the foreign culture, they will not see issues as the foreign government sees them. Instead, they may be inclined to mirror-image–that is, to assume that the other country’s leaders think like we do. The analyst assumes that the other country will do what we would do if we were in their shoes.

Mirror-imaging is a common source of analytical error.

Pre-publication review of analytical reports offers another opportunity to bring alternative perspectives to bear on an issue. Review procedures should explicitly question the mental model employed by the analyst in searching for and examining evidence. What assumptions has the analyst made that are not discussed in the draft itself, but that underlie the principal judgments? What alternative hypotheses have been considered but rejected, and for what reason? What could cause the analyst to change his or her mind?

Ideally, the review process should include analysts from other areas who are not specialists in the subject matter of the report. Analysts within the same branch or division often share a similar mind-set. Past experience with review by analysts from other divisions or offices indicates that critical thinkers whose expertise is in other areas make a significant contribution. They often see things or ask questions that the author has not seen or asked. Because they are not so absorbed in the substance, they are better able to identify the assumptions and assess the argumentation, internal consistency, logic, and relationship of the evidence to the conclusion. The reviewers also profit from the experience by learning standards for good analysis that are independent of the subject matter of the analysis.

Guiding Analytical Products

On key issues, management should reject most single-outcome analysis–that is, the single-minded focus on what the analyst believes is probably happening or most likely will happen.

One guideline for identifying unlikely events that merit the specific allocation of resources is to ask the following question: Are the chances of this happening, however small, sufficient that if policymakers fully understood the risks, they might want to make contingency plans or take some form of preventive or preemptive action? If the answer is yes, resources should be committed to analyze even what appears to be an unlikely outcome.
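Read literally, the guideline is an expected-consequences test: a small probability still merits analytic resources when the potential cost of surprise is large. A minimal illustration of that arithmetic follows; the probabilities, costs, and function name are invented for the example and are not from the text.

```python
# Hypothetical illustration of the low-probability / high-impact test.

def warrants_contingency_analysis(p_event, consequence_cost, planning_cost):
    """Commit resources if the expected loss from being surprised
    exceeds the cost of contingency planning."""
    return p_event * consequence_cost > planning_cost

# A 5% chance of an event costing 1,000 units, versus 20 units of planning effort.
print(warrants_contingency_analysis(0.05, 1000, 20))  # True: 50 > 20
```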

Finally, management should educate consumers concerning the limitations as well as the capabilities of intelligence analysis and should define a set of realistic expectations as a standard against which to judge analytical performance.

The Bottom Line

Analysis can be improved! None of the measures discussed in this book will guarantee that accurate conclusions will be drawn from the incomplete and ambiguous information that intelligence analysts typically work with. Occasional intelligence failures must be expected. Collectively, however, the measures discussed here can certainly improve the odds in the analysts’ favor.

Review of American Marxism

Mark R. Levin is a former advisor to several members of President Ronald Reagan’s cabinet, chairman of the Landmark Legal Foundation, and a syndicated commentator on television and radio. His stated goal for writing American Marxism is to examine the history of Marxist activist networks operating in the United States. Having sold over one million copies since its publication in 2021, it is likely the most widely read of all “history” books on Marxist political groups in the United States. This is unfortunate. While Levin’s analysis of discourse is skillful and his knowledge of the political efforts led by the Communist Party is evident, the book largely fails to achieve the goal of a distinctly American network analysis and historiography of American Marxists. I’ll alternate my commentary on what’s praiseworthy and what’s blameworthy.

Levin deftly highlights how different movements, such as degrowth, environmental justice, racial justice, and gender justice, are political projects that are frequently directed by self-avowed or crypto-Marxists whose claims justifying the necessity of political change have no social/scientific validity and whose demands are just thinly coded Marxism.

In chapter after chapter Levin shows how the zero-growth movement is anti-capitalism in different language and how the Critical Race Theory discourse of authors such as Ibram X. Kendi and Robin DiAngelo is a variant of Marxism that uses racial language. He pulls quotes from numerous authors – Laura Miles, a contributor to Social Review magazine and author of Transgender Resistance: Socialism and the Fight for Trans Liberation; George Reisman, Professor of Economics at Pepperdine University and author of Capitalism; and David Pellow, professor of Environmental Justice at the University of California and author of the essay What is Critical Environmental Justice? – to highlight how the ‘trans liberation’ and ‘economic justice’ movements and the degrowth wing of the ‘environmental justice’ movement are really socialism.

He also cites research articles, such as one showing the meteoric rise in the frequency with which “white privilege” and “racial privilege” have appeared in the New York Times and the Washington Post – 1200% and 1500% respectively – along with case studies by the Media Research Center and highly publicized events such as the suppression of Project Veritas’ Twitter account and the Hunter Biden laptop story, to demonstrate mainstream media’s role in promoting conspiracy theories (i.e., that there is no effort in Big Tech to limit speech and that the story about Hunter Biden’s laptop is Russian disinformation) in the name of discouraging conspiracy theories.

Levin deftly shows how legislation linked to the Green New Deal is an effort at refashioning the individual-rights framework of the U.S. Constitution into a collectivist definition of rights; how reparations legislation legitimizes racial discrimination that undermines the meritocratic structure of social, educational, and financial institutions; how the legislative movement to “protect” people from online “hate speech” is a Trojan horse for massive government intervention in the free-speech arena; how colleges are increasingly locations for political recruitment; etc.

This is all to say that there is a lot of good research here. And yet, despite this, the book feels to me as if themes and topics were collected, examples were chosen to cover them, and little thought was given to explicating the connections between them. One missed opportunity: several years before the current “attack” on standardized testing, a similar process was ongoing in Venezuela; several of the books now promoted in the teaching of “math justice” cite attendance at the World Social Forum or cite Communists such as Angela Davis as their inspiration; and high-ranking members of unions in Chicago, Los Angeles, Boston, Texas, etc. have all gone to Venezuela for consultations with members of its government. Tracing such connections would allow Levin to maintain the frame within which he operates – the people pursuing the above agendas are indeed utopians, AND the fire which sustains their actions was galvanized in part by foreigners with their own political ends. In other words, establishing the linkages to foreign inspiration gives us cause to assess whether or not what we see there is truly all that venerable and, if not, to highlight how those within borders X must even more fervently resist the effort of internal groups to change conditions so they approximate those in borders Y.

One example of this: his returning focus to the thoughts and actions of Marx, Mao, Marcuse, and Stalin, rather than detailing the contemporary activities of the Socialist sections of various academic professional organizations, their linkages to international networks – such as the Cuban-developed Network of Artists and Intellectuals in Defense of Humanity – or their indicators – such as the number of socialist philosophers cited within academic literature or the number of self-avowedly socialist presidents of professional academic organizations – means that he fails to capture a full picture of their undertakings. Furthermore, examples depicting how these groups have cooperated organizationally within recent history are not examined; the few links to foreign governments he highlights involve events that are sensationalist but of minor importance; and the funds on which these groups rely to operate are only slightly analyzed.

To summarize, Levin foregoes extensive engagement with Marxist sociology – the examination of the histories and primary texts of actually existing parties and movements – and instead focuses on texts claimed as seminal to these movements and on events that occurred in other countries. While knowledge of “what happened elsewhere” is clearly useful for developing insight into the U.S. groups, there is no formal methodology shown to be informing his particular mode of investigation.

American Marxism is definitely worth reading for an overview of the deceptive practices of various justice movements – which are largely wings of a popular front – and yet the definitive book about U.S. Socialist movements since the collapse of the Soviet Union has yet to be written.

Notes from Bringing Intelligence About: Practitioners Reflect on Best Practices

Bringing Intelligence About: Practitioners Reflect on Best Practices

Russell G. Swenson, Editor

With a Foreword by Mark M. Lowenthal, Assistant Director of Central Intelligence

INTRODUCTION

Russell G. Swenson with David T. Moore and Lisa Krizan

This book is the product of studious self-reflection by currently serving intelligence professionals, as well as by those who are in a position, with recent experience and continuing contacts, to influence the development of succeeding generations of intelligence personnel. Contributors to this book represent eight of the fourteen organizations that make up the National Foreign Intelligence Community. A positive image of a community of professionals, engaged in public service, and concerned about continuous self-improvement through “best practices,” emerges from these pages.

Community partners, such as the Central Intelligence Agency (CIA), the Defense Intelligence Agency (DIA), the National Security Agency (NSA), and the State Department’s Bureau of Intelligence and Research (INR), share responsibilities for national security issues that allow individual collectors, analysts, issue managers and offices to work together on interagency task forces.

with its strategic focus, the Intelligence Community expects to be forward-looking, envisioning future developments and their repercussions, whereas law enforcement intelligence efforts have typically focused on exploiting pattern analysis to link together the extralegal behavior of individuals and organizations with clear and legally acceptable evidence.

a facile claim of significant differences between law enforcement and national security intelligence may hold up to scrutiny only in terms of the scale of operations supported rather than professional intelligence techniques employed.  We may infer from these observations that the principles of intelligence collection and analysis addressed in this book will apply to intelligence creation in the broadly overlapping cultures of law enforcement and national security intelligence.

The U.S. Intelligence Community was subject during the 1990s to a congressionally mandated reduction in Intelligence Community personnel levels.2 This reduction occurred despite numerous small wars and the continuation of international criminal activity during the decade. When dissenters, such as former Director of Central Intelligence James Woolsey, “talked about the proliferators, traffickers, terrorists, and rogue states as the serpents that came in the wake of the slain Soviet dragon, [they were] accused of ‘creating threats’ to justify an inflated intelligence budget.”3 Even government reports such as that of the United States Commission on National Security (commonly referred to as the Hart-Rudman Report), which warned of catastrophic attacks against the American homeland and a need for vigilance, were dismissed.4

even though collection methods are often arcane, methods of analysis are not very esoteric. Analytic methods used by intelligence analysts are readily available to specialists in the academic world.6 The commonalities that do exist among collectors and analysts across the Community have rarely been noted in intelligence literature. The essays in this book will help fill that gap, and should illuminate for non-specialists the important role of self-reflection among intelligence professionals who remain in government service.

6 Many if not most analysts have been exposed, by training or experimentation, to such techniques as link analysis, the Delphi technique, and analysis of competing hypotheses. Morgan D. Jones, former CIA analyst, has distilled the less structured techniques that intelligence analysts may employ in The Thinker’s Toolkit: 14 Powerful Techniques for Problem Solving (New York: Times Business, Random House, 1995 and 1998).
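Of the techniques named in that note, analysis of competing hypotheses is the most mechanical, and its core idea (rank hypotheses by how much evidence contradicts them, not by how much supports them) fits in a toy sketch. The matrix, hypotheses, and scores below are invented for illustration and are not drawn from the text.

```python
# Toy analysis-of-competing-hypotheses (ACH) matrix (invented data).
# Each cell records whether a piece of evidence is consistent (+1),
# neutral (0), or inconsistent (-1) with a hypothesis. ACH ranks
# hypotheses by how little evidence contradicts them.

evidence = ["troop movement", "public denial", "supply buildup"]
matrix = {
    "H1: exercise only": [+1, +1, -1],
    "H2: preparing attack": [+1, 0, +1],
}

def inconsistency(scores):
    """Count the items of evidence that contradict a hypothesis."""
    return sum(1 for s in scores if s < 0)

best = min(matrix, key=lambda h: inconsistency(matrix[h]))
for h, scores in matrix.items():
    print(h, "- inconsistent items:", inconsistency(scores))
print("least contradicted:", best)
```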

It may now be true that the value of intelligence to consumers is more dependent on the evaluation of information (grappling with mysteries) than on discovering “secrets.”7 If so, then the evaluation of social trends in various regions might best begin with systematic exploitation of authentic or “grass-roots” reporting from newspapers and other mass media.

language capabilities are indispensable for any country’s intelligence personnel who seek insights through indigenous mass media. Language capabilities must mirror those tongues used across the electronic media that represent the target entities.

Garin defines the intelligence corporation in terms of a “learning organization” and then applies external standards from the Baldrige National Quality Program to selected intelligence-producing offices within the Defense Intelligence Agency. This benchmarking study not only identifies best practices, but also shows how such professional standards could be used to identify exemplary offices or individuals across the entire Intelligence Community.

If a communitarian ethos distinguishes intelligence professionals from their more individualistic and self-absorbed brethren in academia, then self-reflection among intelligence practitioners can also easily become a communal good. Tension between a communitarian and individualistic ethos can resolve itself among intelligence professionals through the strength of their bureaucratic (Weberian), nonmonastic tradition. The essays in this volume illustrate how, through self-reflection, that tension may be resolved. For example, individual professionals can easily spell out connections among these essays that would quickly move the discussion to a classified realm—into their “culture.”

That culture is typically characterized by fast-moving events and requirements that preclude introspection about the phenomena of intelligence collection and production.

Self-reflection not only allows the various agency sub-cultures to be displayed, as portrayed here, but also allows “insiders” to realize the subtle connections of their individual work to the overall enterprise. As a further illustration of this principle, the intense intellectual effort that characterized earlier eras of intelligence production and that continues as a part of the enduring culture, is evoked by the observations of William Millward, a World War II intelligence analyst at the UK’s Bletchley Park:

[Analysis] means reviewing the known facts, sorting out significant from insignificant, assessing them severally and jointly, and arriving at a conclusion by the exercise of judgment: part induction, part deduction. Absolute intellectual honesty is essential. The process must not be muddied by emotion or prejudice, nor by a desire to please.9

National intelligence collection management and intelligence analysis remain inherently government functions, and privatized intelligence—with its prospect of reduced congressional oversight—is even more antagonistic to the communal sharing of information than are the more stringently overseen bureaucratic fiefdoms.

to “bring intelligence about” from the point of view of the American people requires peeling back some of the thick mantle of secrecy that has shrouded individual initiatives and management approaches—Community best practices—employed in the execution of ordinary and extraordinary tasks. Readers who look closely at the observations set down by the authors here will find a serviceable tool for unwrapping some of the otherwise enigmatic enthusiasms and motivations of government intelligence professionals.

THE INTELLIGENCE PRO AND THE PROFESSOR: TOWARD AN ALCHEMY OF APPLIED ARTS AND SCIENCES

Pauletta Otis

Recent events have led to an increasing realization that all of the resources of the American public must be engaged in support of national security. The U.S. academic community and the U.S. National Foreign Intelligence Community are institutional managers of “information as power.” Yet their potential for useful cooperation and collaboration is unrealized and the relationship between the communities continues to be somewhat strained. This relationship has been the topic of discussions for many years in a somewhat benign security environment. With dramatic changes in that environment in late 2001, maximizing resources through enhanced cooperation and collaboration assumes critical importance.

From the academic’s point of view, the U.S. Government’s stability and its concern for the common welfare derive from the First Amendment. If a stable democracy depends on an informed public, the public cannot make good decisions without accurate and timely information. Without the “self-correcting” mechanism of free speech, a government tends to be subject to centrifugal forces, to spin on its own axis, and to become an entity quite separate from the citizenry.

Yet, on a practical level, we need to “keep secrets” from U.S. enemies, however they may be defined. A certain level of secrecy is required to protect national interests. How much secrecy is “best” in a democracy is subject to ongoing debate.11 The pendulum swings over time and in response to perceived threats to national security.

COMMONALITIES

Both the IC and the academic community work in the world of word-power. Both are populated by dedicated and committed Americans. In most cases, cooperative efforts have yielded the “best” for the United States in terms of public policy and decisionmaking. Yet, at other times, the communities have been at serious odds. The ensuing problems have been of significant concern to the attentive public and detrimental to the common good. These problems emanate not only from the theory of democracy but from competition over the nature of truth and value for the country. They can also be traced to concern about the nature and processes of intelligence gathering.

The IC and the academic community have much in common: both realize that information is power and that power comes in the form of words. These words form the basis of cultural assumptions, common concepts, and grand theory. In terms of information handling minutiae, intelligence specialists and academic scholars do much the same thing. They research, write, explain, and predict with excruciating self-consciousness and realization of the power of words. Intelligence professionals and academics contribute to public discussion in both oral forums and in written format. Intelligence professionals and academics are valued and participant citizens: They attend common religious services, participate in the education of children, contribute to community service organizations, and talk to neighbors. At this level, both are truly in the same “game.” In the public domain, both intelligence professionals and academics contribute information, analysis, and opinions. These are read and integrated by the public, by policymakers, and by the “other” community. This common body of public information and analysis is at the core of intelligent decisionmaking in the United States.

For 50-plus years, relations between the IC and academia have wavered between cooperation and competition. The basic tension inherent in keeping “secrets in a democracy” has been played out in specific activities and attitudes.

Between 1939 and 1961, Yale University was overtly supportive of the IC.15 Students and faculty were participants in intelligence collection and analysis. Study centers were financially supported and professional journals published. Information was collected and archived (including the Human Relations Area Files). The money was channeled to individuals or to institutions but seemed to raise few concerns.16 At the political level, the existence of a “community” of intelligence-related agencies and offices has been noted in legislation, suggesting wide acceptance of the idea of government intelligence operations as regularized institutions with special taskings in support of U.S. national security.17

CONTINUITY, CHANGE, AND THE DEVELOPMENT OF DIFFERENCES

The “position descriptions” of an intelligence specialist and of an academic are polar opposites: The basic job of an academic is to find information, collect factoids whether they are useful or not, and scatter the information to the winds, hoping and praying to make more sense of the ensuing chaos. Intelligence specialists collect similar factoids for very specific productive purposes. Whereas the academic is spreading information, the intelligence professional is collecting, narrowing, and refining data to produce reports, even if at the same time continuing an academic interest in the subject. The academic may learn for the fun of it; the intelligence professional prefers, or learns, to actually do something with the information. In the long run, the academic must write, produce, and contribute to the body of professional literature, but there is not the pressure for immediate production that the intelligence professional must face. The academic can say “that is interesting”; the intelligence professional must certify that “it is important.”

During the Vietnam era, the question was raised in another context: Information concerning populations in Laos, Cambodia, and Vietnam, collected by anthropologists, was subsequently used in support of the war effort.

during the Cold War, the strain between academics and the IC contributed to a number of very ugly scenes. Distinctions were made between those academics who were “patriotic, loyal, and anti-communist,” and those who challenged authority in any context. Simple questioning became a sign of disloyalty.

Administrative penalties for individual scholars included blacklisting, failure to get tenure, lack of research support, and even dismissal from teaching positions. The public presentation of “truth” was simple: teaching loyalty to the country was the real job of the teacher/academic. Anything contrary was a serious and significant challenge to the future of the nation. And, if the public supported the university both financially and with its “children,” it in turn should be able to expect loyalty on the part of its faculty employees.

Both the intelligence professional and the academic are, to some extent, prisoners of their own bureaucracies and are habitually resistant to change.

Each community has a body of common knowledge which it assumes everyone working within the institutional framework knows and accepts. Because common knowledge is “common,” it is not self-conscious and most individuals do not even realize that basic assumptions are seldom challenged. For example: the academic community assumes that books are good and thus libraries should have them. Individuals in the IC often assume that books are inherently historical and that current information and intelligence must be “near-real-time” and continually refreshed. There is a distinct attitude of arrogance when these individuals discuss the currency of information with academics. What is not at issue, of course, is that the intelligence professional has to produce and is charged with an indications and warning responsibility mandating inclusion of current information.

Academics are known for jargon and obfuscation; the IC is known for producing acronyms seemingly at leisure. The tribal languages are difficult to learn without a willing translator.

Academics are notoriously fractious and democratic. Anyone can challenge anyone on any idea at any time–and do so. Sometimes it is not very polite or courteous even when the comment begins with: “I understand your premise, but have you considered…?” Bloodletting is the rule rather than the exception. Members of the IC appear to be more polite although “outsiders” may assume that they derive enjoyment from more discreet disagreement. The various forms of interaction are notable when individuals from the IC quietly participate in sections of the International Studies Association or the Political Science Association, or when academics vociferously contribute to conferences held under the auspices of the IC.

There is further the problem of “group-think” on both sides. Each assumes not infrequently that they have a lock on reality and often simply refuse to be open-minded. This occurred during the Vietnam era when the IC felt itself misunderstood and abused, and the academic community believed that its information and personnel were being used by intelligence agencies in ways of which academe did not approve. The complexity of the story is well-known but the mythology remains extant.19 Part of the mythology is that intelligence agencies will collect information from academics but not return or share information in a collegial manner.

The academic community believes that it is not afflicted with the IC’s tunnel vision and can contribute to lateral thinking that supports enhanced problem-solving capabilities.

THE ACADEMIC’S DECISION

The motivations for an academic to work for the IC are, of course, individualistic and mixed. He or she may simply want a “steady” job that pays well–even if only working over the summer break. There can also be an ego boost when intelligence experts value the information and assessment done by an academic in a particular field. The value attached to being “appreciated” for the information and analysis cannot be overstated as there is very little of that inherent in the academic community.

A number of problems may emerge if an academic goes to work for the IC, becomes a contractor, or produces information that can be used by that community. He may “go native,” that is, become more “intelligence-centric” than the IC needs or desires, all in an attempt to “belong.” There is a tendency for self-delusion and thinking that the information contributed is going to “make a difference.” Sometimes it does; sometimes it is just part of the package provided for a specific tasking. The academic really has no way of knowing what impact his or her work is having.

There are special areas in which the academic can contribute to the IC. A primary contribution is in the collection of facts and information using sources that the IC is precluded from using. Academics can generally travel more easily than intelligence analysts and can establish human networks across international boundaries that can be very useful especially in times of crisis. Academics generally are trained in the development of analytical frameworks and employ them with a degree of ease and clarity. Many academics contribute a unique ability to critique or evaluate projects based on a wide reading of the literature combined with “on the ground” experience. The good academic is always a perverse thinker–looking to examine the null hypothesis or suggesting alternative explanations. They tend not to accept the “generally accepted explanation.” This habit can make them a bit difficult as conversationalists especially if “critique” is translated as “criticism.” A good academic scholar will insist on “rigor,” as in following the requirement to test basic assumptions, validate criteria used in hypothesis building, monitor the internal validity of hypotheses, and pinpoint the public implications of certain theories. Academic rigor mandates “elegance”; that is, a hypothesis must be important, clear, and predictive. Even in descriptive analysis, an academic insists on looking at a range of possible alternative choices and at the internal coherence of the final product.

There is another caveat: most academics believe that intelligence professionals are better paid and have more job security for doing similar work.

The ultimate caveat: it is hard to explain to professional colleagues that “I am not a spy.” The mystique outlives the reality. It can be overcome with honesty and some degree of good humor.

There are specific benefits to the IC if academics and scholars are employed wisely. One of the often-heard criticisms of the IC is that it has a special susceptibility to “group-think” because analysts seldom have internal challenges. Projects, after all, are group-produced and tend to reduce an individual’s unique contribution in favor of the final product. The academic, simply by the nature of the job, challenges the norms, supports individual initiative and creativity, and values novel or unconventional thinking.

SUGGESTIONS FOR MUTUAL SUPPORT AND MUTUAL BENEFIT

For all of the reasons just noted, it is important that the IC encourage the participation of academics. Academics can make significant contributions in creative thinking, rigor of analysis, and professional presentation of products.

But the choice of which academic to invite as a participant and the venue for participation are not givens. The IC might do well to choose individual academics not for their geographic convenience – proximity to the Washington or New York areas – but for the written scholarly work produced in their “home environment.”

CONCLUSION

Academics and intelligence professionals are concerned with much the same set of problems. The approaches to problem solving may differ but certainly the practices inherent in American democratic tradition have constructed the intellectual environment so as to provide common definitions of contemporary threats and challenges. Both communities agree that liberty, democracy, freedom, equality, law, and property are defensible American values. Cooperation between the IC and academics, and specific contributions that academics can make to the production of good intelligence, can and must be further supported. It would be foolish to waste available skills and talent. There is too much at stake.

VIA THE INTERNET:
NEWS AND INFORMATION FOR THE ANALYST FROM NORTH AFRICAN ELECTRONIC MEDIA

John Turner

The remarkable growth of electronic media over the past decade in Francophone North Africa provides specialists in the region with a wealth of news and other information. This survey and analysis of the French-language media of Algeria, Morocco, and Tunisia examines how these media go about providing information on North African politics, economics, and culture in a way that takes account of expatriates, who make up an important sector of the politically active population. For “outsiders” who specialize in the interpretation of the region, news media offer a “grass-roots”-based prism through which future developments may be anticipated.1 Observable trends in the North African media, when carefully documented and placed in context, validate the contention of various authors, from Robert Steele and Gregory Treverton to Arthur Hulnick, who have addressed the promises and perils of devoting a greater share of intelligence resources to the exploitation of open-source information. They have found much more promise than peril.2

MEDIA ISSUES: OBJECTIVITY AND ACCURACY

The political heritage of North Africa and its impact on the region’s media culture together raise the inevitable questions of editorial freedom, objectivity and accuracy. The ideal standard for North African publications, as in other countries, is one of adherence to strict standards of reporting and analyzing events in a climate of universal press freedom. However, “red lines’’ exist in editorial freedom, objectivity, and accuracy that are perilous to cross. North African media have suffered negative sanctions imposed by government officials as the result of reporting that has broken taboos.19

Objectivity is a concern for analysts and researchers in a media culture where most papers – not only those owned by manifestly political groups but also the independent press – are controlled by interests that have a marked political agenda. Nevertheless, in both Algeria and Morocco a system of checks and balances exists, with daily and weekly print media and their electronic counterparts in vigorous competition for the domestic and foreign market. These publications maintain high standards of journalism and are normally quite open about their particular biases or journalistic objectives. Such competition and declared interests ensure a degree of reliability in political and economic analysis that allows a high degree of confidence among business and government consumers who seek information for decisionmaking. Tunisia, with its dearth of news dailies and strong government influence in the media, remains problematic for the area specialist seeking in-depth analysis of some events, although most economic and business reporting remains of high quality and supports its citizens’ decisionmaking effectively.20

Accuracy has historically been less of a concern. A flourishing media culture and rivalry among news dailies and weeklies in Algeria and Morocco ensure a degree of competition that normally means news items are reliable; that is, consistently reported in different publications with a sufficient degree of detail and verification of factual data to make them satisfactory records of events. In such an information-rich climate, attempts by government or private parties to plant stories would be subject to immediate scrutiny and detection by the press. The latter frequently analyzes government reporting on an issue and takes it to task for various shortcomings, thereby ensuring, through the resulting reliability, that validity or accuracy is also addressed.

For observers of North African affairs, trends in the French-language electronic news media are reassuring for the near future. Area specialists and other interested professionals will have a steadily growing body of information from which they can draw for a reasonably authentic representation of social trends. If political, economic, and social information remains readily available without the geographic limitations associated with print media distribution, the confidence level in information associated with this region among government and business interests worldwide will continue to improve. The ability of North African intelligence specialists to take advantage of multiple rather than single-source information to provide analysis to decisionmakers will, however, increasingly have to take account of Arabic-language media as well as independent Internet news sites in order to ensure a more complete picture of events.

IMPROVING CIA ANALYSIS BY OVERCOMING INSTITUTIONAL OBSTACLES

Stephen Marrin

The accuracy of CIA intelligence analysis depends in part upon an individual analyst’s expertise, yet programs implemented to increase this expertise may not be sufficient to increase the accuracy of either an individual’s analysis or the institution’s output as a whole. Improving analytic accuracy by increasing the expertise of the analyst is not easy to achieve. Even if expertise development programs were to result in greater regional expertise, language capability, or an improved application of methodological tools, the production process itself still takes place within an institutional context that sets parameters for this expertise. The agency’s bureaucratic processes and structure can impede an analyst’s acquisition and application of additional expertise, preventing the full realization of the potential inherent in expertise development programs. Therefore, any new reform or program intended to improve analytic accuracy by increasing the expertise of its analysts should be supplemented with complementary reforms to bureaucratic processes — and perhaps even organizational structure — so as to increase the likelihood that individual or institutional improvement will occur.

CIA’s intelligence production may be subject to improvement, but making that a reality requires a sophisticated understanding of how an analyst operates within the Directorate of Intelligence’s [DI’s] institutional context. However, empirical verification of this hypothesis is impossible since, as Jervis notes, “[r]igorous measures of the quality of intelligence are lacking” and are insurmountably difficult to create.

this essay is a conceptual “what-if” exploration into the interplay between the acquisition and application of expertise on three levels: that of the individual, bureaucratic processes, and organizational structure.

THE GOAL: IMPROVING CIA’S ANALYSIS

Improving the accuracy of CIA’s finished intelligence products could make an immediate and direct improvement to national security policymaking as well as reduce the frequency and severity of intelligence failures.

In the author’s experience and observation, a DI analyst interprets the international environment through an information-processing methodology approximating the scientific method to convert raw intelligence data into finished analysis. The traditional “intelligence cycle” describes how an analyst integrates information collected by numerous entities and disseminates this information to policymakers. As William Colby—former Director of Central Intelligence (DCI) and veteran operations officer—notes, “at the center of the intelligence machine lies the analyst, and he is the fellow to whom all the information goes so that he can review it and think about it and determine what it means.”4 Although this model depicts the process in sequential terms, more accurately the analyst is engaged in never-ending conversations with collectors and policymakers over the status of international events and their implications for U.S. policy. As part of this process, intelligence analysts “take the usually fragmentary and inconclusive evidence gathered by the collectors and processors, study it, and write it up in short reports or long studies that meaningfully synthesize and interpret the findings,” according to intelligence scholar Loch Johnson.5

Intelligence failures of every stripe, from the trivial to the vitally important, occur every day for a variety of reasons, including the mis-prioritization of collection systems, hasty analysis, and inappropriately applied assumptions.

Administrators at the Joint Military Intelligence College note that “analysis is subject to many pitfalls — biases, stereotypes, mirror-imaging, simplistic thinking, confusion between cause and effect, bureaucratic politics, group-think, and a host of other human failings.”7 Yet most intelligence failures do not lead to direct negative consequences for the U.S. primarily because the stakes of everyday policymaking are not high, and errors in fact and interpretation can be corrected as the iterative process between intelligence and policy develops. As a result, most failures or inaccuracies are eventually corrected and usually never even noticed. However, sometimes intelligence failure is accompanied by either great policymaker surprise or serious negative consequences for U.S. national security, or both.

The CIA’s May 1998 failure to warn American policymakers of India’s intentions to test nuclear weapons is an illustration of both kinds of failure. This lapse — widely criticized by foreign policy experts and the press — highlighted intelligence limitations such as the DI’s inability to add together all indications of a possible nuclear test and warn top policymakers. According to New York Times correspondent Tim Weiner, these indicators included “the announced intentions of the new Hindu nationalist government to make nuclear weapons part of its arsenal, the published pronouncements of India’s atomic weapons commissioner, who said…that he was ready to test if political leaders gave the go-ahead, and …missile tests by Pakistan that all but dared New Delhi to respond.”8 CIA’s inability to integrate these indicators — a failure of analysis — led to charges of “lack of critical thinking and analytic rigor.”9 Admiral David Jeremiah—who headed the official investigation into the failure—concluded that intelligence failed to provide warning in part because analysts “had a mindset that said everybody else is going to work like we work,” otherwise known as mirror-imaging.

CIA’s search for ways to improve analytic accuracy and prevent intelligence failure—if successful—could have a positive impact on national security policymaking. The CIA is arguably the centerpiece of the United States’ fourteen-agency intelligence community (IC) and “has primary responsibility for all-source intelligence analysis in the [IC] and the preparation of finished national intelligence for the President and his top policymakers,” according to former CIA Inspector General Fred Hitz.12 If CIA analysis does have this kind of central role in influencing policy, improving its accuracy should provide policymakers with the opportunity to create or implement policies that more effectively protect national security and advance national interests.

THE METHOD: INCREASING ANALYTIC EXPERTISE

Improving the capabilities and knowledge of the individual CIA analyst through programs reflecting Admiral Jeremiah’s recommendation is one way to improve the accuracy of intelligence analysis. An analyst’s expertise, defined as “the skill of an expert,”13 is a crucial component for the production of accurate intelligence analysis. “In the lexicon of US intelligence professionals, ‘analysis’ refers to the interpretation by experts of unevaluated [‘raw’] information collected by the [IC],” according to Loch Johnson.14 The presumption is that the more “expert” an analyst is, the more accurate the resulting interpretation will be.

Analytic expertise is a multi-faceted concept because CIA uses a complex web of analytic specialties to produce multi-disciplinary analysis. Not hired directly by the CIA or even the DI, most analysts are hired by the individual DI offices and assigned to “groups” that cover specific geographic areas, and are then assigned a functional specialty—”discipline” or “occupation” in DI terminology—such as political, military, economic, leadership, or scientific, technical, and weapons intelligence, according to CIA’s website.16 An analyst’s expertise can vary depending on his or her relative degree of regional knowledge, familiarity with disciplinary theory, and with intelligence methods in general:

■ Regional expertise is essentially area studies: a combination of the geography, history, sociology, and political structures of a defined geographic region. The DI’s regional offices are responsible for an analyst’s regional expertise and develop it by providing access to language training, regional familiarization through university courses, or in-house seminars.

■ Disciplinary expertise relates to the theory and practice that underlies the individual analytic occupations. For example, economic, military, political and leadership analysis are built on a bed of theory derived from the academic disciplines of economics, military science, political science, and political psychology, respectively. Disciplinary expertise can be acquired through previous academic coursework, on-the-job experience, or supplementary training.

For the most part each CIA analyst possesses a very small area of direct responsibility defined by a combination of regional area and discipline as they work in country teams with analysts of other disciplines and interact with other regional or disciplinary specialists as the need arises. CIA’s small analytic niches create specialists, but their specialties must be re-integrated in order to provide high-level policymakers with a bigger picture that is more accurate and balanced than that arising from the limited perspective or knowledge of the niche analyst. This process of re-integration—known as “coordination” in DI parlance—allows analysts of all kinds to weigh in with their niche expertise on pieces of finished intelligence before they are disseminated. According to CIA analyst Frank Watanabe: “We coordinate to ensure a corporate product and to bring the substantive expertise of others to bear.”

former CIA officer Robert Steele observed that “[t]he average analyst has 2 to 5 years’ experience. They haven’t been to the countries they’re analyzing. They don’t have the language, the historical knowledge, the in-country residence time or the respect of their private-sector peers,” as reported by Tim Weiner.

Increasing expertise may not be sufficient to produce accuracy or prevent failure. As Jervis notes, “experts will [not] necessarily get the right answers. Indeed, the parochialism of those who know all the facts about a particular country that they consider to be unique, but lack the conceptual tools for making sense of much of what they see, is well known.”24 In addition, “[e]ven if the organizational problems…and perceptual impediments to accurate perception were remedied or removed, we could not expect an enormous increase in our ability to predict events” because “(t)he impediments to understanding our world are so great that… intelligence will often reach incorrect conclusions.”25 That is because human cognitive limitations require analysts to simplify reality through the analytic process, but reality simplified is no longer reality. As a result, “even experts can be wrong because their expertise is based on rules which are at best blunt approximations of reality. In the end any analytic judgment will be an approximation of the real world and therefore subject to some amount of error”26 and analytic inaccuracies—and sometimes intelligence failure—will be inevitable. Therefore, although increasing expertise is a goal, it cannot be the only goal for increasing DI capabilities.

FIRST INTERVENING FACTOR: BUREAUCRATIC PROCESSES

Bureaucratic processes can impede the acquisition or application of expertise gained in expertise development programs, thereby limiting any potential improvement in overall analytic accuracy. The DI is a bureaucracy, and like every bureaucracy it creates “standard operating procedures” (SOPs), which are necessary for efficient functioning but over time usually prove to be rigid in implementation.

Analysts must have the opportunity to apply newly acquired expertise back at their desks in the DI for any improvement to result. However, if analysts do not have the opportunity to apply this expertise, it will likely wither for lack of practice. The DI produces many types of finished intelligence—some reportorial, some analytical, and some estimative—to meet policymakers’ varying needs for information. In addition to daily updates on developments worldwide, policymakers and their staffs also use information and analyses when crafting policy and monitoring its implementation. Obstacles to the development of expertise appear when shorter product types are emphasized over longer more research-oriented ones. Short turnaround products—including daily updates known as “current” intelligence—have at times been emphasized over other products for their greater relevancy to policymakers, but this emphasis at the same time has reduced the expertise of the DI writ large because they require different kinds of mental operations that reduce the scope and scale of an analyst’s research and knowledge.

Robert Jervis once observed that “the informal norms and incentives of the intelligence community often form what Charles Perrow has called ‘an error-inducing system.’ That is, interlocking and supporting habits of the community systematically decrease the likelihood that careful and penetrating intelligence analyses will be produced and therefore make errors extremely likely.”30 Bureaucratic processes can contribute to the creation of this kind of “error-inducing system.”

The Swinging Pendulum

For much of the DI’s history, analysts acquired expertise by writing long reports. As former CIA officer Arthur Hulnick notes: “a great deal of research was being done, but …much of it was done to enable the analyst to speak authoritatively on a current issue, rather than for publication.”

In a 1993 article, former analyst Jay Young argued that “the needs of the policy-maker too often get lost in the time-consuming, self-absorbed and corrosive intelligence research production process. The DI’s research program is run with the same rigid attention to production quotas as any five-year plan in the old USSR. … This fixation on numerical production leads managers to keep analysts working away on papers whose relevance may be increasingly questionable. …

In short, too much of the Agency’s longer-term research is untimely, on subjects of marginal importance and chock-full of fuzzy judgments.”

Losing Expertise to Gain Relevance

The swing of the pendulum that emphasizes policymaker relevance over analytic depth causes the DI to produce a large amount of current intelligence that prevents the acquisition and application of analytic expertise. Many analysts are facile data interpreters able to integrate large volumes of new information with existing knowledge, and interpret—based on underlying conceptual constructs—the significance of the new data in terms of its implications for U.S. foreign policymaking. This process provides policymakers with exactly what they are looking for from intelligence analysis. However, if provided with minimal time in which to integrate and process the information, the intelligence analyst by necessity cuts corners. When the DI emphasizes intelligence “on-demand,” analysts meet the much shorter deadlines by reducing the scope and scale of their research as well as sidestepping the more laborious tradecraft procedures by not rigorously scrutinizing assumptions or comparing working hypotheses to competing explanations.

Many times current intelligence analysis consists of a single hypothesis — derived within the first hour of the tasking — that the analyst intuitively believes provides the best explanation for the data. Current intelligence as a product has a lesser chance of being accurate because it lacks the self-conscious rigor that tradecraft entails even though it is the best that can be done under the press of quick deadlines.

The production of a single piece of current intelligence has limited effect on expertise because it draws on the knowledge and tools that an analyst has developed through training and prior analysis. However, if over time the analyst does not have the time to think, learn, or integrate new information with old to create new understandings, knowledge of facts and events may increase but the ability to interpret these events accurately decreases.

Intelligence scholar Robert Folker observed that in his interviews of intelligence analysts a “repeated complaint was the analyst’s lack of time to devote to thoughtful intelligence analysis. In a separate interview at CIA it was revealed that [intelligence analysts]… spend little time or effort conducting analysis.”39 Therefore, the effectiveness of expertise development programs in improving analytic accuracy may depend in part on whether the CIA is able to redress the over-emphasis on short-term analysis.

A Structural Fix?

The DI appears to be remarkably blind to differentiation in both analysis and analysts, perhaps because it assigns tasks to “analysts” and equates the output with “analysis.” As a result, “[w]e do not and never have used the term ‘analysis’ rigorously in the [IC]” according to Robert Bovey, formerly a special assistant to a DCI.41 Robert Jervis illustrated the problems of equating the two in 1988 by arguing that “most political analysis would better be described as political reporting” and that instead of following an analytical approach “the analyst is expected to summarize the recent reports from the field—“cable-gisting.” … [T]he reporting style is not analytical—there are few attempts to dig much beneath the surface of events, to look beyond the next few weeks, to consider alternative explanations for the events, or to carefully marshal evidence that could support alternative views.”42 Jervis correctly differentiated the analytical intelligence product from its non-analytic cousin, but failed to distinguish between the analysts best suited for each. The DI — instead of differentiating between analysts — uses a one-size-fits-all recruitment, placement, training, and promotion strategy, and for the most part views analysts as interchangeable.

As a result it has perennially had difficulty creating an appropriate mix of analytic abilities and skills for intelligence production when an issue or crisis develops. In particular, over time the shift in expertise corresponding to a preference for longer papers or shorter more current pieces is especially noticeable.

SECOND INTERVENING FACTOR: ORGANIZATIONAL STRUCTURE

The DI’s organizational structure also influences which kind of analyst expertise is acquired and applied in finished intelligence products. Political scientist Thomas Hammond argues that organizational structure impacts analytic output. He employs information-flow models to demonstrate that—given the same information—one group of intelligence analysts organized by discipline would produce different analytic output than another group organized by region. He also concludes that it is impossible to design a structure that does not impact output.44 If we accept his argument, then organizational structure always affects internal information flows, and likely outputs as well. Theoretically, therefore, an organizational structure divided along disciplinary lines will accentuate learning of political, economic, military, or leadership methodologies through constant, interactive contact and awareness of the projects of other analysts.

From approximately the early 1960s to 1981, the DI was structured primarily by discipline, with political, economic, military, and leadership offices each subdivided by geography. In 1981 “[a] [DI]-wide reorganization…shuffled most analysts and created geographically based, or ‘regional,’ offices out of the previous organization.”47 According to former CIA officer Arthur Hulnick: “These [new] offices combined political, economic, military, and other kinds of research under ‘one roof,’ thus making more detailed analytic research feasible. After some grumbling, the new offices began to turn out the in-depth products consumers had been seeking, while still providing current intelligence materials.”48 This integration of analytic disciplines in regional offices provided a more productive interpretation of the forces at work within a target country or region but also negatively affected the DI’s ability to maintain disciplinary knowledge.

The distribution of what had previously been centralized knowledge of analytic tools, specialized product formats, and disciplinary theory throughout the DI meant that new leadership analysts were not well trained in their discipline. These new analysts relied solely on the fast-dissipating knowledge of the handful of former LDA officers who happened to be assigned to their team or issue. In addition, actual physical co-location did not occur for months — and in some cases years — due to lack of space in CIA’s overcrowded headquarters building. As a result of being “out of sight, out of mind,” leadership analysts were frequently not informed of ongoing projects, briefings, and meetings, and such incidents had a negative impact on the finished analytical product. When leadership analysts were not included in briefings, the DI risked failing to keep its consumers fully informed of both leadership dynamics and changes within the country or region. In addition, products published and disseminated without coordination at times contained factual errors such as the wrong names or positions for foreign government officials, or distortions in analysis due to the lack of leadership analyst input.

Therefore, the elimination of a leadership analysis-based office resulted in both increased incorporation of leadership analysis and insight into the regional teams’ products, and decreased corporate knowledge and expertise in leadership analysis as a discipline.

PUTTING THE PIECES TOGETHER AGAIN

By the mid-1990s DI analysts were realizing that while multi-disciplinary analysis on country teams made for better integration of disciplines, it also led to the dissipation of disciplinary knowledge. Former LDA analysts’ efforts to sustain their hold on disciplinary knowledge triggered similar efforts by political, economic and military analysts to both sustain and reconstruct occupational-specific knowledge. Without the framework of structural organization to bind each discipline together, over time they had each grown apart.

In 1997 the DI created the “senior-level” Council of Intelligence Occupations (CIOC) with the initial intent of disciplinary “workforce strategic planning…in the areas of recruitment, assignments, and training” as well as “identify[ing] core skills and standards for expertise in the [DI].”

In practice, CIOC became a home for senior analysts interested in learning and teaching their discipline’s methodologies. They “establish[ed] a professional development program for all DI employees that provides explicit proficiency criteria at each level so that everyone can see the knowledge, skills, and experiences required for advancement within each occupation or across occupations.”55 All this was done—according to the CIA website — “so that the DI has the expertise needed to provide value-added all-source analysis to its customers.”

There may be no easy solution to the expertise trade-offs inherent in organizational structure. By definition, a structure creates synergies in the areas it emphasizes, but the opportunity cost is the loss of synergies in other areas.

TO THE FUTURE

Expertise-development programs that create the potential to improve overall analytic accuracy—such as formal training in methodologies, regions, disciplines, or languages, or informal training resulting from greater overseas experience — do not provide much in the way of improvement if in fact the DI’s business practices prevent application of this hard-earned expertise. CIA’s leaders should continue to pursue ways to increase analyst expertise, for they could contribute to increased analytic accuracy. Yet at the same time the DI must adapt its practices to leverage the potential improvements of these programs if the CIA is to provide its policymaking customers with the intelligence they need now and in the future.

APPRAISING BEST PRACTICES IN DEFENSE INTELLIGENCE ANALYSIS

Thomas A. Garin, Lt Col, USAF

ABOUT THE AUTHOR

Lt Col Tom Garin, an Air Force officer assigned to the National Reconnaissance Office (NRO), arrived at the Joint Military Intelligence College in September 1999 to occupy the General Thomas S. Moorman, Jr. Chair for National Reconnaissance Systems. He has taught graduate-level core and elective courses on missiles and space systems, structured techniques for intelligence analysis, and research methodologies, as well as an undergraduate core course on space systems. Prior to this post, Lt Col Garin was Chief, Organizational Development at the NRO. There, he used a balanced scorecard approach to link human resource policies with the overall NRO mission.

An adaptable organization is one that has the capacity for internal change in response to external conditions. To be adaptable, organizations must be able to learn. Organizations can learn by monitoring their environment, relating this information to internal norms, detecting any deviations from these norms, and correcting discrepancies.

A telling aspect of any knowledge organization is the way in which it manages its information.

Knowledge management can equip intelligence organizations for the fast-paced, high- technology information age. By building upon the best aspects of total quality management, and observing the criteria for performance excellence adopted for the Baldrige National Quality Program, managers can theoretically lead professionals to work together effectively as a group in a situation in which their dealings with one another affect their common welfare.

STUDY DESIGN

The DIA is expected to ensure the satisfaction of the full range of foreign military and military-related intelligence requirements of defense organizations, UN Coalition Forces, and non-defense consumers, as appropriate. Specifically, DIA supports: (1) joint military operations in peacetime, crisis, contingency, and combat; (2) service weapons acquisition; and (3) defense policymaking. In the past fifteen years, several analysis and production units within DIA have received prestigious awards demonstrating the confidence and appreciation that senior U.S. leaders have in the agency’s performance. On the basis of these observations, a case could be made that DIA has become a world-class organization for meeting U.S. military intelligence requirements. Clearly, the DIA is a worthy organization to examine for evidence of how quality intelligence analysis can be carried out.

The three essential characteristics of operations research are (1) systems orientation, (2) use of interdisciplinary teams, and (3) adaptation of the scientific method. The Baldrige criteria provide a similar function for assessing information organizations. From a systems perspective, the Baldrige criteria focus on a leadership triad and a results triad.

A Typical Office

A typical analysis office consists of a manager, the senior intelligence officer (SIO), analysts, liaison officers, and administration/technical support staff. The manager handles a set of processes common to many organizations such as planning, organizing, staffing, directing, coordinating, reporting, and budgeting. The SIO is a senior analyst, a subject matter expert on a grand scale, and is responsible for the content of the analytical product. A typical SIO approves all finished intelligence products, manages the budget for contracting external studies, and serves as the unit’s chief training officer. Analysts tend to be subject matter experts and are accustomed to using technology to support analysis. Liaison officers connect the analysis office either to operational forces in the field or to other organizations in the Intelligence Community. The administration/technical support staff provide a variety of functions such as disseminating the finished intelligence product, providing graphic support, and arranging travel.

Consumer Groups

Analysts produce intelligence for a number of different consumer groups.2 These consumer groups include: (1) civilian and military policymakers; (2) military planners and executors; (3) acquisition personnel responsible for force modernization; (4) diplomatic operators; and (5) intelligence operators.

Participants

This intelligence analysis benchmarking study is the result of a class project for the Joint Military Intelligence College’s graduate-level course ANA 635, “Structured Techniques for Intelligence Analysts.” This course helps students understand and apply descriptive and quantitative methodologies to intelligence analysis.

Process

The team followed established benchmarking procedures in conducting this study, which included on-site visits and interviews with members of each unit. The study team adopted a “systems” perspective that isolated the concepts of leadership, strategic planning, consumer focus, information and analysis, people and resources, process management, and mission accomplishment. Taken together, these seven elements define an organization (or a unit within a larger organization), its operations, and its performance.

THE EVALUATION CRITERIA

The Malcolm Baldrige National Quality Improvement Act of 1987, Public Law 100-107, provides for the establishment and conduct of a national quality improvement program under which awards are given to organizations that practice effective quality management. The Act established the Malcolm Baldrige National Quality Award. In implementing this Act, the National Institute of Standards and Technology makes available guidelines and criteria designed for use by business, education and health care organizations for self-evaluation and improvement of their competitive positions.3 In this paper, the NIST criteria are applied to DIA, under the premise that a U.S. government organization, through its component offices, could benefit from a systematic evaluation of its “competitive” position; namely, its ability to develop and maintain agility in a change-intensive environment.

Task Assignment

Initiative. In many offices, analysis is self-generated. The office leader assigns each analyst to an area of the world and constantly pushes for finished products. The analyst is expected to act on his own initiative, do the analysis, and produce articles. In this scheme, there really is no clear top-down tasking process. One subject observed that leaders need to do a better job of prioritizing analysts’ work because, even as analysts assemble data to put in databases, create briefings, and produce finished intelligence products, they often write on relatively insignificant items. Their workload is usually events-driven. Consumers call in to their office and request certain information. The analysts attempt to tell consumers what they do not already know. Their work may not always reflect the consumer’s interests. To counter this tendency, some subjects maintain, managers need to incorporate a perspective on vital national interests into their analysts’ tasks.

Another subject identified two ways in which analysis tasks are self-generated. In one, the analyst submits a formal proposal to do general research and, if approved, then does the work. In the second, the analyst submits a more structured proposal to study a region of the world by looking forward perhaps about 10 years. If approved, the analyst in this case identifies problems, examines them, and synthesizes them into the most important issues. A problem and its attendant issues could become the basis for a National Intelligence Estimate (NIE).4

Self-generated analysis may have certain implications. First, the analyst may be doing analysis that serves his or her personal interest but does not meet a military leader’s or other policymaker’s requirements. Second, the analyst may be focusing attention on analysis projects that are the easiest or quickest to publish, satisfying a need of the analysis office rather than producing analysis that is most beneficial to the Intelligence Community. Third, the analyst may focus on easier problems that reside in the short term instead of tougher, future-oriented problems in order to appear more productive by producing more intelligence.

Leader and Analyst Interaction

Initiative. Most interview subjects agreed that leaders from inside or outside their office do not interfere with the analyst’s work. They provide oversight only when the analyst asks for it because analysts do not like to be micro-managed. Each manager considers his or her analysts subject matter experts. They may ask the analyst, “are you finished yet?” or “do you need help on A/B/C?” For additional support, analysts often appeal to other experts as needed.

The SIO for an office gets involved only if an analyst brings an issue or question to him. Sometimes, the SIO walks around the office, talks to the analysts, and looks for ways to help obtain certain information. The SIO gets formally involved in the analytic process, however, when the analyst has created the final product. Usually, it has been coordinated through the analyst’s peers before it gets to the SIO. The SIO makes any substantive or formatting corrections before the final product can leave the office.

Directive. Usually during a crisis situation, there is face-to-face interaction between an office leader and analysts from the moment the requirement appears until it is met. One subject agreed that there should be a constant exchange of information between the leader and the analyst even during routine analysis. Some leaders are better at it than others. The lower-level managers in that subject’s office worked directly with the analysts.

Usually, they assign an analyst a project, discuss with them how to do it, and continually ask them how they are doing. Another subject described the production process this way: the analyst produces, the boss does a sanity-check review; the analyst coordinates his or her work with other staffs, and they suggest changes.

Feedback

Initiative. The interaction between leadership and the analyst during feedback sessions varies from office to office. It usually follows a similar pattern: the analyst produces his or her findings; managers review the work; analysts make the recommended changes; and analysts coordinate the work with appropriate offices. In some cases, the analyst decides who should coordinate on his or her analysis. Competition between offices impedes the coordination process. One subject regretted that neither top-down guidance nor tools exist to support the coordination process.

One subject said all input from leaders and peers comes after the analysis is done. As a result, the usual case is for the assessment in the final product to be conceptually close to the analyst’s own findings. Once the feedback process concludes, analysts are free to post their finished products on the electronic “broadcast” media.

An analyst may also send a copy directly to the consumer, or may inform specific consumers of its URL. As feasible, the product will be “downgraded” to a lower classification level by obscuring sources or methods to enable it to reach a wider audience on different electronic systems.

Directive. After completing the analysis, the analyst will normally present his or her information to a team chief or SIO for coordination. After considering the revisions, the SIO will give the analyst permission to send written findings forward. Because this information may appear unaltered in the U.S. President’s read book, an SIO does not hesitate to “wordsmith.” This feedback may be frustrating and humiliating for the analyst. The SIO may direct the analyst to fix the logical development of an argument or to present more and better data. Some analysts will simply rework their product and resubmit it to the SIO. Others, however, will discuss the comments with the SIO. The SIO typically prefers the latter method because the analyst will usually understand the comments better and will learn more from the experience. Almost all of the products go to their consumers; only rarely will a product be killed within the analyst’s office.

Comments

Good leadership demands high quality products to inform warfighters and policymakers. Leaders need analysis to answer tough questions. At the same time, managers need to keep in mind important questions about the analysis process itself:

  • Are military leaders able to do their job better because of their analyst’s product?
  • Are the benefits of doing the analysis outweighing the costs?
  • Are we doing risk analysis? That is, are we attempting to understand the balance between threat and vulnerabilities by examining local trade-offs?

Good leadership ensures a value-added coordination process. Analysts often find the coordination process tough. One subject said that before the first draft, the analyst’s job was the best job. After the first draft, however, the analyst’s job was difficult. Another subject called into question the coordination process because of the low quality of finished products posted in the database. Another subject said the coordination process is a humbling experience for the analyst because it is tough to get back your paper marked up with corrections. One subject said a co-worker tested the system. He worked really hard on one project and sent it forward. Next time, he slapped something together and sent it forward.

He noticed that it didn’t make any difference either way. He was devastated. There was no motivation to excel. From these testimonials, one could conclude that unsystematized (non-standard) coordination may hinder the work of analysts.

Good leadership theoretically removes such barriers to effective use of analytical resources by providing leading-edge administrative and computer support; quick-turnaround graphics support for urgent tasks; and hassle-free travel support. One subject compared DIA administrative support unfavorably with what he grew to expect when he worked for the Central Intelligence Agency. He could drop something off at 5PM for graphics support and pick it up when he got to work in the morning. When he traveled, his proactive travel office would remove some of the hassles associated with traveling. He would tell someone in the travel office where he needed to go. They would call him back a short time later with all the information he needed to know about the trip. Then, when he got back from his trip, he told them what he did and they reimbursed him. There were no required travel reports and no receipts. In fact, they told him to get rid of all his receipts: in case he was caught, it would be better for no one else to know where he had been!

Summary

Leadership is a critical aspect in determining the success of the intelligence analysis process. Leadership demands high-quality products, ensures a value-added coordination process, removes technical or administrative barriers, and intervenes when needed. Some leaders assign tasks directly to the analysts. Others rely on the Senior Intelligence Officer (SIO) to assign tasks or rely on the initiative of the analysts themselves to seek out the right problems and analyze them. Leadership tends to provide little or no interaction during the analysis process except if and when the analyst seeks it. Analysts do not like to be micro-managed. In a crisis situation, however, the leader takes a more direct and hands-on approach. Although feedback varied from office to office, ideally the analyst should seek short, regular feedback on an informal basis from the office leader.

STRATEGIC PLANNING

Baldrige criteria compel an examination of how an analysis office prepares for future challenges, records its institutional history, and uses academic or professional sources. Within the category of future challenges, a distinction may be made between capability and issue challenges. Future capability challenges deal with people and process issues. Future issue challenges, on the other hand, are country- or target-dependent. These challenges deal specifically with the emerging threat to U.S. interests. Although none of the offices has a written, formal strategic plan, in most, leaders claimed to have thought about strategic planning issues and were working toward a future vision. Their daily mission, though, was to “get the right information to the right people at the right time.”

Future Challenges

Capability Challenges. Organizational capability centers on people and processes. For those subjects who indicated that they have thought about the future in strategic planning terms, leaders recognized that the commitment of their people to an organizational ideal is a necessary ingredient for success. In practice, then, organizational capabilities must support strategic planning. Analysis organizations need to consider their staff capabilities and ensure that staffs have the necessary knowledge, skills, and tools for success. Our study subjects generally favored a team approach for intelligence analysis. Leaders wanted analysts to consult with experts and coordinate with stakeholders before sending the final product to consumers.

Since none of the subjects claimed to use a formal strategic planning process, our study team could not discuss the process with them in detail. A generic strategic planning process, however, should include (1) doing advanced planning, (2) performing an environmental scan based on external and internal information, (3) setting a strategic direction using the vision statement, (4) translating the strategic direction into action via an implementation plan, and (5) making a performance evaluation. A performance evaluation or assessment should include (1) defining goals, (2) describing success, (3) setting targets, (4) measuring performance, and (5) adjusting goals based on performance. The DIA has a formal process to do strategic planning, but it was not clear to the study team how the organizational strategic planning process might be linked to the lower levels of the organization.

Institutional History

Most subjects said they did not have a written institutional history. There exist no formal analyses of their office’s strengths or weaknesses. The office does rely, then, on retaining experienced individuals for its corporate memory.6 Some offices maintain briefings about their office’s operations as a surrogate for a written institutional history. Anyone assigned to an analytic unit is expected to set about building connections and learning operations and systems. Only one subject said they had formed a history office to record significant events for their office, beginning in the late 1990s. Even this admirable initiative, however, with its focus on events rather than on process itself, would fail to capture the nuances of truly useful and recoverable institutional history.7

Academic or Professional Sources

In the interviews, office leaders tended to support academic training and attendance at professional conferences and conventions as experiences useful to the development of professional expertise. However, there exist at the office level no guides or plans for such professionalization opportunities.

“Factions,” an analytical tool developed at CIA for systematic analysis of political instability, is a good example of research and development (R&D) work done in collaboration with members of the academic community.10 In the 1980s there was no particular pressure or encouragement across the Community to use structured analytic methods. Analysts gathered the facts, looked at them in their uniquely “logical” way, and came to some conclusions. Some DIA analysts use structured techniques to do intelligence analysis. The subjects we spoke to, however, described their analytical process as gathering facts, examining them logically and drawing some conclusions. More emphasis on structured techniques at DIA may be appropriate.

Summary

Although none of our subjects had a written, formal strategic plan, most of them had thought about strategic planning issues and were working toward a future common vision among the personnel in their office. Office leadership is often engaged with senior leadership about future issues. Managers know the direction they want to go in and recognize the need to support a strategic plan. Furthermore, a focus on important future issues such as which country is most problematic clearly makes an analysis shop proactive and saves time in crisis situations.

Theoretically, the strategic management process includes (1) advanced planning; (2) an environmental scan that includes a review of internal and external information; (3) setting a strategic direction; (4) translating the strategic direction into action; and (5) evaluating performance. Consumers, stakeholders, and subjects influence the process by stating their requirements, receiving products and services, and providing feedback. Information and analysis influence each step of the process by providing management with important information, data, and feedback.

Most of our subjects relied on the corporate memory of senior analysts instead of maintaining a recorded institutional history. As senior analysts approach retirement age and leave government service, analysis offices will lose a great deal of expertise. Some offices maintained a written office history or had on file significant briefings about their office’s operations. All of our subjects used academic or professional sources to develop their capabilities as needed. They also knew what commercial tools were available to help them to do more for their consumers.

CONSUMER FOCUS

The Baldrige criteria applicable to a consumer focus suit the DIA case very well. As an information organization, this agency is obligated, through its analysis units, to discern consumers’ requirements. A key question for this study is whether relationships with consumers tend to be ad hoc or the result of a deliberate strategy.

Consumer Requirements

Consumers make their requirements known through a formal production database, by telephone, by e-mail, through individual initiatives, and through personal visits. They make their initial requirements known primarily on an ad hoc basis. The most meaningful requirements are informal because the analyst gets to talk directly to the consumer. Subsequently, the analyst can help the consumer refine the formal request in the Community’s intelligence production database. Some consumers make telephone calls to analysts requiring an answer in two hours or less; this can make the analyst’s job very stressful. Consumers use e-mail, on the other hand, as a secondary contact medium and mostly for follow-up activities. On occasion, a consumer will come to the analyst’s office to maximize the opportunity to gain a very specific pre-deployment, contextual understanding of target information. One subject noted that although most analytic tasks are management-driven, there are also many self-generated products. The latter could have a positive effect, especially in promoting analyst self-esteem, so long as the self-generated products are of interest to the entire Community.

Consumer and Analyst Interaction

The different frames of reference that one might expect to characterize any interaction between analysts and consumers are on vivid display in anecdotes related by most interview subjects. Much of the time, consumers appear to shy away from direct interaction with an analyst. On occasion, the consumer’s rank is too far above that of the analyst to allow an analyst to directly approach an individual consumer, especially in an organization like DIA, often still characterized by a culture that finds meaning in and derives behavioral clues from a hierarchy of authority. Occasionally, however, informal interaction does occur, to include face-to-face conversations or telephone calls to enhance clarity of requirements. One subject noted that his contact with consumers is mainly personal and direct. This analyst works with consumers to build training products that look as much as possible like the real world. Toward that end, an analyst might attend survival school as a student, rather than as a VIP, in order to absorb and then re-create more realistically some part of that training environment.

Production tasking usually passes through bureaucratic levels at both the producer and consumer ends, which reinforces pre-existing tendencies keeping analysts and consumers away from each other. Suddenly, an answer to a formal or informal request simply appears. A general absence of feedback from consumers characterizes the intelligence production process in the offices included in this study. Any interaction with consumers that does occur happens on an ad hoc basis.

Because most products are published electronically on systems such as Intelink, it is possible for the agency to track in considerable detail the organizations and even particular analysts that access and explore particular documents. In practice, however, this is rarely done at the level of detail that allows an analyst to be certain who the consumer really may be. Even if one accepts the concept that within the Department of Defense, information becomes actionable intelligence mainly through the briefing process, it is wise to remember that cognitive processes do depend ultimately on purposeful information-seeking activity, which can hypothetically be tracked in such detail as to inform intelligence producers exactly who their true consumers really are.
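
The access-log aggregation this would require is straightforward in principle. The sketch below is a minimal illustration, assuming a hypothetical log of (document, organization, user) records; the field names and log format are invented for the example and do not reflect any actual Intelink schema.

  from collections import Counter, defaultdict

  def consumers_by_document(log_records):
      """Tally which organizations accessed each published product."""
      readers = defaultdict(Counter)
      for doc_id, organization, _user in log_records:
          readers[doc_id][organization] += 1
      return readers

  # Illustrative records only; a real log would come from the hosting system.
  log = [
      ("assessment-042", "J-2", "smith"),
      ("assessment-042", "J-2", "jones"),
      ("assessment-042", "EUCOM", "lee"),
  ]
  for doc, orgs in consumers_by_document(log).items():
      print(doc, orgs.most_common())  # assessment-042 [('J-2', 2), ('EUCOM', 1)]

Even so rudimentary a tally would let a producer move from guessing at an audience to naming the organizations that actually read a product.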

Feedback

Analysts provide finished products to their consumers in a variety of ways: through the formal production requirements management database, in hard copy, posted to a website, or by email. Sometimes, depending on the size of the electronic file, analysts post a product on a “hidden” URL. Then, they call a known consumer to let them know where they have posted the product. Once the consumer has the product, the analyst can delete it from the system. Feedback appears in many forms. In one office, informal feedback predominates through personal contact with the consumers. Another office obtains consumer feedback either via the office web site’s “feedback” button, or via a verbal response from the consumer that the product met or did not meet the need. Hard copies automatically include a publication office’s formal feedback form; not all products get this form. One subject said a team in their division uses a questionnaire to find out if their consumers liked their product. One office maintains a sizable travel budget so analysts may travel to their consumer’s office regularly.

Marketing a product can attain great importance. An analyst typically does not want his or her finished product to sit on the table in the J-2’s office. The usual desire is for the information to reach much lower levels, where the material remains actionable information, which may only later become actionable intelligence. Analysts do this by using different means of communication.

The analyst who writes the product can market it by mailing hard copies, by telling people where to find it on Intelink or by responding to specific requests by sending email attachments or making a personal phone call.

Summary

Building relationships with consumers tends to be more ad hoc than the result of a deliberate strategy. Although consumer feedback varies from office to office, most of the subjects said that “working-level” consumers support their intelligence analysis process. Some got feedback from an office web site.

INFORMATION AND ANALYSIS

Information and analysis criteria address the question of how well DIA offices are meeting challenges in this core functional area, and reveal whether a meaningful approach is being taken by the organization for measuring product quality. Representatives of most of the analysis offices indicated that they are meeting the requirements embodied in these criteria. Informal means for measuring analysis performance included counting products, meeting production schedules, regularly updating databases, answering key consumer-oriented questions, and gauging performance based on “perception.”

Meeting Current Analytic Challenges

Specific indicators of success that were quickly asserted by interview subjects included the following:

  1. Positive comments or lack of negative comments from consumers;
  2. Evidence of consumer use of intelligence products;
  3. Knowledge of the number of terrorist attacks averted;
  4. A downward trend in the number of requests for information (RFI);
  5. Evidence of improvement of analysts’ capabilities based on accumulated experience.

Not surprisingly, a tendency exists for a dissatisfied consumer to be more vocal. A lack of negative comments from consumers is therefore considered a positive indicator of success. Consumers never want DIA to stop sending them daily intelligence products. Even if routine DIA assessments are only placeholders, the consumer still expresses a demand for the products because one article could change everything.

Measurement of actionability is never easy because often no recordable action is required, and the producer faces the difficult task of determining whether the product has simply influenced the way a consumer views an important issue. No one from among the offices under study here claims to have an answer to this difficult issue.

A different type of indicator of success is the level of improvement in analyst savvy, given the accumulation of experience. An office’s SIO is in a good position to determine whether the analyst is making improvement. Comments from the SIOs contacted in the course of this study indicate that they examine the analyst’s work with a critical eye to determine whether the analyst is growing in the job. If it is determined that the analyst is in fact getting better at doing analysis, then the SIO rewards the analyst with more work and more challenging jobs. The SIO also rewards the analyst with an appropriate performance review. If, however, the analyst does not show improvement in the current task, then the SIO, working with the analyst and manager, will attempt to diagnose the problem and provide appropriate training to improve proficiency in doing analysis.

A Standard Approach to Measuring Quality

No subject claimed to have plans to implement any kind of standard approach to measure analytic quality. One subject said he was not opposed to some standard for measuring quality. He has not, however, seen a useful measure for it. In another office, analysts “just publish in order to keep their database filled.” Little or no attention is given to quality. Either the product meets the requirement or it doesn’t.

Comments

All of the subjects said they had no formal performance measurements, though they did employ informal means. One subject justified the absence of formal measures by saying that intelligence analysis is purely subjective: there are no numbers to count. If analysts meet their production schedule, then they are considered to be successful. If analysts get all of their work done, then they have met the minimum success requirement.

Managers of intelligence analysis are ambivalent about managing for results. Though interested in demonstrating success, they are uneasy about being held accountable for poor organizational performance. There are severe problems in measuring performance in practice: (1) difficulty getting agreement on what good performance means for intelligence analysis; (2) goals that are unreasonable compared with available resources; (3) key performance indicators that are unavailable or too costly to collect; and (4) management unwillingness to make the changes necessary to improve intelligence analysis.

Office leaders appear to be most interested in how many products the analyst finished, even though this crude measure is not supplemented with any method for independently weighting the relative value of those products.

One subject noted that his office does track certain indicators of quality, such as whether the consumer liked the product. A subsidiary indicator lies in evidence of whether the consumer responded to the product. Did anyone else want a copy of the product?

One subject prefers using the term “gauge” instead of “measure” because “measure” assumes some sort of reasoned technique. Often, there is a bit of a “gut check” in any office with respect to how well managers perceive their office to be meeting its mission. Most of the measurement comes in the form of ad hoc consumer feedback. A well-executed mission, like a successful noncombatant evacuation operation (NEO) for instance, tells the office that they are doing a good job.

Summary

Informal means for measuring analysis performance included counting products, meeting production schedules, updating databases, answering key consumer-oriented questions, and gauging performance based on perception. Two common ways for analysts to know if they are meeting their consumers’ needs are a lack of negative feedback and an increase in product use. Our subjects do not keep track of forecast accuracy because it is too difficult to track. Instead, analysts individually keep a mental checklist of any forecasts they make and how the situation developed over time. Although none of our subjects planned to implement any standard approach to measure analytic quality in the near future, they continue to examine ways to do analysis more effectively without destroying their organization’s “can do” ethos.

PEOPLE AND RESOURCES

The people and resources category offers criteria to examine the knowledge, skills, and experiences of the analysts and the availability of analysts’ tools. Leaders can improve the quality of a product by promoting greater expertise among the intelligence analysts and by inducing analysts to take advantage of information found in open sources. Study subjects said they need analysts who think across interdisciplinary boundaries and they prefer analysts who have military operations experience.

Knowledge, Skills, and Experience

According to representatives of the five offices in DIA, analysts bring with them or develop certain skills to help them perform their analysis and at the same time they must develop an awareness of how certain analytic pitfalls may influence their work. The analytic skills include: (1) technical or topical expertise; (2) knowledge of the target, sources, and analytic techniques; (3) ability to search and organize data; (4) ability to synthesize data or use inductive reasoning; and (5) ability to express ideas. The analytic pitfalls include: (1) personal biases or inexperience; (2) lack of data or access to data; (3) asking the wrong question;14 (4) misunderstanding data; (5) flaws in logic; (6) no review or evaluation process; (7) denial and deception by the target; (8) politicization or non-objectivity; (9) groupthink; and (10) mirror imaging.

One problem frequently voiced is that new hires top out in four years at the GS-13 level and then must move out of the office to continue advancing in their careers. A consensus exists among office managers that any office struggles to maintain its analysis capability as people move to better jobs for promotion. Having people who are career intelligence officers, especially if they also possess a military operations background, is commonly cited as a key to a quality staff.

PROCESS MANAGEMENT

Process management criteria apply to information handling, intelligence analysis, and collaboration. Processes associated with the dissemination of information and intelligence are beyond the scope of this study, but in this area, the advent of online publication and the loosening of “need to know” rules governing access to sensitive information have relieved some classic dissemination bottlenecks.15 Good analysts are commonly characterized by DIA officials as “focused freethinkers.” Other commonly acknowledged maxims are that the analysis process is a highly competitive process, and that analysts rush to be first to publish vital intelligence findings. At the same time, analysts must work together by collaborating with other analysts to ensure accuracy.

None of our subjects said they observed a formal or structured process in doing intelligence analysis. One subject summed up this approach as simply “sitting down and going to work.” A second subject claimed to use an informal, semi-structured term-research process. In this mode, the analyst first has an idea related to their “account.” They discuss with their team chief the problem they are interested in solving. If they get permission, then they do the research. Another subject added that, likewise, he does not have a “hard-wired” approach, yet applies a routine to the process. One subject said his office has a process that remains only in the mind of each of the office’s analysts. It is, in the estimation of this subject, an application of the scientific method or at least a disciplined thought process. In this scheme, managers give the analyst a problem, and the analyst does research to determine what they have and what they need. If there is time, he or she will request collection of certain data. The analyst reviews the data and answers the question.

Information Handling

Analysts do their own fieldwork to gather any information beyond that from routinely available, written sources. One SIO emphasized that his analysts “gather” information, they do not “collect” it. For example, they do not have diplomatic immunity when they are visiting other countries; they must get specific permission from the country to be there.

Analysts report that they try to lay down the steps of their analysis in a logical way. For example, they use several paths to move from general information to more specific information. Some analysts write their papers as if they were doing research in a graduate school. Others write products that look like newspaper articles. Analysts tend to protect their sources in a manner similar to the way newspaper reporters protect the identity of their sources.

Intelligence Analysis Process

The analysis process includes (1) problem identification; (2) data gathering; (3) analysis, synthesis, collaboration, and examination of alternative propositions using all sources of data; and (4) direct support to consumers with briefings and publications. In the end, analysts make an assessment of what the data probably mean or what the data could mean. General Colin Powell (USA, Ret.) sums up his expectations as a consumer of intelligence in this way: “Tell me what you know, what you don’t know, and what you think—in that order.” In this oft-quoted dictum, he suggests that analysts can best convey what is known about the problem, evaluate the completeness of their knowledge, and interpret its meaning. It remains unclear whether the same guidance is as suitable for written products as for oral briefings. In the course of analysis, whether presented in written or oral format, it is important for the analyst to answer for himself certain key questions (a minimal sketch of structuring a product along these lines follows the list):

  1. What do you know about the issue?
  2. What is a logical, sequential way to present the facts?
  3. What conclusions or implications can be drawn from these facts?
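
The sketch below is one minimal, illustrative way to hold a product to Powell’s ordering and to the three questions above; the class and field names are invented for the example and are not drawn from any DIA product format.

  from dataclasses import dataclass

  @dataclass
  class Assessment:
      known: list       # "what you know": the established facts, in logical order
      gaps: list        # "what you don't know": evaluated gaps in knowledge
      judgment: str     # "what you think": conclusions drawn from the facts

      def render(self) -> str:
          lines = ["KNOWN:"] + ["  - " + f for f in self.known]
          lines += ["GAPS:"] + ["  - " + g for g in self.gaps]
          lines += ["JUDGMENT: " + self.judgment]
          return "\n".join(lines)

  print(Assessment(
      known=["Port X handled three suspect shipments this quarter"],
      gaps=["The end user of the shipments"],
      judgment="The shipments probably support Group Y's re-arming effort",
  ).render())

Keeping the three parts in separate fields prevents facts, gaps, and judgment from blurring together in the finished text.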

Analysts generally find it distasteful to coordinate their products with members outside of their team. Nonetheless, when the “corporate” analysis process works as it can, all of the teams contribute to the analyst’s final product and, as a result, the analyst produces a superior product. Managers do want analysts to exchange information with each other. They have a distaste for competition among analysts. The analysts’ rationale for resisting widespread coordination is that they consider themselves subject matter experts.

Collaboration and Competition

Collaboration. If analysts choose not to collaborate with other Community experts, then the SIO may force them to do so during the coordination and review process. Managers want analysts to verify the validity of all of their information in a manner similar to the way that they feel compelled to check and verify human-resource information.

Competition. Collaboration among intelligence analysts is always a problem because analysts, like many school children, tend to be competitive with each other. The acknowledged lack of coordination within DIA is not a new or exceptional phenomenon. New analysts may not know who to talk to on a particular subject or where to go to get useful information. A more senior mentor or manager can usually provide the analysts with that kind of information, but the exchange of information between analysts doesn’t normally happen in the unsupervised work environment. Analysts are very competitive within the office and especially with other agencies in the IC. There is a strong desire to publish first, a tendency that inhibits full disclosure to competing analysts.

Summary

Good analysts are “focused freethinkers.” Nevertheless, “out-of-the-box” thinking needs to be tempered by an informal, disciplined thought process and by collaboration with other analysts. The scientific method is a streamlined process that ensures the analyst doesn’t wander too far.

Definition of Success

All of our subjects agreed that success is a difficult concept to define. Our subjects did, however, agree that it might be possible. First, success is positive feedback from consumers. If they do not provide explicit positive feedback, then the analysis shop can look for indirect indicators.

Second, an intelligence analysis success occurs when a unit that is known to depend on DIA products completes its military operation successfully.

Third, success is obtaining an expanding budget and a growing workforce. These developments indicate that the topic focus is perceived as interesting by agency leadership. One subject labeled this type of success as “bureaucratic success.”

Fourth, to some analysts success means evidence that they are able to look at things in the open sources and link them in ways others could not envision. The analyst would observe a situation, notice a trend, and take a position on a tough issue.

One possible way to measure success is to count the number of electronic hits on a particular document on a database. If consumers are asking questions, then analysts can feel they are being helpful. An informal summary metric, then, is the measure of information flow out of the office.
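
As a toy illustration of that informal metric, the sketch below simply sums document hits and consumer questions answered over a period; the inputs and the equal weighting are assumptions made for the example, not a reported DIA practice.

  def information_flow(hits_per_product, questions_answered):
      """Informal gauge of flow out of the office: total hits plus questions answered."""
      return sum(hits_per_product.values()) + questions_answered

  # Illustrative figures only.
  monthly = information_flow({"port-study": 57, "daily-summary": 310},
                             questions_answered=24)
  print(monthly)  # 391

Such a gauge says nothing about quality, but it gives an office a repeatable number to watch from month to month.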

Summary

Intelligence analysis supports DoD organizations and warfighter operations. Mission success is positive feedback from consumers or the successful completion of military operations. Bureaucratic success, on the other hand, results in more money to spend and more analysts to do the job. Analysts consider themselves successful if their consumers change their behavior based on the information provided to them.

STRATEGIES FOR SUCCESSFUL INTELLIGENCE ANALYSIS

“What is good intelligence analysis?”

Good or useful intelligence analysis may be best defined by its opposite. Bad intelligence does not specify what the threat is or how it will manifest itself. For example, the analyst may conclude there will be a 50 percent chance of a chemical weapons attack against the U.S. in the next 10 years. That conclusion is not meaningful or helpful to planners and users of intelligence.17 In another example, the analyst may conclude that all ports are dangerous to U.S. ships. This analysis is impractical because ships have to use ports occasionally.

In government work, the applied nature of intelligence production would seem to offer the opportunity to develop and apply consistent measures of productivity and quality, precisely because the products are used by specific consumers. If this is true, then there is room to bring “operations research” into play, as well as the Baldrige criteria, to encourage the development and collection of carefully codified, surrogate measures of productivity and quality. Successful use of these tools does not remove from the manager the task of decisionmaking but rather requires of him or her different kinds of decisions. In other words, using operations research tools provides managers with extra insight into their particular subject and hence leads them into much more difficult but fruitful areas. In the author’s opinion, consultants with scholarly backgrounds could come into the intelligence analysis work environment and, using such tools, vastly improve the intelligence analysis capability.

CORE COMPETENCIES FOR INTELLIGENCE ANALYSIS AT THE NATIONAL SECURITY AGENCY

David T. Moore and Lisa Krizan

Seekers of Wisdom first need sound intelligence.

— Heraclitus1

What makes an intelligence analyst successful in the profession? This question strikes at the heart of the National Foreign Intelligence Community’s mission to provide actionable information to national leaders and decisionmakers.

What is a qualified intelligence analyst?

In this paper the authors propose a set of functional core competencies for intelligence analysis, shown in the figure below, which provides a starting point for answering fundamental questions about the nature of ideal intelligence professionals, and how analysts who share these ideals can go about doing their work. Keeping in mind the complex nature of the threats to U.S. national security, we argue that the strategy for deploying intelligence analysts and for carrying out intelligence production must become more rigorous to keep pace with 21st Century foes, and to defeat them.

Functional core competencies for intelligence analysis

The authors began exploring the art and science of intelligence analysis at their agency as part of a corporate initiative to add rigor to its analytic practice.

Sherman Kent, who helped shape the national peacetime intelligence community, argues that intelligence requires its own literature. According to Kent, a key purpose of this literature is to advance the discipline of intelligence. Kent believed “[as] long as this discipline lacks a literature, its method, its vocabulary, its body of doctrine, and even its fundamental theory run the risk of never reaching full maturity.”6 Through the publication of articles on analysis and subsequent discussion, “original synthesis of all that has gone before” occurs.7 In keeping with Kent’s mandate to develop an intelligence literature that provokes discussion and further methodological development, we seek comment and further discussion among scholars of intelligence studies.

DEFINITIONS AND CONTEXT

Intelligence refers to information that meets the stated or understood needs of policymakers…. All intelligence is information; not all information is intelligence.

— Mark Lowenthal8

Intelligence is timely, actionable information that helps policymakers, decisionmakers, and military leaders perform their national security functions. The intelligence business itself depends on professional competencies, what John Gannon, former Chairman of the National Intelligence Council, refers to as “skills and expertise.” He notes that “this means people—people in whom we will need to invest more to deal with the array of complex challenges we face over the next generation.”

Ultimately, analysis leads to synthesis and effective persuasion, or, less pointedly, estimation.10 It does so by breaking down large problems into a number of smaller ones, involving “close examination of related items of information to determine the extent to which they confirm, supplement, or contradict each other and thus to establish probabilities and relationships.”11

Since the advent of the Information Age, “[collecting] information is less of a problem and verifying is more of one.”12 Thus the role of analysis becomes more vital as the supply of information available to consumers from every type of source, proven and unproven, multiplies exponentially. Intelligence analysts are more than merely another information source, more than collectors and couriers of information to consumers. Further,

[the] images that are sometimes evoked of policymakers surfing the Net themselves, in direct touch with their own information sources, are very misleading. Most of the time, as [policymakers’] access to information multiplies, their need for processing, if not analysis, will go up. If collection is easier, selection will be harder.13

At its best, the results of intelligence analysis provide just the right information permitting national leaders “to make wise decisions—all presented with accuracy, timeliness, and clarity.”14 The intelligence provided must “contain hard-hitting, focused analysis relevant to current policy issues….Therefore, analysis of raw information has the most impact on the decisionmaker and [therefore] producing high-quality analytical product should be the highest priority for intelligence agencies.”

Treverton adds that intelligence must anticipate the needs of policy. “By the time policy knows what it needs to know, it is usually too late for intelligence to respond by developing new sources or cranking up its analytic capacity.”

A former policymaker himself, he asserts that intelligence is useful to policy at three stages during the life of an issue:

  • If the policymakers are prescient, when the issue is just beginning; however, there is likely to be little intelligence on the issue at that point.
  • When the issue is “ripe for decision.” Here policymakers want intelligence that permits alternatives to be considered; however, intelligence often is only able to provide background information necessary for understanding the issue.
  • When the policymakers have made up their minds on the issue, but only if intelligence supports their view. They will be uninterested or even hostile when it does not support their view.21

These limitations notwithstanding, Treverton suggests that policymakers can and should establish a symbiotic relationship with the intelligence analysts who advise them:

[If] you call them in, face to face, they will understand how much you know, and you’ll have a chance to calibrate them. You’ll learn more in fifteen minutes than you’d have imagined. And you’ll also begin to target those analysts to your concerns and your sense of the issue.22

Similarly, the analyst has responsibilities to the policymaker. In commenting on this relationship, Sherman Kent asserts

[intelligence] performs a service function. Its job is to see that the doers are generally well informed; its job is to stand behind them with a book opened at the right page to call their attention to the stubborn fact they may be neglecting, and—at their request—to analyze alternative courses without indicating choice.23

In Kent’s view, the intelligence analyst is required to ensure, tenaciously, that policymakers view those “right” pages, even when they may not wish to do so.

MEASURING SUCCESS IN INTELLIGENCE ANALYSIS

Intelligence must be measured to be valued, so let us take the initiative and ask our management, [and] the users, to evaluate us and our products.

— Jan P. Herring24

Any observer can expect that a successful intelligence analyst will have certain personal characteristics that tend to foster dedication to the work and quality of results.

Intelligence Process

Successful intelligence analysis is a holistic process involving both “art” and “science.” Intuitive abilities, inherent aptitudes, rigorously applied skills, and acquired knowledge together enable analysts to work problems in a multidimensional manner, thereby avoiding the pitfalls of both scientism and adventurism. The former occurs when scientific methodology is excessively relied upon to reveal the “truth”; the latter occurs when “inspiration [is] unsupported by rigorous analysis.”26

A vital contributor to the analytic process is a spirit of competition, both within an intelligence-producing agency and especially between intelligence agencies. There is a tendency for analysts working together to develop a common mindset. This trap occurs typically when analysts fail to question their assumptions about their role in the intelligence process and about the target. The Council on Foreign Relations’ independent task force on the future of U.S. intelligence recommends that “competitive or redundant analysis be encouraged” precisely for these reasons.27

Successful analysis adds value—to the information itself, to institutional knowledge, to fellow intelligence professionals, to the process, and to the institution or unit itself—in terms of reputation and the degree to which good analytic practices endure despite changes in target, consumer, and personnel. Successful analysts are those whose work, whenever possible, goes to the level of making judgments or estimating.

What role does management play in ensuring analytic success? First and foremost, management effectively uses financial and political capital to ensure that analysts have access to consumers, and to the resources they require to answer those consumers’ intelligence needs. This includes the organization of the work itself, allocation of materiel and personnel, and coordination with consumers and other producers. When management is successful, the analyst has the necessary tools and the correct information for successful intelligence analysis. Good morale among analytic personnel becomes an indicator of effective management. A good understanding of the unit’s mission and each analyst’s satisfaction with his or her own performance naturally produce a feeling of empowerment and a belief that the organization places great value on analytic talent.

Intelligence Product

The products of successful analysis convey intelligence that meets or anticipates the consumer’s needs; these products reveal analytic conclusions, not the methods used to derive them. Intelligence products are successful if they arm the decisionmaker, policymaker or military leader with the information and context—the answers—needed to win on his or her playing field.

Readiness: Intelligence systems must be responsive to existing and contingent intelligence requirements of consumers at all levels.

Timeliness: Intelligence must be delivered while the content is still actionable under the consumer’s circumstances.

Accuracy: All sources and data must be evaluated for the possibility of technical error, misperception, and hostile efforts to mislead.

Objectivity: All judgments must be evaluated for the possibility of deliberate distortions and manipulations due to self-interest.

Usability: All intelligence output must be in a form that facilitates ready comprehension and immediate application. Intelligence products must be compatible with the consumer’s capabilities for receiving, manipulating, protecting, and storing the product.

Relevance: Information must be selected and organized for its applicability to a consumer’s requirements, with potential consequences and significance of the information made explicit to the consumer’s circumstances.

Measures of success for intelligence products28

Six “underlying ideas or core values” for intelligence analysis, identified by William Brei for operational-level intelligence, and shown in the figure above, establish the analyst’s “essential work processes.”29 Since they are defined in terms of the consumer, they also can be used as a checklist to rate the quality of products provided to the consumer; a minimal code sketch of such a checklist follows the questions below.

William S. Brei, Captain, USAF, Getting Intelligence Right: The Power of Logical Procedure, Occasional Paper Number Two (Washington DC: Joint Military Intelligence College, 1996), 6.

  • Was the intelligence system responsive to the consumer’s existing and contingent requirements? (Readiness)
  • Was the intelligence delivered while its content was still actionable? (Timeliness)
  • Was the reported intelligence accurate? (Accuracy)
  • Are there any distortions in the reported judgments? (Objectivity)
  • Is the reported intelligence actionable? Does it facilitate ready comprehension? (Usability)
  • Does it support the consumer’s mission? Is it applicable to the consumer’s requirements? Has its significance been made explicit? (Relevance)
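Because each of Brei’s values is defined in terms of the consumer, the checklist lends itself to a simple scoring structure. The Python sketch below is illustrative only: the six value names come from the figure above, but the `ProductEvaluation` class, the 0–5 scale, and every other identifier are assumptions for demonstration, not anything Brei prescribes.

```python
# Hypothetical sketch: Brei's six core values as a product-evaluation
# checklist. Value names come from the text; the scoring scheme is assumed.
from dataclasses import dataclass, field

BREI_VALUES = ("Readiness", "Timeliness", "Accuracy",
               "Objectivity", "Usability", "Relevance")

@dataclass
class ProductEvaluation:
    """Rates one intelligence product against Brei's six core values."""
    product_name: str
    scores: dict = field(default_factory=dict)  # value name -> 0..5 score

    def rate(self, value: str, score: int) -> None:
        if value not in BREI_VALUES:
            raise ValueError(f"Unknown core value: {value}")
        if not 0 <= score <= 5:
            raise ValueError("Score must be between 0 and 5")
        self.scores[value] = score

    def weakest_values(self) -> list:
        """Return the values most in need of improvement."""
        if not self.scores:
            return []
        low = min(self.scores.values())
        return [v for v, s in self.scores.items() if s == low]

evaluation = ProductEvaluation("Daily situation report")
for value, score in [("Readiness", 4), ("Timeliness", 5), ("Accuracy", 4),
                     ("Objectivity", 5), ("Usability", 2), ("Relevance", 3)]:
    evaluation.rate(value, score)
print(evaluation.weakest_values())  # -> ['Usability']
```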

Brei asserts that accurate data provide the foundation for subsequent objective judgments, and the expression of objective judgments in a usable form provides much of the basis of a relevant product. Thus, unverified data can not only cost an intelligence product its Accuracy, but also damage its Relevance to the customer.32

Brei’s principles provide a means for evaluating a given intelligence product based on the meaning it conveys and the value of that intelligence to the consumer. His approach, when combined with an “insider’s” view of the intelligence production process, analytic methods, and personnel management practices, makes a comprehensive evaluation of intelligence analysis appear possible.

CHARACTERISTICS OF SUCCESSFUL INTELLIGENCE ANALYSTS

A sophisticated intelligence analyst is one who is steeped in the history and culture of a region, has lifelong interest in the area, and approaches the study of the region as a professional responsibility, and probably as an avocation as well.

— Ronald D. Garst and Max L. Gross34

Who are the most successful intelligence analysts? What makes them successful? In setting forth the functional core competencies for successful intelligence analysis we observe there are characteristics which, while not necessary for successful intelligence analysis per se, do seem to be associated with analysts considered to be the most successful at their trade.35

Probably the most indispensable characteristics of successful intelligence analysts are high self-motivation and insatiable curiosity. Analysts want to know everything they can about the objects under their scrutiny. Reading and observing voraciously, they ferret out every possible piece of information and demonstrate a sense of wonder about what they discover. As new fragments appear, novel connections are discovered between the new and older information as a result of intense concentration leading to epiphanous moments of “aha” thinking. The most successful analysts tend to enjoy their work—“It’s play, not work.” Indeed, they often will stay late at the office to pursue a thorny problem or an engaging line of reasoning.

Employee orientation programs that acknowledge these characteristics may be most successful in initiating new employees into the analytic culture. When personal characteristics are embodied in compelling “war stories” told by mentors and peers, they can reinforce the cultural values of the agency, building corporate loyalty by reinforcing the sense of membership.

ABILITIES REQUIRED FOR INTELLIGENCE ANALYSIS

The competent intelligence analyst must have a unique combination of talents.

— Ronald D. Garst and Max L. Gross37

Abilities arise from aptitudes that can develop from an individual’s innate, natural characteristics or talents. Although aptitudes may largely be determined by a person’s genetic background, they may also be enhanced through training.38

Communicating

Teaming and Collaboration

Teaming and collaboration abilities enhance intelligence analysis, since the analyst’s relationship with consumers, peers, subordinates, and supervisors shapes the intelligence production process. Formalized means of enhancing all these abilities can lead intelligence professionals to considerably greater effectiveness as analysts and leaders of analysts. This is why the Director of Central Intelligence has indicated that collaboration is a cornerstone of strategic intelligence.41 A collaborative environment also minimizes the likelihood of intelligence failures.

We identify four distinct teaming abilities, to show the complexity of the concept. Typically, formal training programs address leadership abilities only in the context of the management function; here, we focus on the analysis process itself.

Influencing: Those with this ability effectively and positively influence superiors, peers, and subordinates in intelligence work. Analysts often need to persuade others that their methods and conclusions are valid, and they often need to leverage additional resources. The ability to influence determines the level of success they will have in these areas.

Leading: Those who are more senior, more skilled, and more successful in intelligence analysis have an obligation to lead, that is, to direct others and serve as role models. The ability to lead involves working with and through others to produce desired business outcomes. Thus, developing leadership abilities enhances the field of intelligence analysis.

Following: Almost every grouping of humans has a leader. Everyone else is a follower. Analysts must enhance their abilities to work within a team, to take direction, and to act on it.

Synergizing: Drawing on the other three teaming abilities, players in the intelligence process cooperate to achieve a common goal, the value of which is greater than they could achieve when working alone.

Thinking

As our species designation—sapiens—suggests, the defining attribute of human beings is an unparalleled cognitive ability. We think differently from all other creatures on earth, and we can share those thoughts with one another in ways that no other species even approaches.

— Terence W. Deacon, The Symbolic Species.43

Intelligence analysis is primarily a thinking process; it depends upon cognitive functions that evolved in humans long before the appearance of language.44 The personal characteristics of intelligence analysts are manifested in behaviors that reflect thinking and/or the inherent drive to think. Our national survival may depend on having better developed thinking abilities than our opponents.

Information Ordering: This ability involves following previously defined rules or sets of rules to arrange data in a meaningful order. In the context of intelligence analysis, this ability allows people, often with the assistance of technology, to arrange information in ways that permit analysis, synthesis, and extraction of meaning. The arrangement of information according to certain learned rules leads the analyst to make conclusions and disseminate them as intelligence. A danger arises, however, in that such ordering is inherently limiting—the analyst may not look for alternative explanations because the known rules lead to a ready conclusion.
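As a concrete illustration of rule-based ordering, the following sketch arranges invented reports chronologically and then groups them by topic. The rules and data are hypothetical, and, as noted above, any such ordering can conceal alternative explanations as easily as it reveals meaning.

```python
# Illustrative sketch of "information ordering": applying predefined rules
# (sort by date, group by topic) to raw reports. All data are invented.
from datetime import date

reports = [
    {"source": "B", "date": date(2024, 3, 9), "topic": "logistics"},
    {"source": "A", "date": date(2024, 3, 2), "topic": "troop movement"},
    {"source": "C", "date": date(2024, 3, 5), "topic": "troop movement"},
]

# Rule 1: order chronologically.
chronological = sorted(reports, key=lambda r: r["date"])

# Rule 2: group by topic, preserving chronological order within groups.
by_topic: dict = {}
for report in chronological:
    by_topic.setdefault(report["topic"], []).append(report)

# The ordered view invites conclusions ("activity is accelerating"), but,
# as the text warns, it can also hide alternative explanations.
for topic, items in by_topic.items():
    print(topic, [r["date"].isoformat() for r in items])
```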

Pattern Recognition: Humans detect patterns and impose patterns on apparently random entities and events in order to understand them, often doing this without being aware of it. Stellar constellations are examples of imposed patterns, while criminal behavior analysis is an example of pattern detection. Intelligence analysts impose or detect patterns to identify what targets are doing, and thereby to extrapolate what they will do in the future. Pattern recognition lets analysts separate “the important from the less important, even the trivial, and to conceptualize a degree of order out of apparent chaos.”45 However, imposing or seeking patterns can introduce bias. Analysts may impose culturally defined patterns on random aggregates rather than recognize inherent patterns, thereby misinterpreting the phenomena in question.
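One crude, hypothetical form of pattern detection is counting recurring pairs of events in a stream, as sketched below. The events and the bigram heuristic are invented for illustration; real pattern recognition is far richer, and the same caveat applies: a detected “pattern” may be an artifact of the analyst’s expectations rather than a property of the data.

```python
# Illustrative sketch: detect recurring event pairs (bigrams) in a stream.
# The event stream and the heuristic are invented for demonstration.
from collections import Counter

events = ["meeting", "transfer", "travel", "meeting", "transfer",
          "travel", "meeting", "purchase"]

# Count how often each consecutive pair of events occurs.
bigrams = Counter(zip(events, events[1:]))

# Report the most frequent pairs; a recurring pair may, or may not, signal
# a meaningful behavioral pattern.
for pair, count in bigrams.most_common(2):
    print(pair, count)   # e.g. ('meeting', 'transfer') recurs
```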

Reasoning: The ability to reason is what permits humans to process information and formulate explanations, to assign meaning to observed phenomena. It is by reasoning that analysts transform information into intelligence, in these three ways:

  1. Induction: Inductive reasoning combines separate fragments of information, or specific answers to problems, to form general rules or conclusions. For example, using induction, a child learns to associate the color red with heat and heat with pain, and then to generalize these associations to new situations.46 Rigorous induction depends upon demonstrating the validity of causal relationships between observed phenomena, not merely associating them with each other.
  2. Deduction: Deductive reasoning applies general rules to specific problems to arrive at conclusions. Analysts begin with a set of rules and use them as a basis for interpreting information. For example, an analyst who follows the nuclear weapons program of a country might notice that a characteristic series of events preceded the last nuclear weapons test. Upon seeing evidence that those same events are occurring again, the analyst might deduce that a second nuclear test is imminent (a toy sketch of this indicator-matching deduction follows this list).47 However, this conclusion would be made cautiously, since deduction works best in closed systems such as mathematics, making it of limited use in forecasting human behavior.
  3. Abduction: Abductive reasoning describes the thought process that accompanies “insight” or intuition. When the information does not match that expected, the analyst asks “why?,” thereby generating novel hypotheses to explain given evidence that does not readily suggest a familiar explanation. For example, given two shipping manifests, one showing oranges and lemons being shipped from Venezuela to Florida, and the other showing carnations being shipped from Delaware to Colombia, abductive reasoning is what enables the analyst to take an analytic leap and ask, “Why is citrus fruit being sent to the worldwide capital of citrus farming, while carnations are being sent to the world’s primary exporter of that product? What is really going on here?” Thus, abduction relies on the analyst’s preparation and experience to suggest possible explanations that must then be tested. Abduction generates new research questions rather than solutions.48
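The toy sketch below illustrates the deductive, indicator-matching logic of the nuclear-test example in item 2. The indicator names, threshold, and function are illustrative assumptions; as the text cautions, any such deduction must be hedged, because human behavior is not a closed system.

```python
# Toy sketch of deduction: general rules (indicators that preceded the last
# test) applied to new observations. All indicator names are invented.
PRE_TEST_INDICATORS = {
    "instrumentation cables laid",
    "test shaft sealed",
    "observation bunkers staffed",
}

def assess_test_likelihood(observed_events: set) -> str:
    """Deduce a cautious judgment from how many known indicators recur."""
    matched = PRE_TEST_INDICATORS & observed_events
    fraction = len(matched) / len(PRE_TEST_INDICATORS)
    if fraction == 1.0:
        return "Test may be imminent (all known indicators observed)"
    if fraction >= 0.5:
        return "Possible test preparations (some indicators observed)"
    return "No deductive basis for a warning"

print(assess_test_likelihood({"test shaft sealed",
                              "instrumentation cables laid"}))
```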

SKILLS REQUIRED FOR INTELLIGENCE ANALYSIS

Any institution that relies on professionals for success and seeks to maintain an authentic learning climate for individual growth must require its members to read (to gain knowledge and insight), research (to learn how to ask good questions and find defensible answers), discuss (to appreciate opposing views and subject their own to rigorous debate), and write (to structure arguments and articulate them clearly and coherently).

Critical Thinking

It is by thinking that analysts transform information into intelligence. Critical thinking is the cognitive skill applied to make that transformation. Critical thinking can be defined as

[An] intellectually disciplined process of actively and skillfully conceptualizing, applying, analyzing, synthesizing, and/or evaluating information gathered from, or generated by, observation, experience, reflection, reasoning, or communication, as a guide to belief and action….Thinking about [our] thinking while [we’re] thinking in order to make [our] thinking better.50

There is a clear need to educate and train intelligence analysts to use their minds…[Only] by raising their awareness can the intelligence unit be assured that the analysts will avoid the traps in being slave to conformist thought, precedent and imposed cultural values—all enemies of objective analysis.51

An ordered thinking process requires careful judgments or judicious evaluations leading to defensible conclusions that provide an audit trail. When the results of analysis are controversial, subject to alternate interpretations, or possibly wrong, this audit trail can prove essential in defending the process used to reach the conclusions.
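One hypothetical way to make such an audit trail concrete is to record, for every judgment, the evidence and the inference that produced it. The sketch below is a minimal illustration; its classes and field names are assumptions for demonstration, not an Intelligence Community standard.

```python
# Minimal sketch of an analytic audit trail: each judgment records the
# evidence and inference behind it, so a contested conclusion can be
# defended step by step. The structure is an illustrative assumption.
from dataclasses import dataclass, field

@dataclass
class Judgment:
    conclusion: str
    evidence: list
    inference: str

@dataclass
class AuditTrail:
    steps: list = field(default_factory=list)

    def record(self, conclusion: str, evidence: list, inference: str) -> None:
        self.steps.append(Judgment(conclusion, evidence, inference))

    def replay(self) -> None:
        """Walk the trail from evidence to conclusion, step by step."""
        for i, step in enumerate(self.steps, 1):
            print(f"{i}. {step.conclusion} <- {step.inference} "
                  f"(evidence: {', '.join(step.evidence)})")

trail = AuditTrail()
trail.record("Facility is active",
             ["thermal imagery 3/2", "power consumption data"],
             "sustained heat signature implies ongoing operation")
trail.replay()
```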

Foreign Language Proficiency

Foreign language proficiency provides more than just a translation of non-English materials. The structure of a target’s language and that target’s culture are closely related. One well-known theory of this relationship, by Edward Sapir and Benjamin Whorf, posits that “language is a force in its own right and it affects how individuals in a society conceive and perceive reality.”64 Thus concepts essential to understanding the target are communicated in a context that goes beyond simplistic translation.

Context must determine the translation, and an analyst lacking foreign language skills must trust the linguist to correctly understand that context. The expertise required for that understanding might render the linguist a better intelligence analyst than the original analyst. This raises the question: “Is such duplication of personnel affordable?”

Research

Research skills provide discipline and consistency for the creation of value-added intelligence. By providing methodologies for defining the requirement to be answered, as well as methodologies for answering that query, research skills ensure analytic consistency and enable thorough exploration of the issues. Necessary research skills include methods of problem definition that ensure that, in collaboration with the consumer, analysts correctly define or redefine the problem in terms of a “research question,” so as to understand the consumer’s and the analyst’s own objectives.

Information Gathering and Manipulation

Information is the grist for intelligence analysis, and to be successful, analysts must aggressively seek it out. Different information/data manipulation skills are required for the various stages of the intelligence process; the sketch following the list below illustrates one way these stages might chain together.

  • Collection: This stage involves gathering information from all sources. The intelligence analyst directs the collection process, causing specific resources to be tasked. Related information manipulation skills include selecting and filtering in order to assess whether the information and its sources are of value.
  • Monitoring: Reliability of sources and the validity of the information are always in question. Monitoring skills focus on information review, and often may involve analysis of descriptors and summaries of that data.
  • Organizing: Skillful arrangement, formatting, and maintenance of data for analysis and technical report generation ensure access to materials in a usable format.
  • Analysis/Synthesis: Information manipulation skills can point to patterns, relationships, anomalies, and trends.
  • Interpretation: This is the stage in the process where information is transformed into intelligence by cognitive manipulation, that is, assigning meaning to analyzed and synthesized information using critical thinking. Computers aid in this step; however, a study of 12 major “analytic” software tools concludes that “true analysis will remain a people function, assisted by computer technology.”68
  • Dissemination: Dissemination, except for some graphic products, is now of course mostly electronic. Information preparation and presentation skills allow its transformation and publication, so that the results of analysis appear in usable formats, which may be further tailored by users.
  • Coordination: Coordination requires analysts as well as their managers to employ “collegial” skills in the bureaucratic environment; these skills are also needed to avoid diluting the intelligence message down to the “lowest common level of agreement.”
  • Evaluation: Internal and intra-community evaluation allows the intelligence to be discussed and placed in larger contexts than that viewed by a single agency. Such collaboration may also identify the additional intelligence required to clarify issues. Evaluation can become a continuous part of the production process.69
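A schematic sketch of how these stages might chain together appears below. The stage functions contain trivial placeholder logic; only the ordering of the stages comes from the text, and every identifier is an illustrative assumption.

```python
# Schematic sketch of the stages above as a pipeline. Placeholder logic
# only; the stage ordering is from the text, everything else is assumed.
def collect(sources):            # gather items from all sources
    return [item for src in sources for item in src]

def monitor(items):              # drop items failing a reliability check
    return [i for i in items if i.get("reliable", False)]

def organize(items):             # arrange data for analysis
    return sorted(items, key=lambda i: i["time"])

def analyze(items):              # point to patterns, anomalies, trends
    return {"trend": "rising" if len(items) > 2 else "flat", "items": items}

def interpret(analysis):         # assign meaning: information -> intelligence
    return f"Assessed activity trend: {analysis['trend']}"

sources = [
    [{"time": 1, "reliable": True}, {"time": 3, "reliable": False}],
    [{"time": 2, "reliable": True}],
]
print(interpret(analyze(organize(monitor(collect(sources))))))
```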

Project/Process Management

Few analysts enjoy the luxury of working full time on only one problem or on one aspect of a particular problem. We distinguish between projects and processes. The former tend to have finite scope and goals whereas the latter are open-ended. Both require planning, implementation, monitoring, and negotiating skills.70 A project/process plan defines and clarifies what needs to be accomplished; identifies necessary resources; creates a timeline, including milestones; and makes the analyst accountable for successful completion.
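A minimal sketch of such a plan as a data structure follows; the field names, the overdue-milestone check, and the example values are illustrative assumptions rather than any prescribed format.

```python
# Minimal sketch of a project/process plan: goal, resources, a timeline
# with milestones, and an accountable analyst. All fields are assumed.
from dataclasses import dataclass
from datetime import date

@dataclass
class Milestone:
    description: str
    due: date
    done: bool = False

@dataclass
class ProjectPlan:
    goal: str
    accountable_analyst: str
    resources: list
    milestones: list

    def overdue(self, today: date) -> list:
        """Milestones past due: where accountability gets checked."""
        return [m for m in self.milestones if not m.done and m.due < today]

plan = ProjectPlan(
    goal="Assess regional port capacity",
    accountable_analyst="J. Doe",
    resources=["imagery archive", "trade statistics"],
    milestones=[Milestone("Draft terms of reference", date(2024, 5, 1)),
                Milestone("Interim brief to consumer", date(2024, 6, 1))],
)
print([m.description for m in plan.overdue(date(2024, 5, 15))])
```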

KNOWLEDGE REQUIRED FOR INTELLIGENCE ANALYSIS

Without a solid knowledge base concerning the region or issue to which the analyst is assigned, . . . the individual will not even know what questions to ask. That is, the person will not really be qualified to be called an “analyst.”

— Ronald D. Garst and Max L. Gross71

Knowledge consists of familiarities, awareness, or understanding gained through experience or study; it includes both empirical material and that derived by inference or interpretation.72 Depending on the specific target, the knowledge required can vary widely.

Target Knowledge

Doing intelligence analysis in the information age is often like “being driven by someone with tunnel vision.”73 In the quest to answer a consumer’s questions, the analyst often pushes aside “all the fuzzy stuff that lies around the edge—context, background, history, common knowledge, social resources.”74 Yet, to do so is perilous, for these provide balance and perspective. They offer breadth of vision and ultimately allow analysts to make sense of the information under study. By providing the context into which analysts place their work, fields of study such as anthropology, comparative religion, economics, geography, history, international relations, psychology, and sociology all interact to contribute vital knowledge about the target, which both analysts and consumers need to understand. Changes in the culture, religion, geography, or economic systems (among others) of a target may themselves be subjects of an intelligence requirement.

The following selection of topics exemplifies some non-traditional but essential target knowledge areas required for thorough intelligence analysis.

Culture: Culture can be defined as a group’s values, standards, and beliefs. In turn, culture defines that group. The study of culture reveals the roles of individuals in the community, and how they relate to non-members of the culture. This provides insights into behaviors that are of value in predicting future behavior. This is true when the target is a people or a nation as well as when the target is a specific subgroup or individual member within a culture. Adda Bozeman points out that because political systems are grounded in cultures, “present day international relations are therefore by definition also intercultural relations … [A]nalysts and policymakers in the West would be more successful in their respective callings if they would examine the cultural infrastructures of the nations and political systems they are dealing with.”

Message of Language: The message of language is a part of culture, and while isolating it makes an artificial distinction, we do so to reiterate its importance for intelligence analysis. What languages are utilized, by whom, and in what context, is essential in understanding the target’s culture. For example, much is revealed if members of an insurgent group primarily communicate using the language of the elite members of their culture. Additionally, what the language indicates about class and personal relationships may provide clues to behaviors.

Technology: Technology itself can be the subject of study by the intelligence analyst. Someone developing a target may analyze specific technologies and their infrastructure as they pertain to that target. Further, the role of technology within a region, nation, or people is an indicator of behavior. The domains of communications, utilities, transportation, manufacturing, and others, as well as the attitudes of the people to them, are rich sources of study. Technology also can provide insights into sources of information that will be available to the intelligence analyst.

Professional Knowledge

In addition to understanding their targets, intelligence analysts need to know a great deal about the context and nature of the intelligence profession, and the resources available to help them do their job well. Understanding the plans and policies of their own government enables analysts to frame their work in terms of the nation’s strategic and tactical objectives. Intelligence consumers are government officials; their needs drive the analytical process and its priorities. Analysts base collection tasking on the imperative to match information sources to consumer needs. These information sources, such as human-source reporting, signals intercepts, and documentary research, provide the analyst with the raw materials for the creation of intelligence through analysis, synthesis, and interpretation.

In addition, analysts need to know what specific sources of information relevant to a particular inquiry are available for exploitation. Knowing which expert sources and subject matter experts can guide the analytic process, or can offer different or additional perspectives, enhances intelligence work. The reliability of these sources is also critical. When different sources provide contradictory information, the reliability of one source versus another may provide insights into which information is accurate; the sources may be open or secret, technical or human.

Finally, others, known and unknown, may be examining similar information for the same or different consumers. Awareness that sources of information, possibly vital information, exist, even though they remain undiscovered or untapped, keeps the analyst constantly seeking out new connections.

CONCLUSION

Returning to our thesis, what makes an intelligence analyst successful? Given that the analyst’s purpose is to create intelligence, success means following an effective process (rigorous analysis, sound management) and creating a quality product (one that conveys intelligence and meets the consumer’s needs). To do this requires appropriate abilities, knowledge, and personal characteristics for rigorous intelligence analysis and production. Well-honed capabilities to communicate, cooperate, and think, coupled with the skills that ensure technical competency, provide the means for intelligence work. Informed, deep knowledge of the issues and their background provides both content and context for analysis. Analysts who are motivated to succeed, to know targets, and to share that knowledge ensure that consumers receive intelligence of the highest caliber.

IMPLICATIONS FOR THE INTELLIGENCE ANALYSIS WORKFORCE

[Much] of the work of the intelligence community is highly specialized and requires exceptional creativity…. It is also safe to say that some of the most pressing analytic skills the community will require are precisely those we cannot even foresee at this time.

— Bruce D. Berkowitz and Allan E. Goodman79

A possible uptick in hiring and rising rates of eligibility for retirement mean that, at the least, the savvy of the analytic population will continue to dwindle at the lower end and retire from the upper end.80 Even an adequately sized analytic workforce, lacking adequate mentoring and training from senior, expert analysts, will leave the Intelligence Community unable to meet security challenges.

According to the Council on Foreign Relations’ independent report on the future of intelligence, “less than a tenth of what the United States spends on intelligence is devoted to analysis; it is the least expensive dimension of intelligence… This country could surely afford to spend more in those areas of analysis where being wrong can have major adverse consequences.”82 Winning the talent war requires smart investment in the hiring, training, and deployment of analysts.

Training is of little value unless it can be immediately applied. Thus organizational structures, culture, and processes must be aligned to permit and to reward rigorous analysis. Unless analysts are recognized and appreciated for performing sophisticated analysis, they will not embrace change. Significant recognition for high-level analysis will inspire others to follow, creating a culture that fosters and sustains excellence in tailored intelligence production.

Employees transferring into the analytic disciplines from other fields must have the prerequisite abilities and skills for analysis before joining this discipline. The field of intelligence analysis cannot safely be a catchall for employees transferring from downsized career fields.

The Aspin-Brown Commission on the Roles and Capabilities of the United States Intelligence Community identified several additional actions to improve the quality of analysis. These include a minimal prerequisite that analysts visit target countries as part of analytic orientation, rewards for acquiring and maintaining foreign language proficiency, encouragement to remain within substantive areas of expertise, and periodic rotational assignments to consumer agencies.89 Enacted as part of employee training and orientation, these measures can substantially enhance analysts’ target knowledge and skills.

Review of Cuba’s Intervention in Venezuela: A Strategic Occupation with Global Implications

Maria C. Werlau is the Executive Director of Cuba Archive, a non-profit organization incorporated in 2001 in Washington, D.C., whose mission is to promote human rights through research and information about the Communist Party takeover of Cuba and its subsequent oppression of political opposition. In her book Cuba’s Intervention in Venezuela: A Strategic Occupation with Global Implications, Werlau provides a theoretical framework and extensive documentation to support her claim that “revolutionary” Cuba essentially occupied Venezuela. This occupation occurred through asymmetric measures rather than a conquering military force, i.e., the strategic placement of assets that could be used to monitor, command, and control Venezuela’s security forces, economy, information, communications, and society in general. The book also explores the evolution of a longstanding ambition of Fidel Castro’s: to unite Central America, South America, and the Caribbean into a confederation under his leadership. The political alliance with Hugo Chavez and their regional integration project, ALBA, operating in conjunction with an international criminal network organized to support their political goals, serves this end. Cuba’s Intervention in Venezuela, along with other books such as Why Cuba Matters: New Threats in America’s Backyard and La Franquicia Cubana: Una Dictadura Científica, shows how Cuba, with its smaller population and economy, was able to take over Venezuela thanks to a highly trained intelligence community and a unique methodological tool kit, first transferred by the Soviets and refined through decades of operations in Cuba and abroad. As the title implies, Werlau also shows how the implications of this occupation are not just regional but global.

The book’s first chapter covers Fidel Castro’s dream of a united Latin America and details plans enacted by the Cuban Communist Party to establish armed political fronts, train foreign political operatives in support roles, and develop collaborative relationships with the leaders of Socialist and Communist parties in Venezuela and other parts of Latin America, the Caribbean, and Africa. Citing an interview with Enrique Garcia, a former captain in Cuba’s General Directorate of Intelligence, Werlau shows that Cuban efforts financed by the Soviet Union included espionage, penetration of organizations, and the buying of influence via substantial cash payments to presidential candidates throughout these regions.

Chapters two through six cover a large number of diverse and detailed examples of the legal, economic, military, and political cooperative agreements, accords, treaties, and charters between Venezuela and Cuba, as well as their social impact. Coverage of this judicial, economic, political, technological, national security, and social intertwining takes a hundred pages. It shows how Cuban advisors in the military, economic, and social fields have come to play huge roles in both monitoring and managing the country. Reading these chapters makes the calls by the Venezuelan opposition for the expulsion of all Cuban contractors from Venezuela more than understandable: several thousand people in key roles are able to function as an occupying force when they can collude with a domestic party, the PSUV, that wishes to turn their country into another Cuba. Citing Lt. Col. Juan Reinaldo, who for 17 years was a member of Fidel Castro’s personal security detail, she shows there is no doubt that Cuba’s expertise in totalitarian social control was transferred to the PSUV via training and the placement of advisors throughout Venezuela’s armed forces and key social and economic sectors.

Chapter 7, A Post-Cold War Continental Project, opens with warnings by General Guaicaipuro Lameda, the former president of PDVSA, published shortly after a trip intended to indoctrinate him into Fidel Castro’s plans for ensuring that Hugo Chavez and the PSUV would stay in power and radically transform the country over a period of no less than 30 years. Fidel’s recipe for Venezuela included:

  1. Let those who dislike the revolution leave
  2. Keep people busy covering their basic needs and repressed
  3. Burn non-essential oil money to buy loyalties and disable opponents
  4. Find or create a credible and powerful enemy
  5. Keep the poor impoverished but hopeful
  6. Corner the opposition
  7. Establish a parallel/dual economy: one for the government’s purposes (for the poor), the other one, unattainable and unbearable, for the opposition
  8. Implant terror: among supporters, the fear of losing what the government gives; among opponents, the fear of losing what they have, including their life and freedom
  9. Make it difficult to do things legally in order to keep people tied up, compromised, dominated, and disabled
  10. Galvanize hope through elections

This blueprint for the creation of a new historical bloc appears inspired by Antonio Gramsci, the political philosopher and co-founder of the Italian Communist Party, of whom Chavez claimed to have “an excellent knowledge” and who is frequently cited within PSUV materials such as their Red Book. Venezuela’s transformation into the barracks-style Communism of Cuba with ‘more frills’ was the first big step towards the post-Cold War Continental Transformation Project called Nuestra America.

In addition to these policies, which increased political polarization and marginalized parties organized around traditional democratic values, attitudes, and beliefs, Venezuela’s educational model was transformed to align with Cuba’s indoctrination/education system. More than 20 million books for all levels of the Venezuelan education system were published in Cuba; they taught a Marxist worldview and highlighted how Simon Bolivar, the greatest Venezuelan, had a Cuban nursemaid.

Chapter 8, A Criminal Network with Extra-Regional Ties, picks up where the last pages of Chapter Seven left off: the numerous political and criminal groups that have come to partner with the Chavez/Maduro/Castro governments. One of the admirable qualities of this book is its citation of sources, from personal interviews to judicial proceedings to memoirs published by Cuban and Venezuelan regime insiders. In this section, which highlights how Cuba helped make Venezuela into a mafia state through support of narco-trafficking, the rich source list alone is worth the cost of the book.

In this section, Werlau links the interests of transnational criminal organizations with the political policies promoted by the Foro de Sao Paulo. Embezzled public funds and drug proceeds, along with disruptive targeting by media and criminal actors, enable high public officials, prosecutors, judges, and journalists to be bribed or threatened. Targets are, basically, given the choice of plata (money) or plomo (bullets). Some of the highlighted acts include state-sanctioned bribery rings, money laundering rings, schemes to sell passports, overbidding, underperforming, direct and indirect kickbacks, and the provision of safe haven for designated terrorist organizations.

Links with non-state organizations such as Hezbollah, the IRA, ETA, and others are covered, along with Iran, China, and Russia, which provide loans that somehow never make it into the national accounts. She describes complex networks that use diverse criminal portfolios to undermine the rule of law, democratic governance, and U.S. alliances throughout the Western Hemisphere.

Chapter 9, Cuba’s Core Competency: Soft Power on Steroids, links together the activities described in the preceding chapters and explains how they all demonstrate a comprehensive effort at organizational and political technology transfer. What’s been ongoing since the first election of Hugo Chavez – just as Fidel Castro and Hugo Chavez admit in their published statements – is an effort from the outside in to transform Venezuelan society.

Werlau summarizes this as follows:

“Cuba’s core competency, or comparative advantage, is rooted in the centralized command-and-control totalitarian nature of the system, unconstrained by judicial, ethical, and moral boundaries or by term limits, balance of powers, transparency, accountability, and bureaucratic-institutional rules and restraints that characterize even the weakest democracy. The Cuban Politburo and Communist Party leadership has absolute power to strategize with great consistency as well as ample flexibility to use even the most unsavory tactics. It can act very quickly without a need for consultation and it can plan for the long term as well as wait patiently, as its power is enduring, it does not have to face electoral challenges or term limits.”

Werlau notes that Cuba is an innovator in this field, as all of this predates Russia’s Gerasimov Doctrine of 2013, which states that the rules of war have changed and that the military should embrace hybrid and asymmetrical actions and nonmilitary means to achieve military and strategic goals, combining the use of special forces with information warfare to create a permanently operating front through the entire territory of the enemy state. I would, however, disagree with this assessment: this is a variation of the same sort of tactics that the Soviets/Russians first developed following the professionalization of the CCCP, and it was they who transferred this knowledge to Cuba.

Citing interviews with a Cuban intelligence officer who defected, Werlau describes how Cuba’s Directorate of Intelligence systematically penetrates governments, international organizations, media, academia, and all of society in select countries, particularly the United States. She highlights a number of these organizations, such as the ICAP and Prensa Latina, and organizations targeted for influence such as LSA and CLACSO. Noting the role of the U.N. in facilitating these activities, she shows how Cuba has the second highest per capita number of people given diplomatic credentials, credentials that enable ambassadors and their staff to engage in espionage and recruitment in other countries with a ‘get out of jail free’ card.

Following further details on similar events, Werlau highlights how Venezuela has come to enact policies that function as asymmetrical warfare, such as forced migration as state policy, creation of export-based criminal networks, and a state-supported human trafficking business.

Chapter Ten, The International Response, highlights how, as a result of such an extensive intelligence apparatus, the international response to Cuba’s occupation of Venezuela, at the invitation of Hugo Chavez and then Nicolas Maduro, has largely been muted. This is all the more understandable given the large number of individuals, networks, groups, companies, and countries involved. Werlau’s book, however, does a great job of organizing this material to present a holistic picture of just what asymmetric warfare can accomplish: the takeover of a state government without military battle, using only military-supported intelligence agents.

Review of Sensemaking: A Structure for an Intelligence Revolution

Sensemaking: A Structure for an Intelligence Revolution

By David T. Moore, National Defense Intelligence College, Washington, DC

March 2011

FOREWORD

Gregory F. Treverton
Director
RAND Corporation Center for Global Risk and Security

We at NGA used to look for things and know what we were looking for. If we saw a Soviet T-72 tank, we knew we’d find a number of its brethren nearby. Now, though, we’re not looking for things. Instead, we’re looking for activities or transactions. And we don’t know what we’re looking for.

In fancier language, the paradigm of intelligence and intelligence analysis has changed, driven primarily by the shift in targets from the primacy of nation-states to transnational groups or irregular forces. In the world of the nation-state, I and others divided intelligence problems into puzzles and mysteries (or variants of those words).1 Puzzles are those questions that have a definitive answer in principle. How many nuclear missiles the Soviet Union had was a puzzle. So is whether Al Qaeda possesses fissile material. By contrast, mysteries are questions that cannot be answered with certainty. They are future and contingent.

For puzzles, intelligence tried to produce the answer.

For mysteries there was no answer. Instead, analysts sought to frame the mystery by providing a best estimate, along, perhaps, with excursions or scenarios to test the sensitivity of critical factors. If intelligence failed to understand the full picture of Soviet missiles, and the puzzle became a mystery, it at least knew something about where to look: there was experience and theory about missile building, plus historical experience of Soviet programs. The mystery came with some shape.

However, today’s transnational threats confront us with something more than mysteries. I call these shapeless mysteries-plus “complexities,” borrowing Dave Snowden’s term. They are sometimes called, as Moore notes, “wicked problems” or simply “messes.” They come without history or shape. Large numbers of relatively small actors respond to a shifting set of situational factors. Thus, they do not necessarily repeat in any established pattern and are not amenable to predictive analysis in the same way as mysteries. Those characteristics describe many transnational targets, like terrorists—small groups forming and reforming, seeking to find vulnerabilities, thus adapting constantly, and interacting in ways that may be new.

For complexities, especially, the challenge is to employ sensemaking—the term is from Michigan psychologist Karl Weick. Exactly how to accomplish sensemaking is a task that still mostly lies before us, which makes this book such an important contribution. Sensemaking departs, as Moore notes, from the postwar tradition of Sherman Kent, in which analysis meant, in the dictionary’s language, “the process of separating something into its constituent elements.” Sensemaking also blurs America’s bright white line between intelligence and policy, for, ideally, the two would try to make sense together, sometimes disaggregating events, sometimes aggregating multiple perspectives, always entertaining new hypotheses, all against the recognition that dramatic failure (or success) might occur at any moment.

He [David Moore] is very careful about classification. That means the visible trails of his practice in his scholarship are sparse, and his cases are mostly familiar ones, albeit ones often spun in new directions.

The new paradigm makes the use of machines and method imperative, letting machines do what they do best—searching large amounts of data, remembering old patterns, and the like—while letting humans use the judgment they alone can apply. Yet the tests by Moore and his colleagues remind us that methods are critical but only if they have been tested. It turns out, for instance, that ACH, analysis of competing hypotheses, a method more frequently used now and one that has been tested, isn’t all that valuable, at least not for analysts beyond the novice level.

COMMENTARY

Anthony Olcott, PhD
Associate, Institute for the Study of Diplomacy, Georgetown University

David Moore is right to talk of the need for an intelligence revolution. However, as Lenin learned in the 18 years that passed between publication of The Development of Capitalism in Russia and taking over the Winter Palace, it takes more than a diagnosis and a prescription to make a revolution. Although his is among the best, Moore’s book is also but the latest addition to a groaning shelf of books devoted to intelligence and analytic reform while the companion shelf, for books on how to improve the policy process, sits dusty and all but empty. In that regard, even though Moore’s discussion of the processes of analysis and how the ways we answer questions might be improved is one of the strongest in recent memory, the most valuable part of the book could well be the somewhat smaller amount of attention it devotes to the problem of how we formulate our questions in the first place.

Kendall did not share Kent’s conviction that the job of the analyst was “to stand behind [the policymakers] with the book opened at the right page, to call their attention to the stubborn fact they may be neglecting.”4

Moore has done a deep and convincing job of diagnosing the ills of the IC, and has proposed a rich and promising cure. This, as Hilton points out, is an extended act of cognition. What lies between this book and Moore’s revolution, however, is the need to have others come to the same conclusion— which, as Hilton points out, requires communication, not cognition.

Sixty years ago a small group of analysts—dubbed “Talmudists” for their pains—worked out a complex, sophisticated method of deriving actionable intelligence from the tightly controlled propaganda outlets of the USSR and Mao’s China. This let IC sinologists spot the first signs of the Sino-Soviet split as early as April 1952, and by 1955 Khrushchev had been tagged as the likely winner in the struggle to consolidate power in the Kremlin after Stalin’s death. Those early indicators, however, remained scoffed at and unacted upon precisely because the methodology—which a colleague in the CIA compared to studying “invisible writing on slugs”6—was too complex and too weird to be easily explained to policymakers—who, in any case, already believed other hypotheses, and had their own “facts.”7

(6) Richard Shryock, “For An Eclectic Sovietology,” Studies in Intelligence, vol. 8, no. 1 (Winter 1964).

COMMENTARY

Emily S. Patterson, PhD
Assistant Professor, College of Medicine
The Ohio State University

[It] is an achievable goal for the vast majority of United States policy to be directly informed by evidence that is systematically validated, collated, and synthesized by teams of professional intelligence analysts.

This book is a critical milestone in attaining the goal of analysis directly supporting evidence-based policymaking. This book’s primary contribution is to conduct sensemaking on the label sensemaking. Decades of relevant academic literatures have been synthesized into one framework that illustrates how disparate research streams relate to each other and to the framework. Until now, there has not been such an extensive effort to pull together related research on sensemaking from such diverse disciplines as psychology, political science, philosophy, organizational science, business, education, economics, design, human-computer interaction, naturalistic decisionmaking, and macrocognition.

The contributions of this book go beyond a literature review, however, in that an action-oriented stance is taken toward capturing nuggets of insight on how to improve aspects of analysis. The categories themselves are useful in putting some shape and structure to the amorphous value that expertise brings to creating a solid analytic product in an uncertain world: planning, foraging, marshaling, understanding, and communicating. Of particular value is describing different aspects of validation that are relevant to intelligence sensemaking, and distinguishing processes for predicting future events (foresight) from processes for describing past events and assessing their impacts (hindsight).

look at what is measured [to] operationally determine how people truly define a concept.

Even if high rigor is not possible under extreme time pressure, data overload, and workload conditions, the measure has potential value in supporting negotiations for what aspects are most important to do well for a given task, as well as communicating the strengths and weaknesses of the process behind an analytic conclusion.

COMMENTARY

Christian P. Westermann
Senior Analyst
Bureau of Intelligence and Research, U.S. Department of State

History will tell us if current intelligence reforms are evolutionary or revolutionary, but the Intelligence Community is responding to mandated change brought about by the 2004 Intelligence Reform and Terrorism Prevention Act (IRTPA).8 In particular, the analytic and collector communities are adjusting to one of IRTPA’s pillars—improved information sharing. As reforms unfold, the collector and analyst must adapt to new rules and new analytic standards, and incorporate more methodologies, techniques, and alternatives in their analysis, in collaboration with managers and tradecraft cells in the national intelligence organizations. These new structures and guidelines present an intellectual challenge as well as a bureaucratic maze for the collector and analyst struggling not only to “produce” intelligence in a timely fashion but also to improve their product. This is not easy for intelligence professionals because time is not on their side. This is why improving the way in which all analysts think is so important and why an understanding of sensemaking will help advance the profession beyond the “established analytic paradigm” for complex problems and create greater possibilities for the application of imagination in the IC. The failure to properly assess Saddam Hussein’s WMD programs during the lead-up to Operation Iraqi Freedom is the preferred example of this failure to imagine alternatives. The corporate solution to this problem is increased collaboration and information sharing; David Moore is not in disagreement but has suggested that it must go beyond new methodologies or techniques—it must be done with a strong sense of rigor and individualism in one’s thinking.

David Moore has written for the Intelligence Community a revolutionary epistemology. His novel construct for intelligence professionals is the foundation for a philosophy of intelligence.

Moore’s prescription is to take the disaggregation of data, commonly referred to as analysis, synthesize it, and then apply to it one’s interpretation and communication skills to make sense of the information. Sensemaking therefore is a theory of knowledge for the intelligence professional and also a practice to aid the difficult art of intelligence reasoning.

His attention to revolutionary change in the art of intelligence thinking grows from his recognition that organizational reform has been ongoing for decades and that, despite those changes, attendant failures have occurred and continue to occur. Therefore the only hope for achieving positive reform rests with changing the practice of intelligence, whereby the individual collector and analyst, working together and accepting the responsibility to think critically but also independently and across the Community, make sense of the 21st-century national security environment.

COMMENTARY

Phil Williams, PhD
Director, Matthew B. Ridgway Center for International Security Studies
Wesley W. Posvar Chair of International Security
Graduate School of Public and International Affairs
University of Pittsburgh

Moore’s Law for Intelligence

Any book that discusses, amongst other things, red brains and blue brains, kayaking, information foraging, flashlights as blindfolds, space-time envelopes, and intellectual audit trails is out of the ordinary. When you throw in the author’s contention that intelligence as currently practiced is akin to medicine in the 14th century, you have a book that will raise hackles, blood pressure, and voices.

This is not an easy read. But the overall thesis is straightforward and compelling: the environment within which the U.S. intelligence community now finds itself is not only highly complex but also full of wicked problems. To provide the kind of intelligence that is useful, relevant, and helpful to policymakers who have to anticipate and respond to these problems and challenges, Moore argues that the traditional paradigm developed largely by Sherman Kent has to be superseded by a new paradigm based largely on ideas initially outlined by Willmoore Kendall, a contemporary critic of Kent. The original Moore’s Law11 was narrowly technical; David Moore in contrast argues that a complex environment full of mysteries, not puzzles, requires holistic thinking (as opposed to simply disaggregating problems), mindfulness (as opposed to mindlessness, which he also elucidates), and a dynamic willingness to change paradigms, shift perspectives, and abandon strongly held perceptions. The book also develops the notion of sensemaking rigor and shows how metrics of rigor can be applied to several studies examining the rise and impact of non-state actors.

David Moore’s analysis is important and deserves to be widely read in the intelligence community and in the academic world.

It would have been helpful, for example, if David Moore had considered more explicitly David Snowden’s argument that making sense of a complex environment requires probing the environment. Further thought about this suggests that law enforcement is particularly good at this form of knowledge elicitation and sensemaking: sting operations, controlled deliveries, and infiltration of criminal organizations are all probing mechanisms that can contribute significantly to an increased level of understanding and, concomitantly, to an enhanced capacity for effective action. For many intelligence professionals, especially those who have had a dismissive view of law enforcement, the idea that law enforcement approaches to sensemaking might be ahead of those in the intelligence community is likely to be as uncomfortable as most of the arguments in David Moore’s book. Certainly Moore’s volume is designed to shake and to stir. It is a manifesto for an intellectual revolution in the approach to intelligence and, as such, is likely to be both acclaimed and reviled.

PREFACE

On Being Mindful

What Is Mindlessness?

We are surrounded by errors and they are ours. Intelligence officials at the national level repeatedly use the same excuses for professional errors and for the systemic failures that follow. Despite directives to “fix” the structures, and most recently the means, by which intelligence is created, we insistently fail at our obligation to make early sense of vital threats and opportunities.

Ellen Langer, summarizing her pioneering social psychology research, finds mindlessness to arise from an over-reliance on “categories and distinctions created in the past.”12 She holds that such categories “take on a life of their own.”13

Langer also sees mindlessness arising from “automatic behavior.” Here, people rely on automatic responses as the basis for their behavior, as when one writes “a check in January with the previous year’s date.”14 By extension, intelligence professionals, in assessing sources, may develop a habit of discounting human intelligence sources because some are untrustworthy. As a result, they may miss novel insights because they use certain sources to the exclusion of others.

Finally, mindlessness can result from a failure to take into account alternative information that transcends our comfortable worldview. Langer observes that “[highly] specific instructions…encourage mindlessness” because they define what is acceptable and limit the viability of alternative signals that could lead to more accurate understanding of a phenomenon.

There were in fact 100 nuclear-tipped tactical missiles deployed on the island months before the arrival of the more infamous strategic missiles.17 A rigid notion of what constituted a nuclear missile, usually conceived as an offensive weapon, appears to have contributed to the case officers’ mindless disregard of the witnesses.

With respect to intelligence consumers, two faculty members at the International Institute for Management Development (IMD), corporate strategy expert Cyril Bouquet and corporate leadership and organization expert Ben Bryant, suggest that “decision makers often suffer from poor attention management, being obsessed with the wrong types of signals and ignoring possibilities that could significantly improve the fate of their undertakings.”18 They characterize these behaviors as fixation and relaxation. People who fixate “become so preoccupied with a few central signals that they largely ignore things at the periphery.”1

Bouquet and Bryant identify relaxation as occurring when, after a “sustained period of high concentration,” people lose focus on the task at hand and look instead to the ultimate goal.23

Sometimes ascribed to intelligence professionals’ and national consumers’ falling prey to “creeping normalcy,” relaxation was also a contributor to Israel’s failure to anticipate the attacks by Egypt and Syria in 1973.

In sum, mindlessness too often guides the assessment of affairs in too many domains, leading to errors, failures, and catastrophes. Mindlessness is deemed unacceptable within the larger American society only when the resulting errors lead to accidents and disasters. Within the domain of intelligence, however, mindlessness is completely unacceptable: one can never be certain in foresight whether errors will occur, so intelligence professionals must seek to anticipate, recognize, and avoid them at all costs.

Attaining Mindfulness

The antithesis of mindlessness is mindfulness.

For Langer, a mindful state corresponds with: “(1) [aptitude for the] creation of new categories; (2) openness to new information; and (3) awareness of more than one perspective.”26 For example, as an intelligence professional considers who might be a member of Al Qaeda, a mindful attitude would involve constant reassessment and categorization of who might hold such membership—leaving the path open to new information for making sense of the organization and its membership. Thus, as we apply the idea that “[a] steer is a steak to a rancher, a sacred object to a Hindu, and a collection of genes and proteins to a molecular biologist,”27 the notion of a Nigerian male or even a blonde woman from Pennsylvania as a possible Al Qaeda affiliate would emerge from a mindful perspective.

Leadership scholar Deepak Sethi sees mindfulness as a “form of meditation” teaching “three simple-on-the-surface yet revolutionary skills: Focus, Awareness, and Living in the Moment.”28 This definition descends from millennia of Buddhist tradition. He argues that rather than an esoteric method it is “very practical, action oriented, and transformational.” Sethi believes that one practical way to bring about mindfulness is through the use of daily meditation, first using one’s breathing as a focus, and then using “specific daily activities such as meetings with another colleague.” However, the “real challenge [of employing mindfulness] is to take it from the meditation chair to the office chair and the real world.”29 Intelligence journeymen face this dilemma from a different perspective. They confront the real world and are challenged to contemplate their own thought processes as they engage it.

Ben Bryant and IMD research associate Jeanny Wildi write that mindfulness “involves the ability to accurately recognize where one is in one’s emotional landscape and allows…understanding, empathy, and capacity for accurate analysis and problem-solving.”34 They identify a process of detaching, noticing, and developing “here and now awareness.”35 Detachment, for example, allows a viewer to remember that a movie is really merely a “beam of light passing through a piece of moving celluloid projecting onto a screen with some sound and music that are designed to generate particular emotions.”36

In intelligence work, detachment involves stepping back from the full sensual experience of an issue to consider the actors involved, their motives, and the larger context. Critical thinking as it is taught in the Intelligence Community attempts to make sense of the overall purpose or goal of a phenomenon, the points of view and assumptions of the actors involved, the implications of their acting in certain fashions, and other aspects of the larger context surrounding the issue.37 Questioning the available evidence and the inferences arising from it brings further detachment from the issue.

Noticing involves remaining open to both internal and external stimuli. Ultimately, situational information is conveyed from external sources through sight, sound, touch, smell, and taste. People can think consciously about these but they tend to process them using more autonomic brain structures, often without noticing they are doing so. The unease one feels about getting into a taxi or onto an elevator in an unfamiliar setting is an example of such input. In intelligence work this might be represented as a hunch about what an adversary will do. As Daniel Kahneman and Gary Klein note, in certain environments—where one can learn the cues—these intuitions may be quite accurate.38 However, in domains where one has not developed expertise, such intuitions can be inaccurate.39 The challenge is determining which of these situations one is in. This brings us back to the imperative of applying mindful detachment from the situation.

34 Ben Bryant and Jeanny Wildi, “Mindfulness,” Perspectives for Managers, no. 162 (September 2008), 1, URL: <http://www.imd.ch/research/publications/upload/PFM162_LR_Bryant_Wildi.pdf>, accessed 14 January 2010. Cited hereafter as Bryant and Wildi, “Mindfulness.”

As Warren Fishbein and Gregory Treverton note, “[mindfulness] is the result of a never-ending effort to challenge expectations and to consider alternative possibilities.”43

The practical prescription, then, is that “executives need to meditate in their own way, find ways to step back and reflect on their thoughts, actions, and motivations, and decide which ones are really supportive of their strategic agendas.”

Definitions for Making Sense of Sensemaking

Intelligence is a “specialized form of knowledge…[that] informs leaders, uniquely aiding their judgment and decision-making.” It is a type of knowledge created through organized activity that adds unique value to the policy- or decisionmaker’s deliberations. In the U.S. context, it makes sense of phenomena of interest to national leaders, warfighters, and those who directly and indirectly support them. Intelligence makes sense of phenomena related to the social behaviors of others. It reflects an interest in what anyone will do to, and with, others that could affect the national interests of the United States as well as the prosperity and security of its citizens. Intelligence maintains an interest in external phenomena, such as epidemic or pandemic diseases, that impact U.S. national interests. In contrast to some popular portrayals, it is not voyeuristic: what others do privately and alone is generally of little interest or value except as it affects how they relate to, and behave toward, others. In other words, when private behaviors reveal either vulnerabilities or preferences, they may become of value to intelligence practitioners.

Sensemaking as it is used here refers to “a set of philosophical assumptions, substantive propositions, methodological framings, and methods.” As Mark Stefik notes (referring to work done with colleagues Stuart Card and Peter Pirolli), it “is how we gain a necessary understanding of relevant parts of our world. Everyone does it.” Sensemaking goes beyond analysis, a disaggregative process, and also beyond synthesis, which meaningfully integrates factors relevant to an issue. It includes an interpretation of the results of that analysis and synthesis. It is sometimes referred to as an approach to creating situational awareness “in situations of uncertainty.” Gary Klein, Brian Moon, and Robert Hoffman consider the elements of sensemaking and conclude that it “is a motivated, continuous effort to understand connections (which can be among people, places, and events) in order to anticipate their trajectories and act effectively.”

These authors conclude that “the phenomena of sensemaking remain ripe for further empirical investigation and [warn] that the common view of sensemaking might suffer from the tendency toward reductive explanation.” By reductive explanation Klein, Moon, and Hoffman refer to a tendency to overly simplify explanations—to “reduce” complex phenomena to simplistic models facilitating an apparently needed but shallow understanding.

Intelligence sensemaking encompasses the processes by which specialized knowledge about ambiguous, complex, and uncertain issues is created. This knowledge is generated by professionals who in this context become known as Intelligence Sensemakers.

These terms are used as defined here throughout this book.

Sensemaking: A Structure for an Intelligence Revolution

CHAPTER 1 Introduction

Where We Are

How people notice and make sense of phenomena are core issues in assessing intelligence successes and failures. Members of the U.S. Intelligence Community (IC) became adept at responding to certain sets of phenomena and “analyzing” their significance (not always correctly) during the Cold War. The paradigm was one of “hard, formalized and centralized processes, involving planned searches, scrupulously sticking with a cycle of gathering, analyzing, estimating and disseminating supposed enriched information.”

A growing professional literature by intelligence practitioners discusses these trends and their implications for advising and warning policymakers.58

The literature by practitioners embodies a trust that national intelligence producers can overcome the “inherent” enemies of intelligence to prevent strategic intelligence failure.59 The disparity between this approach and accepting the inevitability of intelligence failure has grown sharp enough to warrant the identification of separate camps or schools of “skeptics” and “meliorists.”60 As a leading skeptic, Richard Betts charitably plants the hopeful note that in ambiguous situations, “the intelligence officer may perform most usefully by not offering the answer sought by authorities but by forcing questions on them, acting as a Socratic agnostic.”61 However, he completes this thought by declaring, fatalistically, that most leaders will neither appreciate nor accept this approach.

Jervis asserts that policymakers and decisionmakers “need confidence and political support, and honest intelligence unfortunately often diminishes rather than increases these goods by pointing to ambiguities, uncertainties, and the costs and risks of policies.”63 The antagonism is exacerbated when policy is revealed to be flawed and to have ignored intelligence knowledge.

Jervis’ article on intelligence and policy relations, while it correctly notes the tensions arising from the differing roles of intelligence and policy, over-generalizes the homogeneity of the policy community. It is the author’s experience that outside of the highest levels, there are many levels of policymaking that both encourage and welcome the contributions of intelligence. Indeed, some parts of the policy community, beyond the Department of Defense (DoD) where it is the norm to do so, rely strongly on intelligence. Further, disagreements (which Jervis consistently labels conflict) are inherent and typically welcome in the process. Hard questions about the accuracy of judgments must be asked. If we are doomed to such “disagreements,” then it is a doom we should be eager to embrace.

The other perspective is that of the meliorists—those who believe intelligence processes can be improved. The present author resides in this camp, preferring to believe that the application of well-informed, mindful expertise, as developed in the present work, can bring positive and substantive value to the fulfillment of the IC’s obligations.

Intense attention within and outside the IC has focused on the means by which pertinent phenomena are to be understood. So-called intelligence “analytic” methods are being unshelved or developed and taught to novice and experienced intelligence professionals alike. However, less fully considered are the appropriateness and validity of these methods, as well as the underlying assumptions they enshrine. Even less well understood is what happens when specific methods are combined, and how those combinations may be made. Several ways exist to characterize these methods in terms of their purpose. However, to date, there is no readily available way to characterize methodological appropriateness or effectiveness, nor the limitations of individual methods. We also lack sound guidance on the use of combined methodologies, despite some recent, promising literature.

Before these deficiencies can be remedied, however, we need to reframe the way in which intelligence is created. Such a re-conceptualization involves critically examining what intelligence practitioners actually do, and why. The examination demands methodological rigor with particular attention to how we might ensure the validity of our approach to the work of intelligence. If the examination indicates that the existing paradigm for intelligence creation is inadequate, then a revolutionary shift in IC habits will be justified.

The intelligence-creation process remains largely a product of Cold War-era institutions and thinking, using the same cognitive frameworks that have been employed for decades. Some argue that what worked in the past is still appropriate. However, as numerous executive and legislative reports confirm, intelligence targets have in fact evolved: adversaries’ goals have changed, and their methods have evolved, even if the threats they pose seem very familiar. In sum, the old national intelligence paradigm is woefully out of date.

Intelligence issues are not the same as the issues framed separately by policymakers. To partner successfully with policymakers, intelligence professionals must consider issues from multiple perspectives. This is the role of sensemaking. Yes, the sensemaking process includes “analysis” or attacking issues by “taking them apart.” The process also includes synthesis—putting the pieces back together; interpretation—making sense of what the evidence means; and communication—sharing the findings with interested consumers. Essential to these processes is another, that of sound planning or “design.”69 While it could be said that this is what intelligence analysts do, such a statement is epistemologically false. Strictly speaking, intelligence analysts only take issues apart.

Why should we be concerned with a matter of semantics? In short, because the terms we use within the Intelligence Community shape and reflect our practice. If we are to change the culture of intelligence, and be changed by it, our practice of intelligence must also change. New language encourages a new paradigm, and paradigm shifts are revolutionary, not evolutionary.

Kent’s Imperative70

When much of the tradecraft of intelligence was put in place sixty or more years ago, the dominant framework was that of the historian as scientist. The primary intellectual framework for Cold War intelligence at the national level grew from Sherman Kent’s seminal work, Strategic Intelligence for American World Policy.71 Kent’s legacy remains active in the National Intelligence Council and the Community at large.72 Although decision theory and other social science thinking began to influence the creation of intelligence in the 1960s and 1970s, these inputs languished until the reform efforts of recent years. More recently, advances in cognitive science, anthropology, decision theory, knowledge theory, and methods and operations research have brought us to the brink of informed, mindful intelligence sensemaking.

Sherman Kent argues that in creating predictive intelligence about its adversaries “the United States should know two things. These are: (1)…strategic stature, (2)…specific vulnerabilities.”73 These objectives focus on capabilities and draw heavily from the “descriptive and reportorial elements” of intelligence for basic data.74

The Failure of an Analytic Paradigm…

Kent’s preference for gathering and disaggregating more and more data to find answers fails today in the face of information volume, velocity, and volatility. Marshaling and disaggregating ever more data does not equate to contextual understanding. Further, the assumption that larger pipes to collect data and larger arrays to store it will then allow us to uncover the hidden, clarifying nuggets is misleading.

Consider what actually happens when intelligence professionals look for an answer to a problem or question. They do not just disaggregate data. Instead, people inquisitively (and selectively) interpret patterns by comparing observed, newly emergent phenomena to what they already “understand.” They make sense of phenomena by asking questions; foraging for information; marshaling it into evidence; analyzing, synthesizing, and interpreting that evidence, and communicating their evidence-based understanding of issues to others. Something makes sense because, based on their experience, its pattern is similar to something they previously have seen and that made sense to them. They may even employ a new, self-generated pattern based on previously learned and remembered patterns if they do not get a good match to an ostensible pattern.78

In other words, one must be able to convincingly correlate ostensible patterns to the data or information for which one is attempting to “make sense.” This is not always possible, especially if the phenomenon or issue is broad, novel, or poorly understood; that is, not easily subject to confirmation by universal human sensory apparatus.

For practitioners to create intelligence knowledge—even with an acknowledged degree of uncertainty—therefore requires much more than mere “analysis.” One alternative framework is embodied in the concept of sensemaking. Sensemaking begins with a mindful planning and questioning that leads to foraging for answers. It is true that along the way the resulting relevant assemblage of information—or evidence—is disaggregated into its constituent elements. However, it is also synthesized or combined to form a theory or systematic interpretation of the issue that subsequently must be explained, and convincingly. Throughout sensemaking, a continuous assessment is demanded of both the processes by which the intelligence is created and of the intelligence knowledge itself.81 Mindfulness—as discussed above in the Preface—coupled with a critical thinking-based approach, provides the vigilance, awareness, and self-reflection needed to assess an issue rigorously. This is a central point: Intelligence does not exist in a vacuum. It must contribute to the understanding of an issue by informing the concerned parties of a perspective or information they did not already know. Ultimately, if no one is concerned about the knowledge sensemakers create, it is not intelligence.

IARPA, the Intelligence Advanced Research Projects Activity, employs a definition of sensemaking that is complementary to that developed here.86 They propose that sensemaking is “a core human cognitive ability [that] underlies intelligence analysts’ ability to recognize and explain relationships among sparse and ambiguous data.”87 This book accepts that perspective and develops the psychological, behavioral, and social levels of sensemaking as they apply to intelligence creation. By contrast, IARPA’s own program on sensemaking seeks to build upon advances in computational cognitive neuroscience that reveal “the underlying neuro-cognitive mechanisms of sensemaking.”88

IARPA, BAA-10-04, 4. On the emerging discipline of cognitive neuroscience, see The 4th Computational Cognitive Neuroscience Conference, URL: <http://ccnconference.org/>, accessed 7 June 2010.

As characterized by Peter Pirolli, the process of sensemaking is highly iterative, involving a foraging loop and a sensemaking loop.89 In the former the sensemaker seeks information, “searching and filtering it,” while in the latter a mental model or schema is iteratively developed “that best fits the evidence.”90 While the overall flow is “from raw information to reportable results,” top-down and bottom-up processes act in concert to reframe issues: information either does or does not fit the hypotheses being considered; hypotheses are refuted or refined; and the larger issue and its context are also reframed as they come to be more thoroughly understood.91 How this can occur within the context of intelligence creation is developed in the following chapters.
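Pirolli’s two loops lend themselves to a schematic illustration. The Python sketch below is a toy rendering of the idea, not an implementation drawn from Pirolli’s work; every function and variable name is a hypothetical stand-in, and the “schemas” are reduced to simple keyword matching so that the loop structure stands out.

    # Toy sketch of the foraging and sensemaking loops (hypothetical names).
    # The real model is a cognitive description, not an algorithm.

    def forage(query, sources):
        """Foraging loop: search the sources and filter for relevance."""
        return [item for source in sources
                for item in source
                if query in item.lower()]

    def fit(schema, evidence):
        """Score how well a candidate schema fits the marshaled evidence."""
        return sum(schema in item.lower() for item in evidence)

    def make_sense(evidence, schemas):
        """Sensemaking loop: retain the schema that best fits the evidence."""
        return max(schemas, key=lambda s: fit(s, evidence))

    # Top-down meets bottom-up: the best-fitting schema refines the query,
    # which drives another round of foraging, until the result stabilizes.
    sources = [["missile crates sighted at the port", "festival announced"],
               ["new radar emplacement", "missile convoy moving inland"]]
    query = "missile"
    for _ in range(3):                       # a few iterations suffice here
        evidence = forage(query, sources)
        schema = make_sense(evidence, ["radar", "missile", "festival"])
        query = schema                       # reframe and forage again
    print(schema)                            # -> "missile"

The point of the sketch is the control flow: evidence that fails to fit the working schema forces either a refined schema or a reframed question.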

To sum up, this book argues that intelligence built around a model of disaggregation, as it originated with and developed under Kent and is still largely practiced today, is at best insufficient. A paradigm based on the concept of sensemaking and employing insights from other knowledge-creation disciplines provides a more appropriate means of skillfully creating intelligence. This book draws a general picture of 21st Century intelligence under a revolutionary paradigm, although it does not explain how all its contours can be fleshed out. We believe that intelligence could become a true profession, and moving toward that goal is our aim.

CHAPTER 2
The Failure of “Normal Intelligence”

Intelligence Challenges

Our understanding of everyday phenomena is confounded by everyday strategies employed to mitigate cognitive dissonance, a stressful condition arising when reality clashes with one’s perceptions. Two broad strategies, selective exposure and selective perception, can prevent dissonance, but at the expense of sound, mindful reasoning. Through the former, we limit the evidence to that which agrees with or otherwise supports our positions; through the latter, we interpret what we experience in terms of our pre-existing worldview.

Consider the differences between “intelligence error” and “intelligence failure.” Anthropologist Rob Johnston defines intelligence error in terms of “factual inaccuracies in analysis resulting from poor or missing data.”98 Conversely, intelligence failures are “systemic organizational surprise resulting from incorrect, missing, discarded, or inadequate hypotheses.”99 Thus, the term “failure of imagination” makes sense as a synonym for intelligence failure, where members of an intelligence-creating organization fail to imagine in advance the essential outlines of an incident that subsequently occurs.

The etymology of “imagination”—generating images—reminds us of the contemporary critic of Kent, Willmoore Kendall, who suggested that the job of national intelligence is to communicate with decisionmakers in a “holistic” way so as to generate the “pictures [mental models] that they have in their heads of the world to which their decisions relate.”112

Considering Standard Models

Intelligence failures occur as practitioners employ a “standard model”113 of intelligence: in it, analysts “separate something into its constituent elements114 so as to find out their nature, proportion, function, relationship, etc.”115 and “produce reports” based on “collected” information and data. There is a definitional presumption that disaggregation will lead to answers. However, this model incompletely describes what the intelligence professional does, and its underlying presumption about finding answers may be false.

One problem is that in Kent’s data-based analytic framework, analysts need to have all the data available so they can be marshaled into a coherent account. “Dots”—if they exist at all—can be connected in more than one way.116 In foresight it is difficult at best to determine which combination and order is valid. Such determinations can be further complicated by the fact that adversaries may change their actions if they suspect we have arrived at a certain conclusion.

An additional problem is that with an increased number of signals there is also an increased level of noise. Which signals, which facts, or which inferences the intelligence professional should consider valid becomes a very important consideration. At best, warning of a pending incident is a problem of assembling and making sense of the details of a specific incident in advance. However, many intelligence problems inherently defy such linear characterization. They are in fact “wicked” problems—a formal designation of a complex issue with myriad linkages. We turn next to an exploration of problem types to see how their nature directs our making sense of them.

Types of Problems

In order to understand “wicked problems,” one must first understand the nature of “tame problems.”

Tame Problems

In a tame problem there is general agreement as to what or who an adversary is, what the “battlefield area” is, and what an attack is. Such problems, while difficult, exhibit specific characteristics: They are clearly defined and it is obvious when they are solved. Solutions to these problems arise from a limited set of alternatives that can be tested; the correct solution can be objectively assessed. Finally, solving one tame problem can facilitate creating valid solutions to other, similar tame problems.
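These characteristics can be caricatured in a few lines of code: a tame problem is one whose candidate solutions are enumerable and whose test of success is objective. The sketch below is a deliberately simple illustration of that definition, with invented names; nothing in it is specific to intelligence work.

    # A tame problem in miniature: candidates are enumerable and the test
    # is objective, so it is unambiguous when the problem is solved.

    def solve_tame(candidates, is_solution):
        """Try each candidate against an objective test; stop on success."""
        for candidate in candidates:
            if is_solution(candidate):
                return candidate      # success is objectively recognizable
        return None                   # the finite candidate set is exhausted

    # Example: recover a three-digit lock code by exhaustive test.
    code = solve_tame(range(1000), lambda c: c == 482)
    print(code)   # -> 482; the same procedure transfers to similar locks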

It is important to note that analysis protocols for tame problems contain little or no room for “emergent” properties. One may not know that the analytic protocol is insufficient until the puzzle has been incorrectly defined, characterized, and solved, if it is in fact solvable. One arrives at a solution that at first appears to have resolved the issue, but in fact, the issue reemerges elsewhere. For example, the implementation of a linear, intelligence-driven solution to crack down on insurgents and their improvised explosive devices (IEDs) in one area may lead to an emergence of IED-caused explosions somewhere else. In such a case, the application of “tame problem protocols” may in fact have been inappropriate—the problem is not tame after all.

Admittedly, many 21st Century intelligence issues remain puzzles or tame problems. This occurs when the events surrounding the issues have already occurred, appropriate questions are readily identifiable, and answers exist, even if they are difficult to find.

Wicked Problems

However, seen in a larger context, are such puzzles truly tame? Or are they components—as Russell Ackoff suggests—of something larger: a mystery in Treverton’s terms, or a “mess” according to Ackoff?119 Treverton’s intelligence mysteries defy easy definition. They belong to a class of problems defined by social researchers Horst Rittel and Melvin Webber as “Wicked Problems.”

The adaptive nature of adversaries makes seemingly tame puzzles wicked, moving them into the realm of “unknown unknowables.”

By definition, wicked problems are “incomplete, contradictory, and changing.”121 They do not have single answers and, in fact, are never truly answered. In the context of intelligence, the sensemaker may never realize a problem has been resolved. This is because “the solution of one of its aspects may reveal or create another, even more complex problem.”122 The emergent complexity of the problem itself, its adaptive nature, efforts at denial and deception by adversarial actors, and cognitive frailties on the part of sensemakers compound the problem, confounding sensemaking and leading in some cases to disastrous courses of action or consequences.

A Wicked Look at Wicked Problems in Intelligence

Characterizing intelligence issues in terms of their problem type—admittedly somewhat vaguely (in keeping with their nature)—reveals just how prevalent wicked problems are within the domains of intelligence.

Wicked problems have no definite formulation. To Rittel and Webber, “the process of solving the problem is identical with the process of understanding its nature, because there are no criteria for sufficient understanding.”124 In other words, making sense of problems deemed sufficiently complex so as to be considered wicked is equivalent to characterizing them in the first place; the description encompasses all possible solutions.

One example of a wicked problem could be “how best to stem the growth of terrorism in the Middle East.” An assumption in considering this problem is that if intelligence professionals can understand what motivates people to become terrorists in the first place, intervention might be possible. Mitigating the creation of new terrorists could aid in reducing both their numbers and, by extension, their attacks. Do people become terrorists because they are dissatisfied with what they see as contradictions and hypocrisies in their lives? If so, what then are the specific roots of dissatisfaction and contradiction? One commonly cited root is a lack of economic opportunity for males within those societies. In that light, Rittel and Webber ask, “where within the…system does the real problem lie? Is it deficiency of the national and regional economies, or is it deficiencies of cognitive and occupational skills within the labor force?”125 The possible solutions to this problem extend the domain of questions, spreading ever outward.126

As Nassim Nicholas Taleb notes, “[Our] track record in predicting those events is dismal; yet by some mechanism called the hindsight bias we think that we understand them. We have a bad habit of finding ‘laws’ in history (by fitting stories to events and detecting false patterns); we are drivers looking through the rear view mirror while convinced we are looking ahead.”127

Even when major assumptions are addressed, there remain additional underlying factors that do not get questioned—almost an endless succession of assumptions that must be peeled off the problem much as one peels layers off an onion. An added complication is that individual layers are not sequential and in fact may lead (to continue the analogy) to other onions or other vegetables, or even fruit. In intelligence, such assumptions are themselves a mess: a complex system of interrelated experience, knowledge, and even ignorance that affects reasoning at multiple levels, sequentially and simultaneously.

Wicked problems have no clear end-point.

With tame and well-structured problems one knows when the solution is reached. In wicked problems this is not so, as Rittel and Webber make clear:

There are no criteria for sufficient understanding and because there are no ends to the causal chains that link interacting open systems, the would-be planner can always try to do better. Some additional investment of effort might increase the chances of finding a better solution.

This is not a new consideration. Writing in the 1930s, John Dewey observed that “the ‘settlement’ of a particular situation by a particular inquiry is no guarantee that that settled conclusion will always remain settled. The attainment of settled beliefs is a progressive matter; there is no belief so settled as not to be exposed to further inquiry.”131 Intelligence sensemakers routinely confront this challenge. Reports and assessments often update or revise previous conclusions. Often the previous reporting is consulted before the new report is written so that the author can determine the preexisting point of view on the issue. Such consultations at best determine whether the current situation deviates from the norm. Unfortunately, sometimes such consultations lead to the rejection of the new evidence, opening the way to intelligence errors and failures. One goal of an adversary’s denial and deception activities is to facilitate rejection of the novel deviation. It was in this way that the possibility of nuclear missiles deployed to Cuba was rejected as outlandish amid the noise of the summer of 1962, and that military exercises along the Suez Canal lulled Israel into a sense of creeping normalcy prior to October 1973.

Solutions to problems may be implemented for “considerations that are external to the problem” itself: problem solvers “run out of time, or money, or patience.”132 In intelligence, sensemakers may only be able to work for a given time on a problem before they have to issue their report. Changes in funding may mean that an effort to understand a phenomenon has to be discontinued. The practicalities of resource limitations force changes in sensemakers’ foci. However, this does not mean that the problem does not continue to exist and, perhaps, threaten. Rather, an answer has been developed to a distilled problem and communicated, and now other things must be done.

Solutions to wicked problems are at best good or bad.

Some problems have true or false, yes or no answers. These are not wicked problems. Wicked problems have no such answers. Differing perspectives applied by different problem solvers, differing sets of assumptions, and differing sources of evidence are several of the factors that lead separate groups to come to different judgments about wicked problems. The impossibility of exhaustively considering all the factors and solutions of the problem also contributes to a multiplicity of solutions.

Focusing on the economics surrounding the growth of terrorism leads to different proposed solutions than does focusing on the demographics involved in the issue. Religious considerations or broader cultural considerations also create different solutions. Each of these perspectives in turn yields multiple points of view with differing good and bad solutions. Overlap is possible and even desired. Good solutions encompass multiple domains.

Tests of solutions to wicked problems may not demonstrate their validity and may provoke undesired consequences. Implemented solutions to wicked problems “generate waves of consequences over an extended—virtually an unbounded—period of time.”

Further, these consequences may themselves prove so undesirable as to negate any and all benefits of the original decision—and this cannot be determined in advance. Thus an intelligence-based decision to invade a country’s possessions may create circumstances that offset any gains initially won, as the Argentineans discovered in 1982 when they—unwisely in retrospect—seized the British-owned Falkland Islands.

Implementing solutions to wicked problems can change the problem. In intelligence problems, real solutions cannot be practiced; there are no “dry runs.” True, sensemakers and their policymaking customers can (and should) consider what might happen, or the “implications” of the decisions or solutions to the problem at hand. Doing so might increase the likelihood that the decision selected is the best, or the least bad, of a set of bad alternatives.

Modeling the situation is one common means of assessing the implications of a potential action. However, models must by their very nature limit the factors considered. This raises the question of how one might know in advance if the eliminated factors are in fact significant.

As Rittel and Webber note, “every attempt to reverse a decision or to correct for…undesired consequences poses another set of wicked problems,”135 as sensemakers and planners involved in the U.S.- led “war on terror” have discovered. Actions, once taken, may mitigate the threat, or may not, which leads to the next facet of wicked problems.

Sensemakers can never know if they have determined all the solutions to wicked problems. They can expect, however, that they almost certainly have not determined all the solutions. In developing the range of alternatives within scenarios, two goals predominate: mutual exclusivity and collective exhaustion.

In other words, each alternative must preclude the simultaneous possibility of the others, and the entire set of known alternatives must be considered. In practical terms, this is much more difficult to achieve than it sounds. Intellectual frameworks and so-called “biases” such as vividness, anchoring, confirmation, and others combine to prevent people from being able to consider all the alternatives. Adding to this is the fact that issues evolve in unpredictable ways. All the solutions simply are not knowable because they lie in the future.
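For enumerable outcome spaces, the twin goals can be stated as a mechanical check, which the toy Python sketch below makes explicit (the scenario names are invented). The sketch also shows why the goals are unattainable for wicked problems: the check requires a known, finite outcome space, which is exactly what an evolving issue denies us.

    from itertools import combinations

    def is_mece(alternatives, outcome_space):
        """Test Mutual Exclusivity and Collective Exhaustion over a
        known, finite outcome space (each alternative is a set)."""
        exclusive = all(a.isdisjoint(b)
                        for a, b in combinations(alternatives, 2))
        exhaustive = set().union(*alternatives) == outcome_space
        return exclusive, exhaustive

    # Invented outcome space: will the regime test a weapon this year?
    outcomes = {"test", "delay", "abandon"}
    print(is_mece([{"test"}, {"delay"}, {"abandon"}], outcomes))  # (True, True)
    print(is_mece([{"test", "delay"}, {"delay"}], outcomes))      # (False, False)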

Each wicked problem is unique. While it is true that common elements can be found between problems, there remain additional and unique properties of “overriding importance.”136 In other words, wicked problems cannot be characterized into “classes…in the sense that principles of solution can be developed to fit all members of a class.”

Every wicked problem is embodied in another one. Rittel and Webber describe problems as

[discrepancies] between the state of affairs as it is and the state as it ought to be. The process of resolving the problem starts with the search for causal explanation of the discrepancy. Removal of that cause poses another problem of which the original problem is a “symptom.” In turn, it can be considered the symptom of still another, “higher level” problem.139

What policies and actions, for example, are necessary to “fix intelligence”? Answering this involves asking what is causing intelligence to fail. One place to start is to consider why analysts are wrong and how intelligence errors lead to intelligence failure.140 Yet such considerations lead one to consider how consumers may ignore intelligence, and how adversaries may in fact be “more capable” than expected. These in turn lead to what Jeffrey Cooper considers “analytic pathologies” that decrement both individual and corporate efforts to make sense of issues (table 2). Each of Cooper’s specific pathologies is furthermore at least partially embodied in the others, giving rise to error-producing systems.141 For example, Cooper argues that intelligence professionals’ pathological focus on both “the ‘dots’ analogy and the model of ‘evidence-based’ analysis…understate significantly the need for imagination and curiosity.”142 Related to this is what he calls the myth of “Scientific Methodology.”

In Cooper’s words, “Analysis is not [hard] science and is not about proof. Rather it is about discovery.”143 These pathologies are embodied in the protocols he refers to as the flawed “Tradecraft Culture”—a guild system of potential sensemakers and their historically unchanging ways of working.144

How wicked problems are resolved is determined by the means and methods used to make sense of them. In other words, how problems are perceived determines the kinds of solutions that are proposed. Point of view becomes essential in defining what a problem is and how it is to be resolved. Complex, wicked problems (as well as many “tame” ones) cannot be defined from one point of view.

How Are Wicked Problems Disruptive?
Disruption, as developed by Clayton Christensen, emerges from technologies that, while they may underperform established technologies, open new markets and change the ways people do things. Enlarging the definition, disruptive intelligence problems threaten to change the way people interact. They proffer or impose new paradigms—both “good” and “bad”—for non-governments and governments alike. The disruption occurs because the incumbent is doing the most rational thing it can do given its circumstances. Doing the right thing generates the opportunity for disruption.

Clayton Christensen, The Innovator’s Dilemma (Boston, MA: Harvard Business School Press, 1997).

Sensemakers have no right to be wrong.

[T]he primary function of the Central Intelligence Agency is to seek the truth regarding what is going on abroad and be able to report that truth without fear or favor. In other words, the CIA at its best is the one place in Washington that a President can turn to for an unvarnished truthful answer to a delicate policy problem.148

Will Pitt, “Interview: 27-Year CIA Veteran,” Truthout, 26 June 2003.

This aphorism may have validity in the domain of tame problems, where the truth is known or knowable. However, it has much less (if any) validity in the world of wicked problems, where many truths can coexist depending on the point of view expressed, where claims can be simultaneously true and contradictory, and where the truth may in fact be unknowable.

The goal of assessing wicked problems may be to “improve some characteristics of the world where people live. Planners are liable for the consequences of the actions they generate; the effects can matter a great deal to those people that are touched by those actions.”

An Intelligence Example: Pandemics as Wicked Problems

One of the threats faced by intelligence organizations and their professionals is that of an emergent global pandemic. What kind of a threat is a pandemic? Is it a tame or wicked problem, or something in between? Such considerations matter because they define what approaches are suitable for alleviating or mitigating the threats to national security that pandemics pose.

When the stakes are the lives of many people, sensemakers and policymakers who miscalculate or underestimate or are otherwise wrong about a pandemic and its impact on their countries or region can expect vilification at best. A fear of such vilification from the public and the media might contribute to the situation whereby pandemic-tracking organizations such as the U.N. World Health Organization or the CDC overestimate the severity and threat posed by a pandemic such as the 2009-2010 Swine Flu pandemic.158 For intelligence professionals this phenomenon is not unknown. Common wisdom among intelligence sensemakers is that it is far better to warn and be mistaken (and nothing happens) than to not warn and be mistaken (something happens).
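This common wisdom has a simple decision-theoretic reading: when the cost of a missed warning dwarfs the cost of a false alarm, issuing the warning is rational even at low probabilities. The arithmetic below uses invented, normalized figures purely to illustrate the asymmetry.

    # Hypothetical expected-cost comparison; all figures are invented.
    p = 0.10                  # assessed probability the threat is real
    cost_false_alarm = 1.0    # wasted preparation, public embarrassment
    cost_miss = 100.0         # an unprepared population when it happens

    expected_if_warn = (1 - p) * cost_false_alarm    # 0.9
    expected_if_silent = p * cost_miss               # 10.0
    print(expected_if_warn, expected_if_silent)

    # Warning stays the cheaper policy until p falls below
    # cost_false_alarm / (cost_false_alarm + cost_miss), about 0.0099.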

Complexity

Rittel and Webber’s notions of wicked problems can also be characterized through the lens of complexity theory. As developed by Jonathan Rosenhead, “systems of interest to complexity theory, under certain conditions, perform in regular, predictable ways; under other conditions, they exhibit behaviour [sic] in which regularity and predictability is lost.”159 This is especially true of intelligence. Certain kinds of issues, including the interpretable indications of a build-up to armed conflict, can be extremely predictable.

However, in other situations, there may be a number of unknowable, unpredictable, and unanticipatable outcomes. Thus, reliable prognostication is simply not possible.160 For instance, if a coalition of nations removes an oligarch in another nation from power, the specific outcomes of that action cannot be known in foresight. While alternative outcomes can be modeled and simulated, they remain valuable only as discussion points: there is no guarantee in advance that they have captured the reality that will occur. Modeling and simulation are feasible because complexity science shows that the “indeterminate meanderings of these systems, plotted over time, show there is pattern in the movements…the pattern stays within a pattern, a family of trajectories.”161 Unfortunately, because intelligence must address the “real” world, rather than its modeled or simulated semblance, events often are unique and therefore their patterns also are unique.

Thus, there exists an inability to guarantee a future reality; even probabilities may be suspect.

Analysis as here defined is insufficient to address complexity. Disaggregation simply does not reveal future alternatives. This becomes obvious once one recognizes that it is the emergence of unique and novel behaviors, arising from minutely differing initial conditions, that characterizes many 21st Century intelligence issues. In these circumstances, the whole of an issue is greater than its parts. But in analysis, the issue is by definition and practice the sum of its parts.
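The dependence on minutely differing initial conditions is a standard result of complexity science, and the logistic map is its textbook demonstration. The sketch below is a generic illustration of that mathematical point, not a model of any intelligence problem.

    # Two trajectories of the chaotic logistic map (r = 4), started a
    # millionth apart, lose all memory of their initial closeness.

    def logistic(x, r=4.0):
        return r * x * (1.0 - x)

    a, b = 0.200000, 0.200001        # initial conditions differ by 1e-6
    for _ in range(40):
        a, b = logistic(a), logistic(b)

    print(abs(a - b))  # on the order of 1: disaggregating the rule
                       # r*x*(1-x) would never reveal this divergence

Disaggregation can fully describe the rule, yet the behavior that matters emerges only from the iteration of the whole system.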

Given these complex issues, the concept of “analysis” is simply insufficient for sensemaking. Instead, greater conceptual accuracy and precision of terminology are required.

To achieve the needed accuracy and precision requires more than semantic invention. It also demands that underlying concepts, known as assumptions or premises, be identified and accounted for. Therefore, in developing the case for considering new paradigms for intelligence, certain terms require explicit (re)definition.

Implications of Complexity

Viewed from a larger context, complexity stymies the entire “standard model” of intelligence creation. With regard to Kent’s concept of knowledge, or how intelligence is created, complexity—as viewed from the framework of wicked problems—confounds the consideration and mitigation of such problems. Kent’s model of predictive and specific warning seems more miss than hit. Complexity further confounds the collaborative processes contained within Kent’s notions of Activity and Organization, by which intelligence professionals are tasked to interact. How does the intelligence professional know in advance whose imagination will be most helpful in making sense of the problem at hand in time to prevent a catastrophe, or even to imagine one?

Given the challenges of both tame and wicked 21st Century intelligence problems and their inherent complexity, what are intelligence professionals to do? One avenue open to them, and presented below, is the development and validation of methods of reasoning about key evidence.

CHAPTER 3
From Normal to Revolutionary Intelligence

Evidence-Based Intelligence Creation

Intelligence sensemakers use more than context-less data and information. They employ assemblages of evidence—at a minimum, collections of data and information determined through marshaling to be relevant to the issue under consideration—in other words, contextualized to specific issues. Evidence reveals alternative explanations through pattern-primed, induced inferences about what is going to happen or what has already happened.

While the inferences are typically uncertain, they do justify beliefs about phenomena. Justifying beliefs (or theories or hypotheses) presents a case for their accuracy but does not guarantee ground (or any other) “truth.” Rather, as Peter Kosso notes, justifying beliefs is “about meeting the standards of evidence and reason [to] indicate [the] likelihood of accuracy.”168 Sensemakers go further and seek to demonstrate that the knowledge of tendencies they establish provides for “a correlation between being more justified and being true.”

It is arguable whether greater evidentiary justification demonstrates a strong correlation with truth. As Kosso makes clear, even with abundant justification, there is no certainty of truth.
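Kosso’s point, that justification raises the likelihood of accuracy without ever guaranteeing truth, has a conventional Bayesian gloss, sketched below. This is one standard formalization rather than Kosso’s own; the prior and the likelihoods are invented for illustration.

    # Each supporting report raises the posterior in hypothesis H,
    # yet the posterior approaches 1 only asymptotically: abundant
    # justification is still no certainty of truth. Numbers invented.

    def update(prior, p_e_given_h=0.8, p_e_given_not_h=0.3):
        """One application of Bayes' rule: return P(H | E)."""
        joint_h = prior * p_e_given_h
        joint_not_h = (1.0 - prior) * p_e_given_not_h
        return joint_h / (joint_h + joint_not_h)

    belief = 0.5                      # agnostic prior
    for i in range(1, 6):             # five independent supporting reports
        belief = update(belief)
        print(f"after report {i}: P(H|E) = {belief:.4f}")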

As figure 1 illustrates, intelligence sensemaking is conducted in service of a number of goals, including describing states of affairs, explaining phenomena, interpreting events and actions, and estimating the likelihood and impact of a foe’s future actions. As intelligence professionals move from describing events to explaining patterns of behavior to grasping underlying factors and intentions, ever more justification of beliefs about the phenomena under scrutiny is required. Yet, as intelligence professionals attempt to apply greater scrutiny in this sequence, their capability to do so decreases as they face greater ambiguity.

Intelligence evidence, while it may appear “haphazard,” is the result of systematic foraging, gathering, and interpretation. The past tells intelligence practitioners what to look for in the future. This poses dangers when those indicators are no longer (if indeed they ever were) valid.

If using the past to gain wisdom about what the future holds is not feasible, what about studying the past to avoid folly? Tversky and Kahneman’s work on availability leads one to suspect (as Fischhoff also notes) that focusing on misfortunes “disproportionately enhance[s] their perceived frequency.”181 Another challenge to considering the past as a teacher of what not to do is that one may not properly understand the problem.

With the intention of improving evidence-based intelligence creation, recent legislation “reforming” intelligence goes so far as to require that “alternative analysis” be conducted.182 The IC, at least through its schools, interprets this to mean that multiple hypotheses be considered. The relevant act mentions “red teaming”: a means by which another group of intelligence professionals considers alternative explanations for an issue being scrutinized.183 The legislation leaves unexamined the question of whether the criteria for sensemaking will be met in examining tame problems, and especially wicked problems arising from consideration of adversarial intentions.

If, for example, one estimates that a particular country whose policies one’s own government generally opposes will develop both a long-range missile capability and a nuclear weapons capability and then marry the two together, one has to have already imagined, in the context of the target country’s political and technological environment, what a long-range missile capability is, what a nuclear weapon is, what a weapon of mass destruction is, and what would signal the will to combine these threat elements. Policymakers may challenge the target country’s actions, making its leaders more adversarial. Thus, at a minimum, intelligence and policy together create the future—or a version of it. Done poorly, this can lead to unintended and dangerous implications.

In a tense bilateral or even multilateral environment, rhetoric and actions can precipitate events so as to create a future consistent with those pattern-derived conclusions, driving the target country to produce the weapons. Each side then blames the other nation’s government for having “caused” the crisis.

When interpretations of the evidence lead to coherent alternative inferential conclusions, the existing or accepted theories require changing. What must not happen is a reinterpretation of the evidence to support the prevailing pre-existing theory.

People are often unwilling to abandon their cherished positions. This occurs in part because they are not dispassionate as they reason about evidence. In other words, positions are influenced by various worldviews or cognitive approaches, particularly selective perception and selective exposure. These combine to steer how people recognize issues, the phenomena that comprise them, and how they go about making sense of them.184 These influences or theoretical frameworks shape the patterns people use to interpret new phenomena. The benefit is that these frameworks make people smart and do so quickly.

In an information-rich environment brought about by technical collection, intelligence professionals can select inappropriate patterns to use in making sense of new phenomena. In intelligence work, if such patterns skew the search for, and the selection of, the evidence that sensemakers use and that they and their consumers then accept, selective perception and selective exposure set the stage for intelligence error and failure.
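A small simulation makes the mechanism concrete: a reader who samples only agreeable reports arrives at near-total confidence from a body of reporting that is in fact evenly split. The scenario and the 50/50 split are invented purely for illustration.

    import random
    random.seed(1)

    # 1,000 reports that in truth split evenly for and against H.
    reports = [random.random() < 0.5 for _ in range(1000)]

    def apparent_support(sample):
        """Fraction of a sample that supports H."""
        return sum(sample) / len(sample)

    balanced = reports[:200]                      # read whatever arrives
    selective = [r for r in reports if r][:200]   # read only agreement

    print(f"balanced reader:  {apparent_support(balanced):.2f}")   # ~0.50
    print(f"selective reader: {apparent_support(selective):.2f}")  # 1.00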

Evidence always requires a context, and as the missile example illustrates, there may be more than one explanatory context that makes sense. In intelligence, “evidence is [particularly] rarely self-sufficient in information or credibility.”

Unless the correct context is known, evidence—if its constituent information can even meet that threshold—is subject to many different interpretations. Without context the person assessing the evidence has no way of knowing which interpretation is correct. Multiple contexts further confound the situation, for different contexts often lead to alternative conclusions, as was illustrated in the missile development scenario just described. Finally, as Hampson’s essay reveals, the political context of the policymaker may skew the actual context conveyed by intelligence.

Considering the Normal

The process described in the preceding section can be thought of as “normal intelligence.” As conceived by Thomas Kuhn, “normal” refers to “the relatively routine work…within a paradigm, slowly accumulating detail in accord with established broad theory, not actually challenging or attempting to test the underlying assumptions of that theory.”195 We can thus see that “normal intelligence” is an activity of expanding knowledge in which most intelligence professionals engage and which incrementally increases knowledge about targeted phenomena.

The perceived and recalled successes of the past contribute to the repeat use of unvalidated tradecraft. The paradigm presumes state-level adversaries—eventually with mutually destructive capabilities.

As used in this context, “normal intelligence” is to “intelligence” as Thomas Kuhn’s “normal science” is to “science.” In both domains newly created knowledge incrementally adds to an increasingly established paradigm; new knowledge does not easily redefine the paradigm.

Normal science or normal intelligence does not seek to revise significantly the paradigm by which new phenomena are known and understood. This may be seen in the way new intelligence personnel adopt existing job accounts. A common practice involves their reviewing previous reporting on the account, with a tendency for new reporting to stay within the conceptual boundaries of what has gone before. Knowledge increases only incrementally.

Normal paradigms prevail until previously unnoticed and unnoticeable discrepancies create sufficient inconsistencies in explaining and understanding phenomena so as to cause errors that cannot be ignored.

In the cultural environment of human interaction, the new perceptions of reality can be enough to force a reconsideration of the old. In social scientific terms, a new paradigm not only explains the new, it does better at explaining the old.

The existence of particular intelligence errors does not necessarily indicate a paradigm has changed. However, repeated intelligence errors do. As is the case with science, small errors in adequately characterizing phenomena lead to the emergence of “corrective constants.”

The state-as-adversary paradigm for intelligence creation is obsolete. Two decades now separate the current intelligence context from that of the Cold War: the adversaries and issues are now strikingly different.202 The power of the Soviet Union waned dramatically after 1990 as that of China increased. But even more central to the intelligence context, novel phenomena also appeared that were non-state based: emerging non-state actors posed new challenges by threatening traditional state structures.

The anomalies these new phenomena have created illustrate how and why normal intelligence is no longer adequate: it can no longer characterize these phenomena within the threat and opportunity framework of strategic intelligence. The “normal” means by which error is explained remain inadequate. As documented in the various Congressional and independent commission reports, intelligence no longer adequately describes, explains, or predicts with respect to the phenomena its consumers need to understand. Thus, intelligence change is necessary—revolutionary change.

Paradigm Shift

Revolutions in science, politics, and military affairs occur because crises reveal the insufficiency of the reigning paradigm.

Periodic revolutions change how phenomena are perceived and understood.204 Crises are a precursor of such paradigm shifts.

There are repeated attempts to impose methods of “[social] scientific study…to analysis of complex ongoing situations and estimates of likely future events.”206 What is lacking is any sort of systematic approach across the Intelligence Community. As long-time practitioner and observer Jack Davis noted a decade ago, no corporate standards exist for how intelligence is created, including the methods employed.207 Although sound practice does not ensure that intelligence assessments will be correct, its absence, by definition, contributes to flawed conclusions.

In short, U.S. intelligence professionals operate in an environment similar to an unfolding Kuhnian revolution: the epistemology of normal intelligence is insufficient and new knowledge is needed. The recent failures highlight the necessity for change, as does the graying of the intelligence sensemaking workforce—new people faced with new and emerging issues should be comfortable with finding new ways to systematize their work.

Not all “old school” intelligence practices are without continuing value. Several significant state-level adversaries remain as threats to the security of the American nation although they too are challenged by the new non-state actors and issues that populate the paradigm of the new intelligence—something that compounds any estimate of how they are likely to engage the United States. Further, in many circumstances and in dealing with certain issues, the tacit expertise of highly experienced intelligence professionals is appropriately tapped for “recognition-primed” sensemaking.209 These “old hands” possess both current knowledge and a highly evolved skill set. Years of innovative and critical thinking mean they are skilled in looking at issues from a variety of perspectives and have the wisdom of deep context.

The challenges involve knowing when such expertise is valuable and needed in the first place, and encouraging the intelligence enterprise to develop and retain the cognitive and organizational flexibility that such thinking requires.

Indeed, a part of successful and revolutionized intelligence work involves gleaning new meanings from old patterns that have remained hidden to those who have stopped short of sensemaking. One challenge is that the “fresh” eyes lack the knowledge of potentially relevant patterns while the “old” eyes cannot see things as new. Each lacks the other’s strength. Experience acquired by newer professionals who engage in the practice of traditional “analysis” jaundices their once-fresh viewpoints even as they start to acquire the relevant and necessary experience.

One solution may be to adopt a model of core competencies broken out according to task analyses of existing intelligence missions and functions. Such a model identifies what is needed and has been at least partially implemented in the IC’s Analytic Resource Catalog developed during the tenure of former Director of Central Intelligence George Tenet.


Rewarding the successful use of some of the most important competencies may also encourage their retention in the catalog. Among these are curiosity, perseverance, and pattern recognition.

Simply put, intelligence practitioners create knowledge to support their customers. As used here, intelligence practitioners are presumed to be contributors to government plans and policies at a variety of levels where they have the opportunity to share broad strategic perspectives with national leaders as well as ensure that deployed warfighters have at hand the fruits of technical collection and marshaling of tactical data.

Finally, it should be noted that intelligence Knowledge is only one component of a strategic, operational, and tactical intelligence triumvirate.213 Activity and Organization are the other two. It is the author’s belief that Activity and Organization also are in need of new paradigms. However, such a discussion hinges on what intelligence Knowledge is and how it is created—in short, the sensemaking involved. Insofar as Activity describes how the precursors of intelligence are hunted, gathered, made sense of, and transformed into knowledge, it is considered here. However, the uses of intelligence (also an activity) and how intelligence professionals are grouped, led, and managed to act and create knowledge—the realm of Organization—lie beyond the scope of this book.

CHAPTER 4
The Shape of Intelligence Sensemaking

Intelligence Sensemaking involves a number of overlapping high-level activities. First, intelligence professionals engage in planning or design and then hunt for and gather the materials they require in order to understand issues, answer questions, or explore new ideas. They can be externally motivated by the needs of a customer or they can be self-motivated as a result of an observation, or both. Second, these professionals disaggregate and then reassemble relevant information, trying to determine what it means.

At every stage of their work they critically assess their processes and results, seeking to validate both how they are engaged and the outcomes of their engagements. These overlapping activities can be characterized as Planning, Foraging, Marshaling, Understanding, and Communication. They are supported by Questioning and Assessing.

Planning for Tame and Wicked Intelligence Problems

Making sense of either tame or wicked problems is predicated upon planning. Plans, according to Gary Klein, are “prescriptions or roadmaps for procedures that can be followed to reach some goal, with perhaps some modification based on monitoring outcomes.”214 Creating plans requires “choosing and organizing courses of action on the basis of assumptions about what will happen in the future.”215 Known as planning, this process characterizes the “contingencies and interdependencies such as actions that must occur first as a precondition for later actions.”216 When we add the concept that practitioners—through critical thinking—also engage in reflective thinking and learning, both singularly and collaboratively, we may similarly label this process “the art of intelligence design.”

With tame problems, where answers and solutions can be anticipated, algorithms can calculate actionable probabilities and repeatedly make sense of the problem.217 The design of useful algorithms may be complex, and they operate well only in finite and specific environments. How, on the other hand, does one plan or design for wicked problems? One answer is to re-imagine the wicked problem as a tame one. However, the repeated occurrence of “unintended consequences” in past scenarios suggests that this is not a good option. Disaggregating what are assumed to be tame problems into their component parts, regardless of the actual problem type, often proves inadequate, as unintended and unforeseen consequences make clear.

Yet planning must occur regardless of problem type.

Klein considers—along with Rittel and Webber—that planning is an emergent process: Goals are clarified and revised as understanding of the problem grows.218 He notes,

Goals can be dynamic and can change completely as a function of changing circumstances. Goals can conflict with other goals in ways we can’t anticipate or resolve in advance. Goals can carry implications we can’t perceive or anticipate until events transpire.219

In terms of intelligence creation, this means that larger, strategic goals can—and perhaps must—emerge as sense is made of the problems under scrutiny. Thus, a tasking from an intelligence consumer changes as mindful sense is made of the tasking itself, of the resources that are available for understanding it, and the mix of actors involved.

Klein refers to this reflective problem planning as “flexible execution” or “Flexecution.”220 Within the framework of intelligence sensemaking, it provides a self-reflective process—at the individual and organizational levels—that monitors the goals and whether what is understood or being done is consistent with those goals, modifying those goals as understanding emerges.

Foraging
Hunting and Gathering

If, as Baumard asserts, “intelligence, a continuous human activity, gives sense to the stimuli received from the environment,” then these stimuli must be passively or actively sought.223 This requires hunting and gathering. Together they comprise foraging, which in turn refers to “a wide search over an area in order to obtain something.”

An apt analogy for the foraging activities of intelligence professionals can be drawn from anthropology, where the activities of the hunter-gatherer have been immortalized. In nonagricultural societies people both hunt for specific game and take advantage of what the local environment provides.225 Similarly, intelligence professionals may seek specific information, often tasking collection systems as part of the search. They also take advantage of existing repositories of information. Neither approach is wholly satisfying nor provides for all of the sensemaker’s needs all of the time. However, without the basic act of foraging there can be no sensemaking as there is nothing from which to make sense.

Information foraging is a rich subject about which Peter Pirolli has done extensive work, some of it sponsored by the Intelligence Community’s Novel Intelligence from Massive Data research project funded by IARPA’s predecessor, ARDA (Advanced Research and Development Activity).

Underlying information foraging theory is the idea that “humans actively seek, gather, share, and consume information to a degree unapproached by other organisms” and therefore, “when feasible, natural information systems evolve toward stable states that maximize gains of valuable information per unit cost.”

Efficiency derives from optimizing the time necessary to achieve a goal, the quality of the achievement, and the satisfaction obtained in doing so.227 In application to the human information foraging scene, the theory becomes “a rational analysis of the task and information environment that draws on optimal foraging theory from biology and…a production system model of the cognitive structure of [the] task.”
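
To make the rate-maximizing idea concrete, consider a minimal sketch, with entirely hypothetical numbers, of the tradeoff the theory describes: a forager compares expected information gain per unit of time cost across competing strategies. It illustrates the general principle only; Pirolli and Card’s formal models are considerably richer.

```python
# Illustrative sketch of rate-maximizing information foraging.
# All numbers are hypothetical.

def rate_of_gain(gain, time_between, time_within):
    """Information gained per unit of total time cost."""
    return gain / (time_between + time_within)

# Strategy A: linger at one rich but noisy source.
linger = rate_of_gain(gain=40, time_between=1, time_within=60)

# Strategy B: rove among several moderately useful sources.
rove = rate_of_gain(gain=12, time_between=5, time_within=10)

# A stable foraging strategy favors the option that maximizes
# valuable information gained per unit cost.
print(f"linger: {linger:.2f}, rove: {rove:.2f}")  # linger: 0.66, rove: 0.80
```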


Toward a Practice of Intelligence Foraging

Developing an optimal foraging model for information acquisition requires the subject to consider whether to remain at a source that provides a superabundance of information of questionable value, or to seek another, more valuable, source.229 Pirolli and colleague Stuart Card observe that for human foragers, this involves “a tradeoff among three kinds of processes”: exploring, enriching, and exploiting.230

These three foraging steps will not seem foreign to traditional intelligence practitioners. “Exploring” is a breadth activity whereby a sensemaker broadly examines a wide variety of information that may or may not be relevant to the issue. The premise is that when one considers a broad variety and volume of data, there is less opportunity to miss “something novel in the data.”231 Speaking in traditional intelligence terms, exploring is like reconnaissance. By contrast, “enriching” is a depth activity. Here the sensemaker identifies areas of interest and focuses attention on those areas. As Pirolli and Card note, this is “a process in which smaller, higher-precision sets of documents are created.”232 Reconnaissance has become more narrowly focused, but highly targeted “surveillance” is not yet in play. Finally, the practitioner “exploits” the results of foraging by thoroughly examining what is found and extracting information as needed. This activity extrapolates from tacit sensemaker behaviors and information-based patterns to create hypotheses about what the information means. At this point, foraging evolves to sensemaking.
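
As a rough illustration of how the three processes might chain together, the following sketch uses invented documents and keyword filters as stand-ins for collection tasking and repository queries; it is a schematic of the explore-enrich-exploit progression, not a model of any operational system.

```python
# Toy explore -> enrich -> exploit pipeline. The documents, keywords,
# and thresholds are invented stand-ins for tasked collection and
# repository queries.

DOCUMENTS = [
    "shipment of machine parts through the northern port",
    "local harvest festival schedule",
    "port authority reports unusual night-time shipments",
    "regional weather outlook",
]

def explore(docs, topic_terms):
    """Breadth: keep anything that might touch the issue at all."""
    return [d for d in docs if any(t in d for t in topic_terms)]

def enrich(docs, focus_terms, keep=2):
    """Depth: build a smaller, higher-precision set around areas of interest."""
    ranked = sorted(docs, key=lambda d: -sum(t in d for t in focus_terms))
    return ranked[:keep]

def exploit(docs, extract_term):
    """Examine the focused set and pull out the material of interest."""
    return [d for d in docs if extract_term in d]

candidates = explore(DOCUMENTS, topic_terms=["port", "shipment"])
focused = enrich(candidates, focus_terms=["unusual", "night-time"])
evidence = exploit(focused, extract_term="shipment")
print(evidence)  # the fragments from which hypotheses can be formed
```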

The appropriate amount of exploration depends on the context. However, there appears to be a limit after which more information does not increase accuracy although it does increase the sensemaker’s overall confidence.

The discussion of how much exploration is needed is germane because in the Pirolli-Card framework the sensemaker may believe she controls the amount of foraging. In a sense this is true. The sensemaker will stop foraging once she believes she has what she needs. But how much information is sufficient? Slovic’s results and Heuer’s subsequent discussion suggest that practical sufficiency is achieved at lower levels of exploration than expected.235

Further, contrary to the beliefs of the sensemaker, it is often information itself that controls the processes. Intelligence professionals are overwhelmed with more and more information that arrives faster and faster and may be valuable for shorter and shorter periods of time. This information flood challenges the sensemaker to efficiently find the information needed in order to make sense of the phenomena or issue under scrutiny in a timely manner. Does one need to peruse all the information received? Would a different source be more productive by providing more focused information? These are examples of the difficult questions the sensemaker must consider. Compounding her deliberations further is the fact that she must answer these questions in foresight; hindsight is too late.

Foraging practice begins with an understanding of what it is the sensemaker seeks to know, the foraging resources available, and the urgency of the issue. But how can an intelligence sensemaker know what to look for? To what degree does she need to explore, enrich, or exploit the information? Further, how does she know if she is getting what she needs?

A first step is to think critically about the issue itself and the resources needed. Using a metacognitive, process-focused critical-thinking model such as that adapted from Richard Paul, Linda Elder, and Gerald Nosich, the practitioner can dissect the issue and her thinking on the issue.

She makes assumptions explicit, explores relevant points of view, starts to consider the ramifications of the issue, and asks important questions about what resources will best inform her about the issue; she considers the context in which she is working and ponders the alternatives to her reasoning about where and how to forage. This critical thinking defines her foraging activities. She may engage in all three strategies at once or at different times as she engages in the analyses that produce her syntheses and the interpretations necessary to generate knowledge. A well-developed understanding of exploited information may direct her back to do additional exploring or enriching (or both).

In intelligence foraging as it traditionally has been practiced, there is a tendency to linger at a fruitful source rather than to explore elsewhere for the required information.

The danger, of course, is that the practitioner may limit the information she acquires and the relevant perspectives it informs. If, for example, the practitioner has a belief that two parties in whom she has an interest communicate via one means and she can acquire technical collection that captures the communications via those means, she may ignore the fact that they also use other methods to communicate. She may subsequently fail to task systems that collect those other communications in the belief that what she is getting suffices. Should the parties suspect that their communications are being targeted, they may engage in deceptive practices over that collected means and use the other non-collected methods for their real exchanges.

Foraging Challenges

Ongoing research at University College London offers a view of how younger sensemakers likely search for information. The researchers report that people foraging for information spend four to eight minutes viewing each resource.237 Thus, a great many resources may be consulted but none of them very deeply. Within the context of intelligence, such foraging strategies may facilitate broad searches but leave open the question of whether deeper searches are also accomplished.

An additional challenge with information foraging is that if the practitioner misses the opportunity to acquire something, it may never again be obtainable. Like the elements of a fleeting interpersonal conversation, the original foraging behavior, if left uncaptured, can never be recaptured. Indeed, there may be no indications that such a conversation even occurred.

A further consideration for the practitioner is the “self-marketing” of the information. Vivid stories market themselves much better than do flat ones. Exploited information that supports a favored hypothesis may be preferred over information that does not; an unfortunate reality is that little motivation remains for further exploration. The practitioner is human—she will not likely have a truly agnostic attitude about what she seeks and why.

Compounding this is the idea that sources and means for foraging are self-protective. For example, there is a presumption that sources will continue to communicate via specific means. The methods that capture those communications and the people that support them tend to seek justification. Assets may be kept active after their usefulness expires. The practitioner returns to the same sources over and over because they have been useful in the past and such attention helps keep those sources actively collecting.

Critically assessing what she is doing is one way the sensemaking practitioner may be able to overcome these limiting tendencies. By constantly asking herself how she is thinking about the issue, what she seeks, other perspectives, her assumptions, as well as relevant concepts such as self-deception or adversarial deception, the practitioner may diminish the impact that her preferences play on her foraging decisions.

Another part of this critical assessment is the consideration of the costs of foraging. As Herbert Simon notes,

What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention, and a need to allocate that attention efficiently among the overabundance of information sources that might consume it.

The superabundance of information available on the Internet (and elsewhere) creates a “poverty of attention” to any one source. Rather, people skim across a great many sources.
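
Simon’s observation admits a simple operational reading: given a fixed attention budget, rank sources by expected value per minute and spend the budget greedily. The sketch below is a hypothetical toy along those lines; the source names, values, and time costs are invented.

```python
# Greedy allocation of a fixed attention budget across sources.
# Source names, values, and time costs are hypothetical.

sources = [
    {"name": "wire reporting", "value": 8, "minutes": 20},
    {"name": "niche blog", "value": 2, "minutes": 5},
    {"name": "field cable", "value": 9, "minutes": 45},
    {"name": "open archive", "value": 4, "minutes": 15},
]

BUDGET = 60  # minutes of attention available

# Rank by expected value per minute of attention consumed.
ranked = sorted(sources, key=lambda s: s["value"] / s["minutes"], reverse=True)

spent, reviewed = 0, []
for s in ranked:
    if spent + s["minutes"] <= BUDGET:
        reviewed.append(s["name"])
        spent += s["minutes"]

print(reviewed, f"({spent} of {BUDGET} minutes)")
# ['wire reporting', 'niche blog', 'open archive'] (40 of 60 minutes)
```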

A predator may go out seeking one type of prey and find none of it, but there may be an abundance of some other kind of game. Within the context of intelligence such opportunism may or may not be appropriate (or even legal) when a technical system or an asset is tasked to provide information for intelligence. Lacking the desired information, a human source might opportunistically substitute what might be perceived as desired or desirable, even if it is not closely related to the issue at hand—or for that matter, even “true.” It is used because it satisfices for the immediate term.

Harvesting

A special case of foraging involves “harvested” information. Technical agencies that field systems to gather information also can be characterized by a different model, that of harvesting. The systems employed simply harvest that which lies within their purview, then process it and store it in silos—data repositories from which sensemakers subsequently must forage.

Such systems are efficient at creating broad collections; they tend to be inefficient and unreliable when very narrowly focused. Thus, directed rather than broad collection of specific phenomena is needed.

Technical collection systems tend to provide—even in the negative—what sensemakers want to find. This can create a potentially dangerous confirmation of an idea that may be invalid.

Automated retrievals from information repositories typically provide sensemakers with what they believe to be the needed and relevant information—in short, their evidence. The evidence pertaining to specific issues arrives at the sensemaker’s desk and reports are issued. At first, the evidence is carefully scrutinized and the system that provides it assessed. As the process repeats, however, as it certainly did in the Cold War era, complacency may set in. Critical assessment of quality and quantity may cease.

Finally, no amount of foraging can discover valuable information if it has not been collected by some system—human or technical—in the first place.

Marshaling

What can be done to revolutionize the way information foraging is accomplished so as to overcome or at least mitigate these problems? Some answers lie in an understanding of marshaling. Part of the sensemaker’s practice is to turn foraged and gleaned information into evidence.

This is a broad activity, for if the issue has multiple explanations or future possibilities, then evidence will be information relevant to any, many, or even all of those explanations or possible outcomes. In order to make that determination, the sensemaker will need to have identified what those alternatives are and to have collected information (both dis-confirming and confirming) about them.243 This may require foraging from additional resources with all the attendant challenges discussed above.

Understanding

If we presume that foraging has yielded relevant and valuable information—evidence—on the issue under study, the next step is to determine what it means. This is the heart of sensemaking: evidence is dissected, reassembled, and interpreted.

For example, is the sensemaker focusing on individual actors, the actions of a collection of actors, the beliefs that guide the activity, or the processes that determine the actions of the collective?

The disaggregation of each of these perspectives and their associated stories provides a rich brew for sensemaking.

Synthesizing

Synthesizing is “the combination of ideas to form a theory or system.”245 Even as the intelligence professional analyzes the individual pieces of information, they are synthesized into a mental picture of the larger issue. Pieces of information are implicitly combined even when the sensemaker works within the yield of a particular foraging discipline or within a frame of reference. Such synthesis drives further foraging and analysis.

Synthesis needs to be explicit. In the example developed above, the intelligence professional is required to synthesize the differing trajectories of the three principal actors, considering how their beliefs harden or soften their positions and how they are vulnerable to the actions, influences, and processes of the groups. Doing so in a systematic fashion leads the intelligence professional to new insights about the situation: what is going on… and (from the U.S. perspective) what to do about it.

Interpreting

Issues can be dissected and reconstructed in a variety of ways, creating different meanings. Sense must be made of these different meanings. Interpreting, or “the action of explaining the meaning of something,” is another component of sensemaking.246 We may say that whereas analysis and synthesis establish the what, interpretation establishes the so what.

A revolutionary approach to sensemaking now being undertaken by analysts from DIA, State, and CIA is to engage in “adversarial briefing” of principals, in which briefers adopt opposing perspectives for a thorough airing of the issue, complete with the participation of the principals themselves.

Communicating

New models of knowledge transfer recognize change in both message and medium. Social networking, peer-reviewed shared multimedia, and interactively blogged communications are examples of these new media. The message is short and subject to change by different contributors. Authority is based on consensus. The distinction—if it exists at all—between formal and informal communication is blurred. There are dangers here, as authority and truth are no longer necessarily linked. One risk is that the “wisdom of a crowd” can in fact be the “madness of a mob”—a phenomenon occurring in both the public arena and within the IC’s blogosphere. In both arenas the loudest voices strive to bludgeon into silence those who would disagree, all the while advancing their egocentric or sociocentric positions. Scientific knowledge and empirical facts matter little in such cases.

In summarizing the “introspective works responding to…intelligence failures,” Charles Weiss agrees that intelligence practitioners’ failures include “a lack of proper attention to hypotheses and data collection efforts that are contrary to what they regard as the most likely interpretation of available information.”248 One danger is that the very judgment about which the sensemaker is least confident might be the one that turns out to be correct. The fallacy of depending on the communication of confidence levels relates to the fact that each assessment or report only fills in some unknown portion of the gaps in the sensemaker’s and policymaker’s knowledge.

A carefully considered, standardized metric of uncertainty could provide one means of assessing and communicating confidence independently from the sensemaker. Weiss suggests that either Kent’s scale250 or its more recent instantiation by the Office of the Director of National Intelligence offers an appropriate means by which the uncertainty could be systematically captured.251 The challenges inherent in such metrics are twofold. First, the evidentiary statistics necessary for their use are “typically unavailable to intelligence analysts—or if they are available, must be based on small samples of past events.”252 Additionally, scoring the conclusions from such small samples across production lines, and even from day to day by a single intelligence professional, can be observed to be inconsistent. Steve Rieber discusses calibrating sensemakers as a solution.253 To date no such strategy has been implemented.
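
To illustrate what a standardized metric might look like in practice, the following sketch maps numeric probabilities to estimative terms and scores a set of past judgments for calibration using a Brier score, one common calibration measure. The probability bands and sample judgments are invented for the example; they are not the official boundaries of Kent’s scale or the ODNI standard.

```python
# Mapping numeric probabilities to estimative language and checking a
# sensemaker's calibration with a Brier score. Bands and sample
# judgments are illustrative only.

BANDS = [
    (0.95, "almost certain"),
    (0.75, "probable"),
    (0.45, "roughly even chance"),
    (0.20, "improbable"),
    (0.00, "remote"),
]

def estimative_term(p):
    """Translate a probability into a standard verbal expression."""
    for floor, term in BANDS:
        if p >= floor:
            return term

def brier_score(forecasts, outcomes):
    """Mean squared error of probability forecasts; 0.0 is perfect."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Past judgments: probability assigned vs. whether the event occurred.
forecasts = [0.8, 0.6, 0.9, 0.3]
outcomes = [1, 0, 1, 0]

print(estimative_term(0.8))                        # probable
print(round(brier_score(forecasts, outcomes), 3))  # 0.125
```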

CHAPTER 5
A Practice of Understanding

Judgment in intelligence sensemaking, as in a number of other domains, likely improves as one progresses to higher levels of proficiency and expertise. However, prediction is both difficult and inherently unreliable because events of different kinds vary considerably in their inherent predictability.

Intuition

It is difficult to wrap appropriate words around the concepts that are at hand, thus care should be taken to make certain distinctions. Sometimes, judgments can be rapid, non-conscious, non-deliberative, and almost seem as if they are immediate perceptions and feelings rather than judgments.

Intuitive, or, as it is sometimes called, automatic thinking forms the basis for much of our personal sensemaking.260 It allows us to process complex inputs that far exceed the “span of immediate apprehension” of approximately seven “chunks” or elements that individuals can consciously process in working (or short-term) memory.261 Such thinking, for example, explains how people (mostly) drive successfully. Recent research revealing a correlation between cell phone use (especially texting) and accidents suggests that attention-demanding activities impede otherwise effective automatic ones: a professional-level office conversation disrupts ten-finger typing, just as dialing and conversing on a phone disrupts otherwise safe, absent-minded driving.

There is another side of intuitive reasoning that sometimes works in opposition to one’s survival. Intuitively reasoned responses to stress are often highly focused and narrow. Laurence Gonzales notes that in such reactions “the amygdala…in concert with numerous other structures in the brain and body, help to trigger a staggeringly complex sequence of events, all aimed at producing a behavior to promote survival.”266 But, in many cases, behavior is also locked down.

Intuitive or automatic thinking is a survival mechanism.268 Gonzales notes that such mechanisms “work across a large number of trials to keep the species alive. The individual may live or die.”269 But over time—and in reference to humans—generally more live than die, leading to evolution and the genetic transmission of the “reflex.” If a particular set of behaviors confers a survival value, that set can become more widespread in the population. Seen in this light, unease at entering an elevator at night could be a modern instance of sensing shadows in the tall grass.

On a shorter time horizon, people use experience-based intuitive patterns or mental models. These patterns or models direct how situations are perceived and how they are responded to.

We turn next to an examination of types of judgment, distinguishing between those that are skill-based and those that rely on “heuristics,” or learning through personal discovery whether rules of thumb are valid shortcuts to understanding an issue. We then link this dissection of judgment to the work of intelligence professionals.

Types of Judgment

First we must clarify the meaning of “judgment.” A judgment can be an observer’s belief, evaluation or conclusion about anything—one can form a judgment about anything of interest, including one’s own reasoning.275 Judgment also describes a process, surely more than one kind of mental process, by which one reaches a decision.

Judgment can be expressed as affective evaluation (example: That is a good thing), objective evaluation (It looks like a cat, but it is just a stuffed cat), or categorical assignment (My judgment is that this is a case of highway robbery). Judgment as process can also be described as apodictic, modal, or oral, among others.276

In terms of intelligence sensemaking, successful intuitive judgment arises from the tacit knowledge of experts who assess “normal” (in Kuhnian terms) situations, or as has been discussed above, the tame, familiar or regularly occurring kinds of problems (although they may be quite complex).

However, it should be noted that the combination of skill-based and heuristic-based intuition confers a benefit to mindful experts: a sense of when a case seems typical at first glance, yet there is something not quite right about it. While the less experienced person may be lulled into believing the case fits a certain type, expert decision makers react differently. They note some worrisome clue that raises questions. “Maybe this is not a typical case,” they venture. Eventually they may come to see that the case is in fact atypical. So informed, they make a different judgment, sometimes in disagreement with other experts. As we will now see, intuition certainly becomes a part of the intelligence process when practitioners make, or fail to make, useful and accurate predictions.

The Question of Predictability

Some in the IC argue that the pertinent aspects of all intelligence problems can be adduced, “if we only knew more” or “had the right algorithm or method.” However, we mislead ourselves if we believe that any messy problem can be resolved with a probability-juggling program. The authors are reminded of the observation, “There are the hard sciences and then there are the difficult sciences.” It is both impossible and inappropriate to attempt to remake the social and cognitive sciences into emulations of the calculational physical sciences. If such a reduction were possible, someone would have achieved it, or would at least have made demonstrable progress toward it, in the 200-plus years during which psychology and the other “social sciences” have called themselves “sciences.” If reduction were appropriate, and we could get “the right information to the right person at the right time,” we would not need that right person—“truth” would be self-evident.

Some examples of mindful, heuristic-based decision making, especially pertinent because they involve the thinking habits of senior U.S. civilian and military officials as well as of their strategic advisors, are discussed in Neustadt and May’s Thinking in Time.292 The authors point out that a sensemaker’s awareness of historical decision making in even loosely analogous situations helps to keep at bay the further unsettling idea that the present circumstances constitute a “crisis.”

“Anchoring,” which is the biasing of a judgment because of the framing of the initial question, and “attribute substitution,” arising from replacing a difficult question with an easier one, are two contributors to such flawed intuitive judgments.

Thinking About Anticipating

Jurisprudence, clinical psychology, and economic forecasting are all examples of domains where accurate prediction is difficult or impossible, and it is not terribly clear what it means for a person to be an expert in any of those fields. In the realm of jurisprudence, studies of the low rate of successful intuitive predictions about future recidivism among paroled offenders serve as one of many pieces of evidence showing that even highly experienced professionals in certain domains may be no better than laypersons at making intuitive judgments.

Anticipating Deception: Applying the Space-Time Envelope

A recurring worry within alert intelligence services is whether they are being deceived by their adversaries. From Troy, in the second millennium BCE, through misdirection schemes in World War II, and on to the lead-up to the Iraq invasion of 2003, adversarial deception has played a strong or decisive role in final outcomes.307

Implications of Visualizing Anticipation

Diagramming using Concept Maps (and related kinds of diagrams called causal maps and cognitive maps) has been used as a de-biasing technique for analysis under uncertainty. This use is well known in the field of business and strategic management:

In the pages of The Journal of Strategic Management, Gerard Hodgkinson and his colleagues added:

In addition to providing a useful means for gaining insights into the nature and significance of cognitive processes underpinning strategic decision making, this dynamic emphasis on antecedents, behaviors and consequences, renders causal cognitive mapping techniques particularly attractive as a potential means for overcoming the effects of framing (and possibly other cognitive biases) in situations involving relatively complex decision scenarios.313

Hodgkinson et alia investigated “the extent to which judgmental biases arising from the framing of risky decision problems [could] indeed be eliminated through the use of this particular cognitive mapping technique” and found cognitive mapping to be “an effective means of limiting the damage accruing from this bias.”314

The Roles of Intuitive Thinking in Intelligence Sensemaking

Given these considerations, what are (or should be) the roles of skill-based intuitive and heuristic-based intuitive thinking in intelligence sensemaking? Many, if not most, intelligence professionals have had a “feeling” about an issue and what is going to happen. Sometimes those intuitions are correct, particularly if the requirement entails real-time observation and situational awareness. When it comes to anticipatory sensemaking, however, the authors suspect that intelligence professionals may fare no better than does the average citizen in predictive situations.315

There are a number of reasons for this, not the least of which has to do with the availability of evidence, or relevant information. A somewhat persistent myth about intelligence is that its professionals have access to all the information they need and that they can get any and all other necessary information. This is simply not true, and it would likely be undesirable if it were. While it is true that intelligence professionals must make their assessments based on incomplete, often faulty, and sometimes deceptive information, at least they can do so. Forcing them to try to make sense of all the relevant information relating to an issue would likely burden them sufficiently to preclude anything but the most general findings being issued—if anything could be issued at all. Finally, as was discussed above in relation to figure 1 (see Chapter 3), complexity, ambiguity, and uncertainty increase as one moves from Descriptive to Estimative (or Anticipatory) Intelligence.316

Intelligence professionals also compete against foreign intelligence organizations whose professionals may be as skilled at obfuscating what their factions, groups, or nations are doing as we are at making sense of it. Sometimes the other side is, in fact, better. And, as in a closely matched sports event, the difference between valid and true sensemaking and invalid or untrue sensemaking—or even no sensemaking at all—might be the result of luck.

Since such Type 2 domains are ones in which the primary task goals involve the understanding and prediction of the activities of individuals or groups, accuracy and precision are elusive. Consequently, as has been noted, Type 2 domains are also characterized by tasks involving a lack of timely feedback and a paucity of robust decision aids. Further, if the object of study does not know what she or they will do, how can someone else predict it reliably? Therefore, is it any surprise that in such domains, intuition is limited in its usefulness? What are intelligence professionals to do?

It should be reiterated that although over-estimative errors in intelligence sensemaking, as has been noted, are unacceptable, under-estimative errors are even less tolerated. It is better to have warned and been wrong than not to have warned and been wrong. False alarms are better than misses. A warning about an individual’s attempt to set off a bomb on a subway system—an attack that subsequently does not occur—generates far less uproar (if any at all) than does a failure to warn of an individual who in fact plans to blow up an airliner and whom anticipatory sensemaking might have caught preemptively.
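
The asymmetry can be stated as a simple expected-cost rule: warn whenever the probability-weighted cost of a miss exceeds the probability-weighted cost of a false alarm. A minimal sketch follows, with placeholder costs standing in for the institutional costs discussed above.

```python
# Expected-cost rule for issuing a warning. The cost figures are
# hypothetical placeholders.

def should_warn(p_event, cost_miss, cost_false_alarm):
    """Warn when the expected cost of silence exceeds that of warning."""
    return p_event * cost_miss > (1 - p_event) * cost_false_alarm

# When misses are far costlier than false alarms, even low-probability
# threats cross the warning threshold.
print(should_warn(p_event=0.05, cost_miss=1000, cost_false_alarm=10))  # True
print(should_warn(p_event=0.05, cost_miss=10, cost_false_alarm=10))    # False
```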

It is by no means obvious that simply throwing more information at a problem will make solving it any easier.

Future Vision: Red Brains, Blue Brains?

Darren Schreiber et alia used functional magnetic resonance imaging (fMRI) to assess how people associated with the U.S. Republican and Democratic political parties deal with risk. The researchers discovered that members of the two groups used distinctly different portions of their brains when making “winning risky versus winning safe decisions.”324 The authors note that the different portions of the brain play different roles in human cognition and conclude that

it appears in our experiment that Republican participants, when making a risky choice, are predominantly externally oriented, reacting to the fear-related processes with a tangible potential external consequence. In comparison, risky decisions made by Democratic participants appear to be associated with monitoring how the selection of a risky response might feel internally.325

While neurocognitive paradigms for intelligence sensemaking have not yet formally been identified or established, implications of this work—to the degree that intelligence professionals can speak to the concerns of decisionmakers who are, after all, particular political partisans—may be significant. The research to date shows that the cognitive mechanisms and especially the emotion-based attitudes of partisan sensemakers shape their reasoning as they assess uncertain and risky phenomena.

Looking Ahead

Intuitive reasoning is something that we do naturally, all the time. It cannot be prevented, is not easily neutralized, and it is sometimes useful and necessary in sensemaking. While it can be reliable when employed as the sole basis for actionably valid, predictive intelligence creation in Type 1 domains, it is highly fallible when used for intelligence creation in Type 2 domains.

What can be done to challenge and validate the surety one has about an intuitive intelligence judgment? Employing approaches to reasoning such as those found in critical thinking seminars and courses, especially as currently offered across the IC’s educational institutions, and developing skills that aid mindfulness (as discussed in the Preface), offer possible means of accomplishing calibrated reasoning.

CHAPTER 6
Considering Validation

How does one know if the knowledge that intelligence sensemakers create is itself valid? Does accuracy alone ensure validity? What was accurate when findings were communicated may not be accurate subsequently. This flux suggests a strong procedural basis for validation. For example, were steps followed to avoid perceptual errors and cognitive traps? Was the process documented? Were alternatives adequately explored? Given the inherent uncertainty in intelligence judgments, it remains possible that all the appropriate processes may be sufficiently applied and yet the judgment is wrong.

Analogies from Other Fields

Medicine

Medical practice is at times presented as having notable similarities to intelligence practice. For example, with respect to validation, an ultimate metric for failure in medicine is that the patient dies. But is medicine successful if the patient lives? At what quality of life and for how long are two additional questions. Perhaps death with a minimum of suffering is the most favorable medical outcome—is this a success? Depending on the specifics of the case, maybe it is.

Jurisprudence

Jurisprudence is an adversarial system in which the ultimate confrontation is a trial wherein two advocates make inferences about evidence to argue opposite sides of a case before an impartial third entity or body (often of non-experts).

So, how does one measure success? There are at least five points of view involved in jurisprudence: that of each of the two advocates, the judging entity (a jury or judge), the accused person, and the community or government. Each party, depending on the verdict, has a different metric for success. In certain types of cases, such as those involving child molestation or alleged terrorism, the accused person tends to be deemed guilty by the community, prosecuting advocate, and government even if exonerated. In all cases where an opinion—particularly in the media—runs counter to the majority’s views, the conclusion may be made that the court failed to render the “right” verdict.

Science

Science involves a number of metrics that include a sound method of documenting both process and results, as well as replication. Work is considered preliminary and non-definitive if it has not been replicated.

Despite the intellectual appeal of theories about genetic links to specific behaviors, “there are few replicated studies to give them heft.”331 In other words, the underlying theories may not be sound.

Science depends on refutation of alternative hypotheses, and replication studies attempt to refute that which has been shown. It is quite acceptable to be wrong so long as one admits it when the fact becomes apparent. Indeed, one model for science is that of competitive cooperation. Scientists attempt to tear down the new work of colleagues—without resorting to personal attacks. This dialectic approach may last generations or longer. In the process new knowledge is discovered and—if it cannot be refuted—validated.

Replication in Intelligence

The inability to replicate much of the process of sensemaking in intelligence limits the application of this indispensable practice of science. The pressures of real-time production inhibit the re-visitation of past judgments, although with at least one recent National Intelligence Estimate, the repetition of “alternative analysis” led to the questioning and revision of the original conclusions. Essentially, any meaningful replication of intelligence phenomena can only be accurately made in foresight, as in a National Intelligence Estimate.

When intelligence students, faced with a scenario involving three fictitious nations at odds with each other, develop a common set of hypotheses regarding who will initiate a war and with whom, and are then given a finite set of evidence and a common method such as the Analysis of Competing Hypotheses, they come to similar conclusions as to which hypotheses are least likely and therefore which eventualities can be expected.334 Unfortunately, there does not yet exist a similar body of results for real intelligence problems interpreted through the lenses of different intelligence disciplines and sources.

Yet, replication remains an important metric of the intelligence sensemaking process. As Caroline Park notes, “[the] basic reason research must be replicated is because the findings of a lone researcher might not be correct.”

Lynn Hasher, David Goldstein, and Thomas Toppino concluded that confidence in assertions increases through repetition of the assertions in situations when it is impossible to independently determine their truth or falsity.338

Validation in Foresight and Hindsight

People—and intelligence practitioners and their customers are merely people—evaluate judgments they have made in hindsight. They believe, according to Mark and Stephanie Pezzo, “that one could have more accurately predicted past events than is actually the case.”340 Thus, hindsight bias occurs at least in part because, as people make sense of “surprising or negative” events, “the reasons in favor of the outcome [are] strengthened, and reasons for alternative outcomes [are] weakened.” Further, in hindsight all the relevant facts may be known, whereas in foresight this is not the case. But evaluating “mistakes” in hindsight obscures an important point made clear by Taleb: mistakes can only be determined as such by what was known at the time they were made, and then only by the person making the mistake.341 In other words, mistakes need to be evaluated from the points of view held in foresight. And seen from that perspective they may not be mistakes at all.

Applied to intelligence sensemaking, this means that many so-called intelligence errors and failures may, in fact, be well-reasoned and reasonable judgments based on what is known prior to the decision. Certainly, when viewed in hindsight they were wrong. But in foresight they were accurate and valid to the best of the sensemaker’s abilities. How can this enduring problem be mitigated?

Validating the Practice of Intelligence Sensemaking

What else contributes to bringing about validated sensemaking? If a method does not do what it is commonly purported to do, is it invalid? This is one question that has been raised with regard to the Analysis of Competing Hypotheses, or as it is commonly known, ACH. Richards Heuer, Jr. initially developed the method for the detection and mitigation of attempts at adversarial denial and deception.343 ACH forces consideration of alternative explanations for, or predictions about, phenomena.344 It forces consideration of the entire set of evidence, not “cherry-picked” trifles that support a favored hypothesis.

However, a study by MITRE failed to show that ACH eliminates confirmation bias.345 Both the MITRE study and an earlier one by NDIC student Robert Folker do suggest that ACH is of value when used by novice intelligence professionals. Folker, though, tentatively concluded that experts seem not to be aided by the method.346 Is it still a valid method for intelligence sensemaking?

Perhaps it is. The method provokes detailed consideration of the issue and the associated evidence through the generation of alternative explanations or predictions and the marshaling of the evidence. It asks the sensemaker to establish the diagnosticity of each piece of evidence.

ACH further makes explicit the fact that evidence may be consistent with more than one hypothesis. Since the most likely hypothesis is deemed to be the one with the least evidence against it, honest consideration may reveal that an alternative explanation is as likely or even more likely than that which is favored. The synthesis of the evidence and the subsequent interpretations in light of the multiple hypotheses is also more thorough than when no such formalized method is employed.
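
A minimal sketch of the bookkeeping just described, built on an invented scenario: each item of evidence is scored against each hypothesis, and the preferred hypothesis is the one with the least evidence against it. Heuer’s full method also weighs diagnosticity and source credibility, which this toy omits; the hypotheses, evidence, and scores are fabricated for illustration only.

```python
# Toy ACH matrix for an invented scenario. Rows are evidence; columns
# are hypotheses; cells mark consistency ("C") or inconsistency ("I").

HYPOTHESES = ["H1: military exercise", "H2: attack preparation", "H3: deception"]

MATRIX = {
    "troop movement observed":        ["C", "C", "C"],  # fits all: not diagnostic
    "no logistics buildup":           ["C", "I", "C"],
    "public announcement of drill":   ["C", "I", "I"],
    "unusual communications silence": ["I", "C", "C"],
}

def evidence_against(matrix, hypotheses):
    """Count the items of evidence inconsistent with each hypothesis."""
    counts = dict.fromkeys(hypotheses, 0)
    for scores in matrix.values():
        for h, s in zip(hypotheses, scores):
            counts[h] += (s == "I")
    return counts

counts = evidence_against(MATRIX, HYPOTHESES)
print(counts)
# The preferred hypothesis is the one with the LEAST evidence against it.
# Here H1 and H3 tie at one inconsistency each -- a cue to forage for
# more diagnostic evidence rather than to declare a winner.
print(min(counts, key=counts.get))
```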

Indeed, Robert Folker’s “modest” experiment in applying qualitative structured methods—specifically ACH—to intelligence issues showed that “analysts who apply a structured method—hypothesis testing, in this case—to an intelligence problem, outperform those who rely on ‘analysis-as-art,’ or the intuitive approach.”347 Simply put, Folker experimentally showed that method improves the quality of practitioners’ findings. Folker’s study offers evidence that “intelligence value may be added to information by investing some pointed time and effort in analysis, rather than expecting such value to arise as a by-product of ‘normal’ office activity.”348

One variation on ACH implementation provides a structured means of developing issues. As applied by the faculty and students of the Institute for Intelligence Studies at Mercyhurst College, practitioners begin with a high-level question and use sequential iterations of ACH to eliminate alternative explanations.349 The next phase takes the “non-losers” and develops them further. Another round is conducted and again the “non-losers” are selected and further developed. While this could generate a plethora of branching explanations, in reality it is the author’s experience that it tends to disambiguate the issue fairly efficiently. At worst the structuring inherent in the method leaves the sensemaker with an in-depth understanding of the issue; at best, a couple of eventualities and their likely indicators are determined. Assets can then be tasked, foraging conducted, and more exact determinations made as the issue develops.
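
The iteration might be rendered schematically as follows; the elimination margin and the development step are placeholders of the author’s invention, not the institute’s actual procedure.

```python
# Schematic "non-loser" iteration over ACH rounds. The margin rule and
# develop() step are placeholders.

def keep_non_losers(counts, margin=1):
    """Retain hypotheses within `margin` inconsistencies of the best."""
    best = min(counts.values())
    return [h for h, c in counts.items() if c <= best + margin]

def develop(hypothesis):
    """Placeholder: elaborate a surviving hypothesis into refined variants."""
    return [hypothesis + " / variant A", hypothesis + " / variant B"]

# Round 1: inconsistency counts from an initial ACH pass (invented).
round_one = {"exercise": 1, "attack preparation": 4, "deception": 2}

survivors = keep_non_losers(round_one)
next_round = [v for h in survivors for v in develop(h)]
print(survivors)   # ['exercise', 'deception']
print(next_round)  # refined hypotheses carried into the next ACH round
```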

Johnston observed that the IC has at its disposal “at least 160” [analytic methods] but lacks “a standardized analytic doctrine. That is, there is no body of research across the Intelligence Community asserting that method X is the most effective method for solving case one and that method Y is the most effective method for solving case two.”353 As referenced here, such a doctrine arises out of knowledge that the specific methods are valid—in other words, that it has been demonstrated empirically that they actually do what they claim to do. Such a doctrine proffers a menu of sensemaking options dependent on the goals of the sensemaker. The current model, in which validity is presumed by intelligence professionals because they are taught the method(s) in the community’s training schools, is insufficient because, at the most basic level, such a metric cannot determine validity. Further, there is little sense of what methods are appropriate in what situations. A claim of “we always do it that way” is known to be insufficient but remains part of the “corporate analytic tradecraft.”

Heuer noted that “intelligence [error] and failures must be expected.”354 One implication of this assertion is that intelligence leadership cannot fall back on a “lack of skills” excuse when the next major intelligence failure occurs. However, without validating a canon of method and a taxonomy to characterize its use, intelligence professionals will remain hamstrung in their efforts to make fuller sense of threatening phenomena, increasing the likelihood of error and failure.

Seeking Validation: Toward Multiple Methods

Within the canon of social science method lies an approach to sensemaking that may offer intelligence practitioners a means of disambiguating the wicked mysteries as well as the hard puzzles they face daily. Even in current practice, intelligence practitioners employ this approach when they do not rely on merely one method for sensemaking. Multi-method intelligence sensemaking explores complex issues from multiple perspectives. Each method used—such as ACH—provides an incomplete understanding of the issue, leaving the intelligence professional the task of making sense of the differing sensemaking conclusions.

Intelligence professionals who engage in a “multiframe” sensemaking approach consider issues from multiple points of view created from the intersections of action- and process-focused vantage points and the perspectives of the individual and the collective. As developed by Monitor 360 for the National Security Agency’s Institute for Analysis, the approach facilitates sensemakers’ developing different answers to the intelligence question at hand.356 They must then combine the differing results—in other words, synthesize and interpret partial answers—in order to better understand the issue underlying the question and to determine a best (at the time) understanding of the issue.

The lexicon of multi-methodology provides a term for this combinatorial activity: triangulation, or pinpointing “the values of a phenomenon more accurately by sighting in on it from [the] different methodological viewpoints employed.”357 This is a process of measurement—which, to be useful (accurate), “must give both consistent results and measure the phenomenon it purports to measure.”358 In other words, triangulation requires that the methods employed are repeatable and valid. Intelligence creation requires that those methods be applied with rigor.
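
Rendered computationally, triangulation amounts to scoring a common set of hypotheses with each method and examining both the combined estimate and the spread across methods, since it is agreement across methods that lends the combined result its force. The method names and scores below are invented for the example.

```python
# Toy triangulation: three methods score the same hypotheses on [0, 1].
# Names and numbers are invented.

from statistics import mean, pstdev

method_scores = {
    "ACH":              {"H1": 0.7, "H2": 0.2, "H3": 0.1},
    "multiframe":       {"H1": 0.6, "H2": 0.3, "H3": 0.1},
    "geospatial match": {"H1": 0.5, "H2": 0.4, "H3": 0.1},
}

for h in ["H1", "H2", "H3"]:
    scores = [per_method[h] for per_method in method_scores.values()]
    # High mean with low spread: the methods are "sighting in" on the
    # same answer; high spread signals that the methods disagree.
    print(h, "mean:", round(mean(scores), 2), "spread:", round(pstdev(scores), 2))
```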

CHAPTER 7
Making Sense of Non-State Actors: A Multimethod Case Study of a Wicked Problem

We find that a diversely practiced, multimethod approach that incorporates a specific process, organizing principles, and an operational structure can fulfill the need for 21st-century intelligence sensemaking. Such an approach reflects a Kendallian approach to intelligence sensemaking: it collaboratively paints a picture for a decision-maker rather than presenting a “scientific fact.”

Introducing the Wicked Problem of Non-State Actors

It is commonly thought that non-state actors are emerging as a dominant global force in the realm of national and international security, yet conclusive evidence confirming this belief is lacking. Part of the challenge is that non-state actors fit the profile of a wicked problem. While it is true they can be identified (the name accomplishes this—they are non-state versus state actors), there is no commonly accepted definition.361 In other words, non-state actors are defined by what they are not, leaving room for disagreement as to what they are. Further, some non-state actors can be characterized as “good” and others as “bad.” Differing points of view about whether a particular non-state actor is “good” or “bad” lead to varying characterizations of their activities.

361 National Intelligence Council, “Nonstate Actors: Impact on International Relations and Implications for the United States,” Conference Report, August 2007, URL: <http://www.dni.gov/nic/confreports_nonstate_actors.html>, accessed 10 May 2010. The Conference Report suggested that “Nonstate actors are non-sovereign entities that exercise significant economic, political, or social power and influence at a national, and in some cases international, level. There is no consensus on the members of this category, and some definitions include trade unions, community organizations, religious institutions, ethnic groupings, and universities in addition to the players outlined above.”

Issues involving non-state actors lack clear definitions and are resistant to traditional intelligence approaches due to their open-ended nature; potential solutions to problems are neither clearly right nor wrong; and difficult-to-discern, complex inter-linkages exist, although drivers for issues involving non-state actors can be identified.

Three Approaches to Making Sense of Non-State Actors

The starting point for this case study was a 2007 National Intelligence Council (NIC) Desktop Memorandum that analyzed key findings from a series of seminars co-hosted with the Eurasia Group, a global political risk research and consulting firm.

The Memorandum observes that non-state actors are of interest “because they have international clout, but are often overlooked in geopolitical analysis.” The implicit but demanding questions of why and by how much non-state actor “power” and “influence” have increased worldwide were not answered.

363 See the Eurasia Group web site, URL: <http://www.eurasiagroup.net/about-eurasia-group>, accessed 14 May 2010.

364 National Intelligence Council, “Nonstate Actors: Impact on International Relations and Implications for the United States,” Conference Report, August 2007, URL: <http://www.dni.gov/nic/confreports_nonstate_actors.html>, accessed 27 April 2010, 2. Cited hereafter as NIC, “Nonstate Actors.”

Key Findings of the Mercyhurst Study on Non-State Actors

Students in the Mercyhurst College Institute of Intelligence Studies (MCIIS) focused on the roles non-state actors play and their expected impact in Sub-Saharan Africa over the next five years (results), and on building a multi-methodological paradigm for considering the issue (process).369 Within this context, three additional questions were raised:

  • What is the likely importance of [Non-State Actors] vs. State Actors, Supra-State Actors, and other relevant categories of actors in Sub-Saharan Africa?
  • What are the roles of these actors in key countries, such as Niger?
  • Are there geographic, cultural, economic or other patterns of activity along which the roles of these actors are either very different or strikingly similar?370

The students developed a scoring system for both lawful and unlawful non-state actors in terms of the socio-political environment and applied this index to all 42 Sub-Saharan African countries (figure 7). The scoring characterized the roles of non-state actors vis-à-vis government and non-government interactions based on four drivers: an “ease of doing business” variable and a contrasting “corruption perception” variable, and a democracy variable and a contrasting failed-states variable.373 Stable and failing states were revealed to have differing interactions with non-state actors. In the former, non-state actors were lawful actors who tended to have government-sanctioned role potentials, whereas in the latter they were typically unlawful actors engaged in anti-government roles. A hypothetical sketch of this kind of scoring follows.
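
The study’s published description does not include the exact formula; purely as a hypothetical reconstruction of the kind of scoring involved, the four drivers might be normalized and combined into contrasting lawful and unlawful role-potential scores. The combination rule, weights, and sample values below are invented.

```python
# Hypothetical four-driver index for non-state-actor role potential.
# Weights, scaling, and sample values are invented; the MCIIS scoring
# formula itself is not reproduced in the text.

def role_potential(ease_of_business, corruption, democracy, state_failure):
    """All inputs normalized to [0, 1]; higher = more of the named quality."""
    lawful = (ease_of_business + democracy) / 2   # favors sanctioned roles
    unlawful = (corruption + state_failure) / 2   # favors anti-government roles
    return {"lawful": round(lawful, 2), "unlawful": round(unlawful, 2)}

# A stable state vs. a failing state (illustrative values only).
print(role_potential(0.8, 0.2, 0.9, 0.1))  # {'lawful': 0.85, 'unlawful': 0.15}
print(role_potential(0.2, 0.8, 0.2, 0.9))  # {'lawful': 0.2, 'unlawful': 0.85}
```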

Mapping significant multinational corporations, NGOs, and terrorist organizations to specific countries as representative of non-state actor activity revealed correlations between role-potential spectra and geospatial data, whereby each generally supported the other.375 Thus, geospatial sensemaking tended to confirm the conclusions derived from the non-state actor role spectra.

Key Findings of the Least Squares Study on Non-State Actors

The Least Squares study of non-state actors began with the hypothesis that “non-state actors emerge in vacuums and voids.”376 Their study focused on the issue of violent and non-violent non-state actors but also explored a set of contingent methodological approaches. The inquiry sought to contribute novel understanding of non-state actors by

synthesizing available data and disparate taxonomies,…by generating and testing hypotheses concerning the key dynamics driving the transfer of power from states to [non-state actors] and favoring the emergence of novel [non-state actors] under globalization; and…by investigating the development of methodologies that might be most useful for future research.377

Two key findings revealed the critical role of environmental knowledge and of public expectations in motivating non-state actors, both as individuals and as members of the collective. These findings are significant to efforts aimed at mitigating the recruitment of specific Al Qaeda-associated individuals to assail the United States. Additionally, the team found that an approach based on critical thinking led to reasoning pathways that likely would not have been noticed or explored had a more intuitive and less rigorous approach been employed.378

Approaches and Methodologies

Thinking Critically about the Issue

In order to impose structured thinking on a highly unstructured problem, the NIC advisor to LSS and the members of LSS first inventoried their own understanding of the non-state actor issue using the “eight elements of reasoning” developed and espoused by the Foundation for Critical Thinking and used throughout much of the IC.379 These elements include:

  • Question at issue (What is the issue at hand?)
  • Purpose of thinking (why examine the issue?)
  • Points of view (What other perspectives need consideration?)
  • Assumptions (What presuppositions are being taken for granted?)
  • Implications and consequences (What might happen? What does happen?)
  • Evidence (What relevant data, information, or experiences are needed for assessment?)
  • Inferences and interpretations (What can be inferred from the evidence?)
  • Concepts (What theories, definitions, axioms, laws, principles, or models underlie the issue?)

Literature Consultation

Concurrent with their critical thinking, MCIIS, LSS, and others examined key academic and applied-academic works related to the assessment of non-state actors. Notable among these is work by Bas Arts and Piet Verschuren, which describes a qualitative method for assessing the influence of stakeholders in political decision-making.380 The “triangulation” referred to in their title encompasses “(1) political players’ own perception of their influence; (2) other players’ perceptions of the influence brought to bear; and (3) a process analysis by the researcher.”
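As a rough illustration of how such triangulation might be operationalized, the sketch below accepts an influence rating only when at least two of the three assessments converge. Arts and Verschuren’s method is qualitative, so the numeric scale and the majority-agreement rule used here are assumptions for demonstration only.

```python
# Illustrative sketch of the three-way triangulation described above:
# the 0-2 scale (none/some/decisive influence) and the
# majority-agreement rule are assumptions, not the authors' method.

def triangulate_influence(self_perception: int,
                          others_perception: int,
                          process_analysis: int):
    """Return an influence rating only when at least two of the three
    assessments agree; otherwise flag the case for further research."""
    votes = [self_perception, others_perception, process_analysis]
    for rating in set(votes):
        if votes.count(rating) >= 2:
            return rating
    return None  # no convergence: the influence claim stays unsupported

print(triangulate_influence(2, 2, 1))  # 2: two assessments converge
print(triangulate_influence(2, 1, 0))  # None: no agreement
```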

Another contribution in the applied realm came from recent work by a new generation of military (and ex-military) authors who see the rise of non-state actors as a seminal event that will drive U.S. national security strategy. Among these sources is Warlords Rising: Confronting Violent Non-State Actors, whose authors anchor their work in open systems theory (the concept that actors and organizations are strongly influenced by their environment).382 In particular, they ask what environments give rise to violent383 non-state actors, what sustains them, and how changes to those environments might disrupt them.

Application: Indicators of Non-State Actor Power in Africa

The students were able to validate their findings by employing three different methods and different evidence sets, and also to assess their methodological validity. This kind of meta-sensemaking could constitute a bridge between now-traditional IC efforts and a revolutionary approach to building a sensemaking argument in official circles.

Of note is a remark by project supervisor Professor Wheaton: “The big advantage [of the multimethodological approach] was the ability to see similar patterns crop up again and again by looking at the data in different ways. This increased their [the students’] confidence enormously.”384 Additionally, given the short time frame and the large scale of the project, a multimethodological approach was perhaps the only means of tackling the problem.

Application: A Multi-Disciplinary Workshop on Non-State Actors

Participants used three frameworks to consider the environment within which the three groups exist and operate.

  • Points of segmentation are the boundaries or borders between and among groups of people, where the degree of disagreement on issues is indicated numerically.386 Points of segmentation can track inherent characteristics such as gender or ascribed cultural differentiators such as Sunni or Shiite. Specific values for points of segmentation are derived from an expert assessment of the strength of the actors’ expressed attitudes, reinforced by observable behavior. The set of points distinguishes one individual or group from another and identifies the possible points of cooperation and conflict most suitable for exploitation by the protagonist.
  • Prospect theory, originally developed by Kahneman and Tversky, posits that “people tend to be risk-preferring when facing long shot risks involving significant gains, such as betting on race horses, and are risk averse when facing significant losses: [in other words, when] buying a home or car insurance.”388 The value function commonly used to capture this asymmetry is sketched just after this list.
  • Institutional interactions is the name associated with a systematic model that allowed workshop participants to explore the complex roles non-state actors play as they influence (and are influenced by) overlapping institutional capabilities and needs.389 The participants concluded that even a simple model of institutional networks has enormous complexity—or high entropy—making it a good candidate for a subsequent in-depth modeling project. Due to imposed time constraints, development and application of the modeling was not completed.
  • Morphological Analysis was identified as an additional approach through the institutional interactions method. Morphological analysis considers an entire space of possible implications, opening the way for follow-on disambiguation (perhaps using additional multimethodological approaches) in order to abductively and soundly derive the kind of judgments that become useful knowledge.390
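For reference, the gain-loss asymmetry named in the prospect theory bullet above is usually formalized with a value function of the following shape. The functional form and the parameter estimates quoted below are the standard ones from Tversky and Kahneman’s later cumulative prospect theory work, not figures drawn from the workshop itself.

```latex
v(x) =
\begin{cases}
  x^{\alpha} & \text{if } x \ge 0 \quad \text{(gains)} \\
  -\lambda\,(-x)^{\beta} & \text{if } x < 0 \quad \text{(losses)}
\end{cases}
```

With the commonly cited estimates of roughly α ≈ β ≈ 0.88 and a loss-aversion coefficient λ ≈ 2.25, the function is concave over gains (risk aversion) and convex but steeper over losses (risk seeking and loss aversion), matching the betting and insurance behavior described above.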

Their discussions and modeling, however, supported the Warlords Rising thesis: that environment is a critical factor in understanding the emergence and roles of non-state actors.

Critical Assessment: Lessons Learned from the Study of Non-State Actors

No matter what the methodological approach, project participants emphasized that close attention to environmental factors remains a key to understanding non-state actors. Nonetheless, even those approaches that emphasized environmental factors fell prey to certain inadequacies.

Changes in the Roles of Non-State Actors: An Alternative View

A systematic review of what was done and not done in the three non-state actor studies provides insights into how critical thinking can combine with multimethodological, mindful sensemaking to provide a paradigm for 21st Century intelligence creation and its active communication to policymakers in a fashion that transcends the Sherman Kent tradition. This review is facilitated by employing the eight elements of reasoning noted above, augmented by two further considerations: alternatives and context.

  • Question: The beginning question of the NIC-Eurasia Group seminars was, “If non-state actors are emerging as a dominant global force, where is the evidence?” In other words, while there appears to be a consensus that they are a dominant global force, where is the formal evidence?
  • Purpose: Determine whether or not there is evidence that non-state actors are emerging as a dominant global force. This problem is one of basic research to determine if the evidence in fact exists. However, the underlying issue of how we might measure relative power must first be conceptualized and addressed.
  • Points of View: As we consider the original and complementary studies, there are two predominant points of view at issue: first, that of the NIC and its customers—who may believe that non-state actors are an emerging global force and want to quantify this shift in influence and power. The other, unavoidable point of view is that of the non-state actors themselves—some of whom would believe they are an emerging force and some of whom would not.
  • Assumptions: The use of the term “non-state actor” as an apparently all-encompassing term in the initial problem question and statement presumes an initial understanding of, and consensus about, what is or is not a non-state actor. This presumption is inaccurate, as the differing foci of the three groups make clear. The differences in this case became evident only in hindsight, although measures could be taken in foresight at least to check the understanding of the different groups engaged in collaborative assessments.

Greater precision in the term “non-state actor” is needed. Differentiating between benign and non-benign non-state actors is a first step. Subsequent refinement of “benign non-state actors” into non-governmental organizations, multinational corporations, and super-empowered individuals is also useful. A similar set of distinctions within the set of violent non-state actors is also necessary. Then, a crosscheck among the teams must be accomplished so that consensus on the meaning and use of these terms is achieved.

Another assumption involves what is meant by the term “dominant global force.” Again, both greater precision and clarity are needed in coping with this assumption. One key question is, “Exactly what does dominant global force mean?” One answer could be that everywhere on the planet non-state actors are the force affecting politics and life. Such a simplified and simplistic view is likely inaccurate, and a range of political process models—among them those of the “rational actor,” of “bureaucratic politics,” and of “organizational process”—need to be parsed.

  • Implications and Consequences: The consideration of implications and consequences means to anticipate and explore the events that follow a decision, and to put in play especially the interpretive aspect of sensemaking. In the context of non-state actors, it means to explore what happens if non-state actors are (or are not) emerging as dominant global forces and we are right or wrong about their power. Regardless of whether non-state actors are a dominant global force, if their influence is underestimated then surprises can be expected: Some non-state actor is likely to act in a fashion that is completely unexpected and with unanticipated results. On the other hand, overestimating the influence of non-state actors might create self-fulfilling prophecies. If, though, the influence of non-state actors is accurately measured, it may be possible to mitigate that influence (where the non-state actors are acting on interests at odds with those of the United States). Alternatively, where non-state actors act in consonance with the interests of the United States, or where opportunities can be put in place that encourage them to be helpful, the United States advances its goals.

Finally, the sensemakers’ interpretation of likely actions or events allows the implications and consequences of those actions to be considered, even if absolute prediction is elusive. Here, in the context of collaborative sensemaking through the communication of intelligence to a policymaker, we understand the admonition of Sherman Kent’s contemporary critic, Willmoore Kendall, that intelligence most critically “concerns the communication to the politically responsible laymen of the knowledge which…determines the ‘pictures’ they have in their heads of the world to which their decisions relate.”392 This vision suggests communication of intelligence as an “insider” rather than offering “intelligence input” at arm’s length in the Kent paradigm.

  • Evidence: What evidence is needed to determine that non-state actors are, and as importantly, are not an emerging dominant global force? As we have seen, each group gathered and sifted considerable information on non-state actors, some of it highly relevant to the central question and some not. To the best knowledge of the authors, each of the three groups chose and evaluated evidence only with inductive logic. They did not take advantage of a means, available in particular to a community with robust intelligence capabilities, to deductively eliminate one of the two possibilities.
  • Inferences and Conclusions: With three different and independent efforts, the challenge lies in ensuring a useful triangulation of the results of those potentially disparate efforts. The Mercyhurst approach (internally triangulated) found that non-state actors, both legal and extralegal, are least effective in authoritarian states.

The Least Squares workshop demonstrated that within the context of either failed or failing states, expectations and perceptions of the public, or the political environment, are key drivers in anticipating the likelihood of actions by (violent) non-state actors. Strident or acrimonious expressions of dissent that arise when domestic and international political/economic issues reinforce each other within the United States and Europe suggest a possible correlation in post-industrial states. This leads to a general conclusion that when expectations are at odds with situational reality, non-state actor activity increases.

  • Concepts: Not only the assumptions, but other concepts as well were in play at multiple levels in the non-state actor case studies. The very notions of “non-state actor” and the ideas of democracy, authoritarianism, and anarchy needed clarification, ideally through well-grounded, empirical as well as theoretical research, to ensure common understanding.
  • Alternatives: If non-state actors are not emerging as a dominant global force, then what can we say about their global role? Is their influence staying the same? Is it diminishing? Given a credible means of measuring change in the influence and power of non-state actors, the next step in this study of non-state actors would be to examine hypotheses generated from these alternative questions.
  • Context: As has been repeatedly noted, non-state actors present both a challenge to U.S. interests and an opportunity for advancing those interests. The U.S. would like to mitigate the challenges and take advantage of the opportunities. How to make that happen in domains and regions of little existing U.S. influence or of waning U.S. and Western influence becomes a key concern as the United States strives to carry out a meaningful global role. Future attempts to make sense of the role of non-state actors may benefit from tapping into the larger context of recent policy-relevant literature on the problem of fragile states in applied academic journals.

Moving Beyond a Proto-Revolution

Microcognition and Macrocognition in the Study of Non-State Actors

There emerge two very general domains of which intelligence professionals must make sense: that of the relatively static, state-based system and that of the much more dynamic non-state actor. Of course, these do not exist in isolation from one another. There are boundaries, interstices, and points of segmentation; there is considerable overlap when one usurps or adopts the actions of the other. Further, the separate domains of domestic and foreign areas of interest and action, embraced by the Kent model of intelligence creation and communication, have been superseded by an indivisible, worldwide web of personal and organizational relationships. Broadly speaking, the “classic” model of intelligence sensemaking largely sufficed and perhaps continues to suffice when issues remain clearly tied to the political entities associated with the Westphalian system of state-based power.

However, when dealing with non-state actors, a new, revolutionary paradigm becomes essential for making sense of issues as well as their interactions with the states of the other paradigm. For the state-based system, a traditional, intuitive, and expert-supported approach was largely adequate. For non-state actors, as is glimpsed in this case study, a more rigorous approach is required.

In national intelligence terms, practitioners and their customers work in a macrocognitive environment as they manage the uncertainty they face in dealing with wicked problems.

Macrocognition, then, includes a focus on process as well as results—what we have labeled mindful, self-reflective sensemaking.

Klein et alia observe that intelligence professionals and decisionmakers traditionally are “microcognitively” focused. That is, like those who follow in the Sherman Kent tradition, they are concerned with solving puzzles, searching, and “estimating probabilities or uncertainty values” for different phenomena of interest.403 As has been discussed, such an approach still may be suitable for solving tame problems or those of the Type 1 domain. Thus, microcognition describes the reductionist foci of the current intelligence “analysis” paradigm. However, this is not sensemaking, which requires another approach.

The transition or shift to macrocognition requires a focus on “planning and problem detection, using leverage points to construct options and attention management.”

Elements of the foregoing case study exemplify this strategy. Both the Mercyhurst Role Spectrum Analysis and the Least Squares Points of Segmentation identified potential leverage points that revealed truths about non-state actors, leading to more robust problem detection. A next step would have been to take the triangulated results from all the deployed sensemaking methods and use them to construct options for dealing with non-state actors in multiple environments. Such a macrocognitive approach would allow more persistent attention to the anticipation of the broad course of events (in this case involving non-state actors), in contrast to a microcognitive focus on predicting more isolated and specific future incidents.

Next Steps in Revolutionary Sensemaking about Non-State Actors

The foregoing elaboration of non-coordinated sensemaking activities, even with its limitations, moved beyond the traditional model of intelligence creation. It specifically identified the multiple approaches taken by independent teams who used alternative schema and methods that, perhaps unsurprisingly, resulted in a broader understanding of the problem.

Triangulation was largely informal both within and between the groups. Thus, the work met the criteria for a transitional intelligence sensemaking project. The participants in all three efforts engaged in critical thinking to one degree or another. All were also mindful of the wicked issue of non-state actors and its significance.

The adoption of the new paradigm for sensemaking depends on bringing a cooperative spirit of science and scientific inquiry to the process of intelligence creation and communication. Mindful, critical thinking-based, multimethodological approaches to analysis, synthesis and interpretation are one means of doing this. Additionally, a means needs to be found to ensure that this approach to sensemaking remains rigorous. This becomes the subject of the next chapter.

CHAPTER 8
Establishing Metrics of Rigor

Defining Intelligence Rigor

I know the distinction between inductive and deductive reasoning. An intelligence officer is inherently inductive. We begin with the particular and we draw generalized conclusions. Policymakers are generally deductive. They start with the vision or general principle and then apply it to specific situations. That creates a fascinating dynamic, when the intelligence guy, who I call the fact guy, has to have a conversation with the policymaker, who I tend to call the vision guy. You get into the same room, but you clearly come into the room from different doors. The task of the intelligence officer is to be true to his base, which is true to the facts, and yet at the same time be relevant to the policymaker and his vision. That’s a fairly narrow sweet spot, but the task of the intelligence officer is to operate in that spot.

— GEN Michael V. Hayden (Ret.), former Director of the Central Intelligence Agency and the National Security Agency

Michael Hayden’s view that the intelligence officer needs to operate in the “sweet spot” linking intelligence and policymaker cognitive worlds coincides with the aim of the sensemaking paradigm. To bring these two worlds together, intelligence professionals can take advantage of the opportunity to meld their fact-based inductive tendencies with the visionary, deductive model of policymakers through the application of collective rigor to well-conceived questions. This approach allows intelligence professionals to embrace a triangulation on wicked problems from their professional perspective, and to improve their chance to communicate with policymakers whose circumscribed comfort zone may accept or even welcome wicked problems as opportunities to apply their vision to bring about politically rewarding solutions.

At present most tradecraft for sensemaking triangulation remains intuitive, operating in the realm of tacit knowledge. Thus, part of a revolution in intelligence requires that more formal and explicit means of triangulation be developed. It may be that some existing analytic tradecraft, when conscientiously applied, will improve synthesis and interpretation. Another option is to explore and experiment with new tools for conceptualizing rigor in information analysis, synthesis and interpretation.

Rigor in sensemaking can refer to inflexible adherence to a process or, alternatively, to flexibility and adaptation “to highly dynamic environments.”408 As proponents of the latter approach, Daniel Zelik, Emily Patterson, and David Woods recently reframed the idea of rigor into a more manageable concept of “sufficiency.”409 In the applied world of sensemakers, then, an apt question is: “Were sufficient considerations made or precautions taken in the process of making sense of the issue?” Zelik et alia observe that this requires a “deliberate process of collecting data, reflecting upon it, and aggregating those findings into knowledge, understanding, and the potential for action.”410 In order to answer this question, Zelik et alia developed an eight-element taxonomy of sufficiency and a trinomial measurement of rigor: each element was calibrated in terms of high, medium, or low rigor. In their examination of information products, an overall score could be computed that, in intelligence terms, would communicate to both the practitioner’s management and consumers the rigor of the crafted intelligence product.

Attributes of the Rigor Metric

Hypothesis Exploration describes the extent to which multiple hypotheses were considered in explaining data. In a low-rigor process there is minimal weighing of alternatives. A high-rigor process, in contrast, involves broadening of the hypothesis set beyond an initial framing and incorporating multiple perspectives to identify the best, most probable explanations.

Information Search relates to the depth and breadth of the search process used in collecting data. A low-rigor analysis process does not go beyond routine and readily available data sources, whereas a high-rigor process attempts to exhaustively explore all data potentially available in the relevant sample space.

Information Validation details the levels at which information sources are corroborated and cross-validated. In a low-rigor process little effort is made to use converging evidence to verify source accuracy, while a high-rigor process includes a systematic approach for verifying information and, when possible, ensures the use of sources closest to the areas of interest.

Stance Analysis is the evaluation of data with the goal of identifying the stance or perspective of the source and placing it into a broader context of understanding. At the low-rigor level an analyst may notice a clear bias in a source, while a high-rigor process involves research into source backgrounds with the intent of gaining a more subtle understanding of how their perspective might influence their stance toward analysis-relevant issues.

Sensitivity Analysis captures the extent to which the analyst understands the assumptions and limitations of their analysis. In a low-rigor process, explanations seem appropriate and valid on a surface level. In a high-rigor process the analyst employs a strategy to consider the strength of explanations if individual supporting sources were to prove invalid.

Specialist Collaboration describes the degree to which an analyst incorporates the perspectives of domain experts into their assessments. In a low-rigor process little effort is made to seek out such expertise, while in a high-rigor process the analyst has talked to, or may be, a leading expert in the key content areas of the analysis.

Information Synthesis refers to how far beyond simply collecting and listing data an analyst went in their process. In a low-rigor process an analyst simply compiles the relevant information in a unified form, whereas a high-rigor process has extracted and integrated information with a thorough consideration of diverse interpretations of relevant data.

Explanation Critique is a different form of collaboration that captures how many different perspectives were incorporated in examining the primary hypotheses. In a low-rigor process, there is little use of other analysts to give input on explanation quality. In a high-rigor process peers and experts have examined the chain of reasoning and explicitly identified which inferences are stronger and which are weaker.
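Because each of the eight attributes is rated on the same low/medium/high scale, an overall rigor profile can be tallied mechanically. The sketch below is a minimal illustration of that idea; the numeric mapping and the simple sum are assumptions made here, since Zelik et alia’s own aggregation scheme is not reproduced in this text.

```python
# Illustrative sketch only: Zelik, Patterson, and Woods rate each of
# the eight attributes as low, medium, or high. The numeric mapping
# and simple sum below are assumptions made for demonstration.

RIGOR_LEVELS = {"low": 0, "medium": 1, "high": 2}

ATTRIBUTES = [
    "hypothesis_exploration", "information_search",
    "information_validation", "stance_analysis",
    "sensitivity_analysis", "specialist_collaboration",
    "information_synthesis", "explanation_critique",
]

def rigor_score(ratings: dict) -> int:
    """Sum the trinomial ratings over all eight attributes.

    `ratings` maps each attribute name to "low", "medium", or "high";
    the result ranges from 0 (uniformly low) to 16 (uniformly high).
    """
    return sum(RIGOR_LEVELS[ratings[attr]] for attr in ATTRIBUTES)

# Hypothetical profile resembling the LSS effort described below:
# high on every attribute except Information Validation.
lss = {attr: "high" for attr in ATTRIBUTES}
lss["information_validation"] = "medium"
print(rigor_score(lss))  # 15 of a possible 16
```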

The tradecraft underlying these attributes assesses the domains of intelligence foraging and intelligence sensemaking (analyzing, synthesizing, and interpreting) as described here. The study by Zelik et alia reveals that what might at first glance be considered the application of a low level of rigor really is merely a function of a varying distribution of rigor applied among the attributes.

This distinction suggests a profound insight, namely that information search is perceived as more highly valued than information synthesis. In the case above, a definitive judgment about this insight cannot be made, as the other attributes of the two assessments were not identical. Still, it seems clear that at least the participants in the study were still wrestling with a consideration formally discussed by Richards Heuer, Jr. (and many others before and since—including Moore and Hoffman above): How much information is necessary for effective sensemaking?

Assessing Sensemaking Rigor in Studies of Non-State Actors

Rigor and the NIC-Eurasia Group Effort

The NIC-Eurasia Group effort (summarized in figure 12 and column one of table 7) garnered the fewest points of the three groups.

Hypothesis Exploration — Low: The NIC-Eurasia Group memorandum noted that non-state actors are of interest “because they have international clout, but are often overlooked in geopolitical analysis.” The implicit but demanding questions of why and how much non-state actor “power” and “influence” have increased worldwide were not answered, nor was a time frame established. This failure to broaden the hypotheses beyond the initial framing of the issue led to a lack of incorporation of multiple perspectives to identify at least “best,” and perhaps most probable, answers to these questions.

Rigor and the Mercyhurst Effort

The Mercyhurst students relied on published data. No evidence of consultation with external experts was evident. While this was to be expected given the demographics of a student team, it nevertheless led to a score of Low for the Specialist Collaboration metric.

Rigor and the LSS Effort

The LSS social science study of non-state actors scored the highest of all three groups, earning a high in each metric save one, Information Validation, where they scored a Medium. In this case, while converging information was employed to cross-validate source accuracy for the evidence closest to the areas of interest, a systematic approach for doing this was not evident, resulting in the lower score.

Since the entire team was made up of specialists, their score in Specialist Collaboration should come as no surprise. Similarly, the involvement of diverse social scientists as well as the involvement of external peers foreordained that the Explanation Critique would be rigorous. All the individuals brought differing perspectives as they identified the strengths and weaknesses of each other’s inferences and conclusions. Finally, an explicit multi-methodological approach forced consideration of diverse interpretations of the evidence—a highly rigorous example of Information Synthesis.

Observations and Discussion

It is no accident that the traditional means by which assessments of such issues are created, as evidenced by the NIC-Eurasia Group effort, resulted in a relatively weak score, whereas the highly rigorous, critical-thinking based, multimethodological effort by a collaborative team of diverse experts led to a relatively high score (a comparison highlighted by figure 15).

Another advantage of graphic analysis using Zelik et alia’s metric is that more information can be clearly conveyed. For example, in examining the composite efforts of all three groups of participants in the non-state actor study, it is evident that Information Validation could have been improved through the use of a more rigorous systematic approach, ensuring that the sources were deemed valid and “closest to the areas of interest.”415

There appear to be several reasons why information is often not fully validated in intelligence work. First, validation is difficult, and the intelligence professional may decide the result is not worth the effort, or that initial conditions suggest validity. “Information hubris,” which arises when similar information has previously been presumed or found to be valid without negative consequence, may compound this problem. Wishful thinking and belief in the infallibility of the source are other factors that may contribute to this pathology. Finally, the inherent uncertainty of some information may make it resistant to validation. Unfortunately, any or all of these can lead to intelligence errors and failures, suggesting that information validation, despite its inclusion in the rigor metric itself, may require a transcendent application of rigor.

In applying the metric it becomes apparent that some disambiguation is necessary between several of the individual considerations. The differences between high rigor assessments involving Stance Analysis and Sensitivity Analysis at first glance appear to be unusually subtle, suggesting a need for an explanatory critique as part of the standard process assessment. Additionally, because several of the specific metrics are process-related, assessors need to be present to observe the process or otherwise have access to appropriate and sometimes-scarce process-associated materials. Alternately, a formal means for capturing applied (and omitted) sensemaking processes needs to be developed.

In developing the rigor metric, Zelik et alia note that it is “grounded largely in the domain of intelligence analysis.”417 Looking at the metric from a generalizing point of view, Zelik et alia are interested in whether it can be broadened to other disciplines, such as “information search by students working on educational projects, medical diagnosis using automated algorithms for x-ray interpretation, and accident investigation analyses.”418 For those of us within the domain of intelligence, however, that this model “emerged from studies of how experts ‘critique’ analysis (rather than how experts ‘perform’ analysis)” is a strength.419 The rigor metric has been empirically, if tentatively, shown (according to Daniel Zelik) to reveal some of the “critical attributes to consider in judging analytical rigor” in intelligence sensemaking.420 In so doing, Zelik and his colleagues also validated the usefulness of the model within the intelligence domain.

A danger inherent in any process of making sense of an issue exists when the “process is prematurely concluded and is subsequently of inadequate depth relative to the demands of [the] situation.”

As Zelik et alia conclude, “the concept of analytical rigor in information analysis warrants continued exploration and diverse application as a macrocognitive measure of analytical sensemaking activity.”424 But what does it mean if such a model is adopted, or not adopted? The final chapter examines the implications of either outcome.

CHAPTER 9
In Search of Foresight: Implications, Limitations, and Conclusions

Considering Foresight

We turn in conclusion to a discussion of the purpose of mindful, critical sensemaking for intelligence. The discussion may be best framed by pertinent questions: To what end is intelligence intended? In other words, intelligence professionals and their overseers critically ask “knowledge of what?” and, secondly, “knowledge for whom?” One answer to these questions is embodied in the concept of foresight: Intelligence knowledge advises policymakers and decisionmakers about what phenomena are likely precursors of events of interest before they occur. Such foresight—in light of the discussions in this book—does not entail specific predictions. Rather, it allows us to anticipate a range of alternative event sequences.

Foresight informs policy and decisionmakers about what could happen so that those individuals can improve the quality of their decisions. Done mindfully, its vision shifts and evolves apace with the phenomena about which it makes sense. Done wisely in such a manner as presented here, it augments the vision of leaders, enabling mobilization and discouraging two traits that often handicap visionaries: recklessness and intolerance. Done rigorously, it cannot be accused of failing to be imaginative. This prospective approach contrasts with the current practice and paradigm for intelligence production.

In the Kent tradition, as has been noted (and is summarized in figure 16), intelligence knowledge of “analyzed” issues becomes and tends to remain disaggregated into constituent parts—oriented, as Treverton notes, toward solving isolated “puzzles” rather than the more holistic “mysteries” of the intelligence world. On the other hand, we have discussed how Kendall’s competing idea, that intelligence knowledge should “paint a picture” (by way of a macrocognitive, holistic approach) for the policy maker as a fellow “insider,” is consistent with a model of intelligence where the predominant method and motive of intelligence sensemaking is through aggregation and the articulation of a fact-based “vision” recognizable by national-level policymakers (figure 16). Even in an operational military scenario, where isolated, specific facts are essential to successful employment of mission knowledge, a larger intelligence sense of who the ultimate commanders are and why they are doing what they’re doing, remains the essence of useful, foresightful strategic knowledge—also known as national intelligence.

Implications

Creating intelligence as presented here is a mindful process of sensemaking, encompassing the activities of planning, foraging, marshaling, understanding, and communicating. It is critical of itself and the means that are employed in bringing it about. It lies within the largely overlooked Kendallian vision of what national intelligence ought to accomplish. This approach allows a focus on better and worse solutions, and anticipation of likely futures, instead of a more narrow focus on right and wrong answers in an intellectual environment trained on predictive and specific warning. It can make sense of wicked problems.

By contrast, intelligence as it is currently practiced is still somewhat akin to the practice of medicine in the 14th Century.

Intelligence practitioners find themselves in a similar situation. They often do not know why they do what they do, only that the last time, it “worked”—or that it is an “accepted practice.” They do not acknowledge that they have “forgotten” all the times it did not work. Yet, intelligence practitioners who would wear the “professional” label need to know what they are doing and why.427

One means set forth for “improving intelligence” is to capture the processes by which sense is made of an issue. It is certainly true that imposing audit trails is a critical step because they encourage process improvement in the light of serious errors, and stimulate repetitive analysis, synthesis, and interpretation for validation in the full course of sensemaking. However, audit trails remain inadequate when the Community cannot understand, from an epistemological point of view, what does and does not work and in what situations.

The major intelligence failures of the first years of the present decade, as well as repeated failures over at least six decades, demonstrate what happens when there is a formal failure to synthesize and interpret beyond what is popularly believed, or even to recognize that a situation exists that requires new synthesis and interpretation. A popular hypothesis is that tradecraft can minimize the likelihood of such failures of imagination. Yet this hypothesis remains untested except in some anecdotal cases, which, given the Type 2, wicked nature of the intelligence issues the Community now often faces, is an inadequate test.

Limitations

It should be noted that a tradecraft of mindful understanding does not guarantee accurate findings. Any of the components of sensemaking can be done poorly yet “correct” answers can be reached. Disaggregating phenomena can be done well yet yield faulty results. Synthesis and interpretation of analyzed phenomena can still lead to faulty conclusions. However, analysis, synthesis and interpretation within the framework of appropriately applied, multimethod tradecraft does guarantee more rigorous sensemaking.

Conclusions

As has been noted repeatedly in this book, many 21st-Century intelligence issues are wicked problems: They are ill-defined and poorly understood issues with multiple goals that must be made sense of within severe time constraints; the stakes and risks are high and there exists no tolerance for failure. As a means of increasing situational awareness, merely creating mindfulness about such complex issues falls short. On the other hand, a mindful sensemaking approach to situational awareness accomplishes more by enabling the intense, holistic scrutiny of a complex developing scenario, as suggested in the case study in chapter 7. This macrocognitive approach ensures that the knowledge created also evolves.

If intelligence is to be acted upon, it must both rise above the noise and get the attention of policymakers. A critical, mindful process of sensemaking offers a means for this to occur. As we have seen, it covers the issue broadly, takes into account the issue’s complexity, and is systematic and rigorous. It offers the best means currently understood for making sense of what is known and knowable.

Aggressive, mindful sensemaking is one pathway to this new paradigm, and may require a different mix of skills and abilities than is currently present. It certainly requires greater, authentic diversity. Considering the present community, one is reminded of Kent’s quip, “When an intelligence staff has been screened through [too fine a mesh], its members will be as alike as tiles on a bathroom floor—and about as capable of meaningful and original thought.”439 In contrast, making sense of the 21st Century’s intelligence challenges requires as much rigorous, “meaningful and original thought” as we can muster. Sensemaking, as it has been developed here, offers us a means of creating that desperately needed thought.

Review of Irresistible Revolution: Marxism’s Goal of Conquest & The Unmaking of the American Military

Irresistible Revolution: Marxism’s Goal of Conquest & the Unmaking of the American Military was written by Matthew Lohmeier, a former lieutenant colonel in the United States Space Force, and published in 2021. The book is, roughly, three-fifths about the history of Marxism as an intellectual project and two-fifths about the author’s personal experiences in the U.S. military.

The opening chapter, Transforming American History, provides a brief account of the controversies and figures involved in the 1619 Project and the 1776 Commission in the context of the culture war. A struggle over the meaning of America is presently underway, one at the heart of a social and political polarization that threatens to permanently fracture American civil society. Lohmeier describes it as a contest over the meaning of America, contrasting how civil rights icons Frederick Douglass and Martin Luther King, Jr. recognized the Declaration of Independence and the Constitution as “deep wells of democracy” with the Marxist view, which holds these documents in contempt. This contest is framed via reference to George Orwell and the power one gains by dictating official truth. A broader account of slavery is also briefly touched upon, highlighting how the historical research of Peter W. Wood, president of the National Association of Scholars, shows slavery was a worldwide phenomenon as early as the 14th century – far before the 1619 date cited by the 1619 Project.

In Chapter Two, America’s Founding Philosophy, Lohmeier provides a historiography of American political economy as embodied in the Founders’ ideology. It is instructive in highlighting how, in contrast to the claims made by those on the left, women and blacks were never conceived of as innately inferior but as inherently equal human beings who were historically unequal due to the conditions of the society. The Declaration of Independence and the Constitution were enablers of the means by which historically unequal groups could achieve legal equality. Their Transitional Program, then, to borrow a Marxian turn of phrase, was delineated not in a strategic set of actions to be taken to achieve a quasi-utopian outcome but in the ability to reference the Declaration, the Constitution, and legal literature to legitimize what was already granted to them “by God.”

In Chapter 3, Marxism’s Goal of Conquest, Lohmeier shares knowledge he gained while taking courses within the U.S. military to give context to the schools of thought in Europe from which Marxism developed. He explains how the writings of a wide number of political conspiracists positively valued collectivism, which made it fundamentally different from the individual-rights framework of the Constitution. In relating their communalist concepts to historical precedents, such as the Cultural Revolution in China, wherein ‘forced equality’ became a government mandate, it becomes apparent how this collectivist approach, which holds no respect for individual rights, becomes a cudgel to legitimize and legalize all sorts of abuses. Marx’s relationship to Hegel and to the secret social orders organized against the kings and princes in the not-yet-united regions of Germany is described, as is the view that Universal Revolution – one which inverted the current system of values in the home, economy and nation – was necessary, and that the most strategic way to achieve it was through corruption and unseen influence.

In Chapter 4, Marx, Marxism and Revolution, Lohmeier provides a more thorough biographical account of Marx as well as a focused analysis of Marx’s writing. From the context provided in the previous chapter we see how the writings and actions of Marx, those who influenced his thought, and his contemporary comrades in political arms all viewed the individual with disdain. History is class struggle, nothing more, and the bourgeoisie that prevents the dictatorship of the proletariat from being enacted is akin to devils keeping man from reaching heaven. A few examples of how the Communist Manifesto has been used by practitioners such as Lenin and Mao to justify atrocities are cited – but the majority of the chapter is devoted to explicating in detail the rhetorical and political innovations of Marxism. Marxism is a totalitarian legitimization of social destruction and replacement with something that – due to its “collectiveness” – is claimed to be an improvement.

Chapter 5, Marx’s Many Faces, provides several historical accounts that highlight how Communist societies have treated outsiders, examples of communist infiltration into social bodies, and modern examples of this collectivist language making its way into U.S. institutions. One example of the first category is the process American servicemembers had to endure following capture in the Korean and Vietnam Wars – prolonged, torturous interrogation combined with efforts to indoctrinate. An example of the middle category – subversion – is William Montgomery Brown’s entrance, at the behest of the Communist Party, into the Episcopal church and his training to become a bishop. Such efforts, as described in Color, Communism, & Common Sense, were coordinated at a national level with guidance at the international level from the Kremlin in Soviet Russia. An example of the last category is reflected in Critical Race Theory, which is shown not only to have many of the same rhetorical and political elements as Marxist thought – but also that many of its early developers and current advocates openly avow such a worldview.

In Chapter 6, The New American Military Culture, Lohmeier’s description of how Diversity, Equity, and Inclusion (DEI) concepts and practices have been made necessary components of armed services training, and how policies linked to them have affected combat readiness, is insightful. He focuses on the way normative social values are promoted via mandatory training that teaches an iteration of ‘anti-racism’ that functions to smuggle in Marxist, revolutionary values. He highlights how “Servicemembers are allowed to support the BLM movement. They are not, however, allowed to criticize it.” (Lohmeier 121). As a personal account of the impact of such training on troop morale and retention rates, and of the way political controversy has come to inform hiring, firing, and promotions, the book is insightful. Its impact on morale (teaching as it does that people within a racially diverse unit represent oppressors and oppressed), professionalism (teaching as it does that people from historically oppressed groups should ascend professionally because of that status rather than by traditional metrics of merit), and combat readiness (teaching as it does that the U.S. is an immoral country), among other factors, is shown to be real and concerning. Accounts are shared, for example, of cadets citing the transformation of the merit-based system they elected to join into a racialized organization as the reason for their decision not to re-enlist; of a lieutenant colonel adopting the language of radical extremism and saying that if elections don’t go his way the system should be “burned to the ground”; and of lowered recruitment numbers, among other examples. Citing a 40-page June 25, 2020 policy proposal written by officers commissioned at West Point, he shows how the new demands for “racial inclusion” – influenced by Robin DiAngelo and Ibram X. Kendi – were nothing more than Marxism using racial language, mirroring the Port Huron Statement more than a document written by those supposedly trained to understand American values, i.e. individualism and meritocracy.

Chapter 7 closes the book primarily by comparing the contemporary U.S. context to historical precedents for the type of ideological warfare now running unchecked, from the civil war in Yugoslavia to the actions of the Red Guards following the Communist Party’s capture of China. Lohmeier highlights examples of laws advocated by the Democratic Socialists of America – whose worldview is influenced by Marxism – as well as interpretations of historic events such as the January 6th protests at the U.S. Capitol Building.

As a whole, my primary criticism of Lohmeier’s book concerns the descriptions of actors and networks in the U.S. that are currently involved in political and ideological activism. In the first chapter, for example, he describes how (1) materials written by an author (Hannah-Jones) who had received a fellowship to study in Cuba, (2) produced by a foundation started by Howard Zinn, and (3) promoted by a group whose roots trace to the United States Social Forum made their way into a suggested reading list for high school students and enlisted personnel. And yet there is no mention of the fact that Zinn was a founding member of the Cuban and Venezuelan-directed Networks of Artists and Intellectuals in Defense of Humanity, nor of the relationship of Black Lives Matter’s founders, executives, and elders to the World Social Forum and the Cuban and Venezuelan governments. Because of this lack of intelligence-based analysis, amorphous descriptions of the groups involved as a “potent cultural force” make them seem to be merely creatures of individual choices responding to national issues even though they are not.

Given that the book’s focus is primarily on “Marxism’s Goal of Conquest,” however, this is understandable. Assessed from this vantage point the book is a success – though I do wish more attention had been given to examples of how DEI/crypto-Marxism has impacted U.S. military culture.

Review of Global Democracy and the World Social Forums

Global Democracy and the World Social Forums (International Studies Intensives) 

by Jackie Smith, Marina Karides, Marc Becker, Dorval Brunelle, Christopher Chase-Dunn, Donatella Della Porta

Chapter One

Globalization and the Emergence of the World Social Forums

(2)

The WSF process – by which we mean the networked, repeated, interconnected, and multilevel gatherings of diverse groups of people around the aim of bringing about a more just and humane world – and the possibilities and challenges this process holds.

Civil society has been largely shut out of the process of planning an increasingly powerful global economy.

(3)

The WSF has become an important, but certainly not the only, focal point for the global justice movement. It is a setting where activists can meet their counterparts from other parts of the world, expand their understandings of globalization and of the interdependencies among the world’s people, and plan joint campaigns to promote their common aims. It allows people to actively debate proposals for organizing global policy while nurturing values of tolerance, equality, and participation. And it has generated some common ideas about other visions for a better world. Unlike the WEF, the activities of the WSF are crucial to cultivating a foundation for a more democratic global economic and political order.

(4) WSF seeks to develop a transnational political identity

The WSF not only fosters networking among activists from different places, but it also plays a critical role in supporting what might be called a transnational counterpublic. Democracy requires public spaces for the articulation of different interests and visions of desirable futures. If we are to have a more democratic global system, we need to enable more citizens to become active participants in global policy discussions.

The WSF… also provides routine contact among the countless individuals and organizations working to address common grievances against global economic and political structures. This contact is essential for helping activists share analyses and coordinate strategies, but it is also indispensable as a means of reaffirming a common commitment to and vision of “another world,” especially when day-to-day struggles often dampen such hope. Isolated groups lack information and creative input needed to innovate and adapt their strategies. In the face of repression, exclusion, and ignorance, this transnational solidarity helps energize those who challenge the structures of global capitalism.

Aided by the Internet and an increasingly dense web of transnational citizens networks, the WSF and its regional and local counterparts dramatize the unity among diverse local struggles and encourage coordination among activists working at local, national and transnational levels.

(7) WSF values protests over permanent deliberative political bodies

Depoliticization is driven by the belief that democracy muddles leadership and economic efficiency. This crisis of democracy is reflected in the proliferation of public protests and other forms of citizen political participation, which are seen by the neoliberals as resulting from excessive citizen participation in democracy.

(8) Reports claiming democratic crisis

The crisis of democracy was a diagnosis developed by political and economic elites in the 1970s, a time when the WEF was first launched. Two reports had a profound impact on how governments came to refine their relations with their citizens and social organizations in the ensuing years. The first was a report made to the Trilateral Commission in 1975 [The Crisis of Democracy: Report on the Governability of Democracies to the Trilateral Commission] and the second was a 1995 Commission on Global Governance Report.

(10-11) Pre-WSF summits propose radical recommendations to empower NGOs via UN

The growing participation of civil society organizations in UN-sponsored conferences reflected the need for some form of global governance in an increasingly interlinked global economy.

…it was the second Earth Summit in 1992 that revealed the difficulties besetting world governance and eventually led to the Commission on Global Governance. The commission report, Our Global Neighborhood, acknowledged that national governments had become less and less able to deal with a growing array of global problems. It argued that the international system should be renewed for three basic reasons: to weave a tighter fabric of international norms, to expand the rule of law worldwide, and to enable citizens to exert their democratic influence on global processes. To reach these goals, the commission proposed a set of “radical” recommendations, most notably the reform and expansion of the UN Security Council, the replacement of ECOSOC by an Economic Security Council (ESC), and an annual meeting of a Forum of Civil Society that would allow the people and their organizations, as part of “an international civil society,” to play a larger role in addressing global concerns.

(12) WSF on the need to influence transnational corporations

The report also stated that global governance cannot rest on governments or public sector activity alone, but should rely on transnational corporations – which “account for a substantial and growing slice of economic activity.”

(13) WSF as a form of Revolutionary Rupture

The WSF… grows from the work of many people throughout history working to advance a just and equitable global order. In this sense, it constitutes a new body politic, a common public space where previously excluded voices can speak and act in plurality. …we propose to see the WSF not as the logical consequence of global capitalism but rather as the foundation for a new form of politics that breaks with the historical sequence of events that led to the dominance of neoliberal globalization.

(14) WSF Precursors were a Communist Uprising and Anarchist/Communist Direct Action

The WSF is a culmination of political actions for social justice, peace, human rights, labor rights, and ecological preservation that resist neoliberal globalization and its attempts to depoliticize the world’s citizens.

More than any other global actions or transnational networking, the Zapatista uprising in Chiapas, beginning January 1, 1994, and the anti-WTO protests in Seattle in November 1999 were perhaps the most direct precursors to the WSF.

(16) Convergence of interests between Western Environmentalists and Unionists

…in the global north, or the rich Western countries, citizens were organizing around a growing number of environmental problems. Environmentalists and unionists joined forces with each other, and across nations, to contest proposed international free trade agreements, such as the North American Free Trade Agreement (NAFTA) and the Multilateral Agreement on Investment.

(17) Rapid rise in the number of Transnational Organizations

…between the early 1970s and the late 1990s, the number of transnationally organized social change groups rose from less than 200 to nearly 1,000…

Growth of a Transnational Political Identity, Erosion of National Identity

These groups were not only building their own memberships, but they were also forging relationships with other nongovernmental actors and with international agencies, including the United Nations. In the process, they nurtured transnational identities and a broader world culture.

(18) National economic and political agreements supersedes the UN

Many environmental and human rights agreements were being superseded by the WTO, which was formed in 1994 and which privileged international trade law over other international agreements. Agreements made in the UN were thus made irrelevant by the new global trade order, in which increasingly powerful transnational corporations held sway.

(19) Organizational mode originated in Latin America

The model of the “encuentro,” a meeting that is organized around a collectivity of interests without hierarchy, on which the Zapatistas and later the WSF process built, emerged from transnational feminist organizing in Latin America.

The events of Chiapas and Seattle reflect not simply resistance to globalized capitalism; rather, they were catalysts for a new political dynamic within the global landscape.

Chapter Two: What are the World Social Forums?

(27) WSF is an Organizational Apparatus which claims to be without Leadership

The WSF was put forward as an “open space” for exchanging ideas, resources, and information; building networks and alliances; and promoting concrete alternatives to neoliberal globalization. Both open space and networks are organizational concepts used by the global justice movement to ensure more equitable participation than occurs, for instance, in traditional political parties and unions.

The WSF process emphasizes “horizontality,” to increase opportunities for grassroots participation among members rather than promote “vertical” integration where decisions are made at the top and reverberate down.

(28) WSF as a Segmented, Polycentric, Ideological Network

Since its first meeting in Porto Alegre, Brazil, in 2001, the WSF has reflected a networking logic prevalent within contemporary social movements and global justice movements in particular. Facilitated by new information technologies, and inspired by earlier Zapatista solidarity activism and anti-free trade campaigns, global justice movements emerged through the rapid proliferation of decentralized network forms. New Social Movements (NSM) theorists have long argued that in contrast to the centralized, vertically integrated, working-class movements, newer feminist, ecological, and student movements are organized around flexible, dispersed, and horizontal networks.

(29) Technology links Transnational Activist Planning and Actions

New information technologies have significantly enhanced the most radically decentralized network configurations, facilitating transnational coordination and communication.

Rather than recruiting, the objective becomes horizontal expansion through articulating diverse movements within flexible structures that facilitate coordination and communication.

Self-Identification as Networking Organizations

Considerable evidence of this cultural logic of networking is found among organizations participating in the European Social Forum (ESF). About 80 percent of sampled organizations mention collaboration and networking with other national and transnational organizations as a main raison d’être of their groups. Many groups also emphasize the importance of collaboration with groups working on different issues but sharing the same values. Some groups even refer to themselves as network organizations.

Furthermore, research at the University of California–Riverside found that “networking” as a reason for attending the 2005 WSF was more common among nonlocal respondents (60 percent) than among Brazilian respondents.

(30) Social Movement Interaction defines Identity

Struggles within and among different movement networks shape how specific networks are produced, how they develop, and how they relate to others within broader social movement fields.

(31) WSF provided Marxists opportunity to network with NGOs

The WSF provided an opportunity for the traditional left (verticals), including many reformists, Marxists, and Trotskyists, to claim a leadership role within an emerging global protest movement…

(32) The WSF process conceived of as a decentralized organization

The WSF is not an actor that develops its own programs or strategies, but rather it provides an infrastructure within which groups, movements, and networks of like mind can come together, share ideas and experiences, and build their own proposals or platforms for action. The forum thus helps actors come together across their differences, while facilitating the free and open flow of information.

(35) The WSF is NOT a decentralized organization

A system and hierarchy persist within the forum itself.

(38) The WSF is NOT a decentralized organization

The WSF does have its pyramids of power. April Biccum, for example, contends that it would be naïve to assume “that the open space is space without struggle, devoid of politics and power.”

Many of the grassroots activists have criticized the International Council, as well as local and regional organizing committees, for acting precisely as a closed space of representation and power, limited to certain prominent international organizations and networks with access to information and sufficient resources to travel.

Chapter Three

Who Participates in the World Social Forums?

(49) Participatory Democracy

Social change requires that civil society influence other social actors, such as political parties and government officials; some participants therefore believe that involving such actors in the WSF is essential.

(54) Network composition

The single largest group of WSF participants – 36 percent – are intellectuals and professionals.

(56) WSF as a long term strategy for political change

The WSF’s desire and demand for a greater role for civil society in economic decision-making cannot quickly overcome the structural barriers that the global economy places on workers.

(62) Roughly twenty percent of attendees are active in the socialist or communist movements

The majority of respondents in this sample expressed support for abolishing and replacing capitalism… 14 percent identified as active in the socialist movement, 5 percent claimed active involvement in the communist movement, and 3 percent in the anarchist movement.

(64) Political Parties play a role in the Forum

Despite their formal exclusion from the WSF, representatives of political parties and governments have played important and visible roles within it, as well as in local and regional forums. Some WSF participants are also beginning to explore issues surrounding the formation of global political parties.

(69) Latin American Leftist Parties Play Supporting Role in the WSF

Depending on the political context, parties can have very different roles and relationships within the social forum process. In many South American countries, including Argentina, Brazil, Venezuela, Chile, Uruguay, and Bolivia, leftist parties are on the rise, and many activists see them as vehicles for social change, even though these activists might be critical of the parties’ policies.

(70) WSF and link to Brazil’s Workers’ Party

As the WSF grew, the necessity of its reliance on traditional state structures became increasingly apparent.

Lula announced that he was leaving the World Economic Forum “to demonstrate that another world is possible; Davos must listen to Porto Alegre.”

(72) Hugo Chavez’s positive assessment of the WSF

Chavez consciously contrasted his reception, his political positions, and the situation in Venezuela with those of his last visit in 2003. He also spoke of his dream for a unified Latin America, now made more real with leftist presidents Nestor Kirchner and Tabare Vasquez in power in Argentina and Uruguay. Chavez noted that the WSF was the most important political event in the world. Venezuelans, he noted, “are here to learn from other experiments.”

(73) WSF Advocates seek to link it to political parties

Other activists also argue for a closer relationship between social movements and political parties. According to Chris Nineham and Alex Callinicos, “it was a mistake to impose a ban on parties, since political organizations are inextricably intermingled with social movements and articulate different strategies and visions that are a legitimate contribution to the debates that take place in the social forum”. Bernard Cassen, a promoter of the WSF, argues pointedly: “We can no longer afford the luxury of preserving a wall between elected representatives and social movements if they share the same global objectives of resisting neoliberalism. With due respect for the autonomy of the parties involved, such wide cooperation should become a central objective of the Forums.”

(74-75) WSF as an Incubator for a Global [Socialist] Party

Because of the negative connotations associated with political parties, some activists and intellectuals seeking to challenge the power of global governance institutions are calling for the creation and spread of new kinds of political agency, such as “political cooperatives” or “political instruments”. Yet political parties can take multiple forms, and it is important not to reproduce false dichotomies between civil society organizations and parties given the historically variable and complex interrelationships between them.

…many activists in other parts of the world have actively contributed to the rise of socialist parties, and worked both inside and outside of electoral and legislative arenas to pursue social change.

Horizontally organized groups who are coordinating their political activities transnationally can be thought of as global networks, or even as world parties.

Since the nineteenth century, nonelites have organized world parties.

The contemporary efforts by activists to overcome cultural differences; deal with potential and actual contradictory interests among workers, women, environmentalists, consumers, and indigenous peoples; and solve other problems of the north and south need to be informed by both the failures and the successes of these earlier struggles.

…the forum process does not preclude subgroups from organizing new political instruments, and there seems to be an increasing tendency for more structured and coordinated global initiatives to emerge from the forum process.

(76) WSF as a means for preparing activists to influence political institutions

While there may have been a number of criticisms made of the WSF by activists, many see the WSF as an important instrument for preparing the public to participate actively within, and influence the decisions of, such institutions.

Smith argues that the WSF is a “foundation for a more democratic global polity,” since it enables citizens of many countries to develop shared values and preferences, to refine their analysis and strategies, and to improve their skills at international dialogue.

Chapter Four

Reformism or Radical Change: What Do World Social Forum Participants Want?

(79) Forum functions to bring together anti-capitalists

Despite disagreement on many issues, most participants share certain fundamental points in common, most notably the desire to help people take back democratic control over their daily lives. Whether this should happen through the destruction of the capitalist system, as radicals would argue, or through regulating the global economy, as reformists would contend, is a matter of intense debate.

(81-84) Four types of WSF participants

We divide forum participants into four political sectors: institutional movements, traditional leftists, network-based movements, and autonomous movements. These categories help provide a road map, but in practice the distinctions we make are more fluid and dynamic than presented here.

Institutional actors operate within formal democratic structures, aiming to establish social democracy or socialism at the national or global level. This sector primarily involves political parties, unions, and large NGOs, which are generally reformist in political orientation, vertically structured, and characterized by representative forms of participation, including elected leaders, voting, and membership.

Traditional leftists include various tendencies on the radical left, including traditional Marxists and Trotskyists, who identify as anti-capitalist, but tend to organize within vertical organizations where elected leaders rather than a wider range of members make organizational decisions.

Network-based movements involve grassroots activists associated with decentralized, direct action-oriented networks… [and] are often allied with popular movements in both the global north and south that have a strong base among poor people and people of color.

Autonomous movements mainly emphasize local struggles. They also engage in transnational networking, but their primary focus is on local self-management. …[They] stress alternatives based on self-management and directly democratic decision-making. Projects include squatting land and abandoned buildings, grassroots food production, alternative currency systems, and creating horizontal networks of exchange.

(85) Majority of WSF Participants want to Abolish Capitalism Rather than Reform It

Results from the 2005 University of California–Riverside survey indicate that a majority of all respondents (54 percent) expressed the belief that capitalism should be abolished and replaced with a better system, rather than reformed.

(87) UN as a Constitutional Model to be Adopted by Revolutionaries

For many, the United Nations could provide an alternative institutional arrangement that is more democratic and more responsive to the needs of local governments and communities around the world… In this sense, current global political and economic institutions would be abolished and replaced with new kinds of thoroughly democratized institutions.

(89) WSF attendees desire a democratic world government

According to the survey of participants at the 2005 WSF, the majority of respondents (68 percent) think that a democratic world government would be a good idea.

Chapter Five

Global or Local: Where’s the Action?

(108) WSF as Coordinator for Transborder Activism

Shortly after the first WSF in 2001, activists working at local and regional levels began organizing parallel forums that made explicit connections to the WSF process.

The networking among activists and the shared understandings that arise from transborder communications and a history of transnational campaigning allow action taking place at multiple sites and scales to contribute to a more or less harmonious global performance.

(115) WSF forums are a small part of activism

Although activists actually gather for just a few days, they work together for many months prior to the event, and many also follow up their participation in the WSF process by launching new campaigns, developing new organizational strategies, or joining new coalitions. The real action of the social forums is not in the meeting spaces themselves – although these gatherings are very important. Rather, the WSF process ripples into the ongoing activities of the individuals and organizations that are part of the process.

(116) WSF enables Collaboration for Transnational Actions

As people gain more experience with the forums, they have learned to make better use of the networking possibilities therein. Indeed, more people are beginning to use the process to launch new and more effective conversations and brainstorming sessions about how to improve popular mobilizing for a more just and peaceful world.

(123) WSF as a process of idea dissemination

The process itself represents a collection of political and economic activities that have much broader and deeper significance. Understanding the impacts of the process requires that we consider the wider effects of the formulation of new relationships at forum events. It also requires that we see how the new ideas are dispersed within a context that supports and celebrates the unity in core values among the diverse array of forum participants.

(128) WSF helps strengthen transnational networks

Problematically, existing political systems provide no real space for citizens to engage in thoughtful and informed debate about how the global political and economic system is organized.

The WSF process contributes to the strengthening of transnational networks of activists that allow the stories of people in different countries to flow freely across highly diverse groups of people.

(129) WSF as site of knowledge transformation and network growth

By coming together in spaces that are largely autonomous from governments and international institutions such as the UN, activists have helped foster experimentation in new forms of global democracy, encouraging the development of skills, analyses, and identities that are essential to a democratic global polity… These activities model a vision of the world that many activists in global justice movements hope to spread.

Chapter Six

Conclusion: The World Social Forum Process and Global Democracy

(133)

The WSF process seems to have responded to these tensions by becoming what might be called a form of polycentric governance, or a transborder political body with an organizational architecture that remains fluid, decentralized, and ever evolving…

Because the WSF is a process rather than an organization or an event, it is by intention malleable in ways other international bodies, like the United Nations, are not.

About the Authors

Jackie Smith is a professor in the Department of Sociology at the University of Pittsburgh. She is also the editor of the Journal of World-Systems Research. Smith’s most recent books include Social Movements in the World-System: The Politics of Crisis and Transformation, with Dawn Wiest; Handbook on World Social Forum Activism, co-edited with Scott Byrd, Ellen Reese, and Elizabeth Smythe; Globalization, Social Movements and Peacebuilding, co-edited with Ernesto Verdeja; and Social Movements for Global Democracy (2008).

Marina Karides is an assistant professor of sociology at Florida Atlantic University. She is an active participant in the World Social Forums and Sociologists Without Borders. Her recent work considers gendered dimensions of globalization and the global justice movement. She has published articles in Social Problems, Social Development Issues, and International Sociology and Social Policy, as well as multiple chapters that critically examine microenterprise development and the plight of informally self-employed persons in the global south. She is currently writing a book on street vendors and spatial rights in the global economy.

Marc Becker teaches Latin American History at Truman State University. His research focuses on constructions of race, class, and gender within popular movements in the South American Andes. He has a forthcoming book on the history of indigenous movements in twentieth-century Ecuador. He is an Organizing Committee member of the Midwest Social Forum (MWSF), a Steering Committee member and web editor for Historians Against the War (HAW), and a member of the Network Institute for Global Democratization (NIGD).

Christopher Chase-Dunn is Distinguished Professor of Sociology and Director of the Institute for Research on World-Systems at the University of California–Riverside. Chase-Dunn is the founder and former editor of the Journal of World-Systems Research and author most recently of Social Change: Globalization from the Stone Age to the Present (Paradigm 2013).

Donatella della Porta is professor of sociology at the European University Institute. Among her recent publications are Globalization from Below (2006); Quale Europa? Europeizzazione, identità e conflitti (2006); Social Movements: An Introduction, Second Edition (2006); and Transnational Protest and Global Activism.