Notes from The War of All the People: The Nexus of Latin American Radicalism and Middle Eastern Terrorism

The War of All the People: The Nexus of Latin American Radicalism and Middle Eastern Terrorism

 

by Jon B. Perdue, Stephen Johnson

Jon B. Perdue is the author of The War of All the People: The Nexus of Latin American Radicalism and Middle Eastern Terrorism, published by Potomac Books in August 2012. Mr. Perdue was also the editor and wrote the foreword to the book Rethinking the Reset Button: Understanding Contemporary Russian Foreign Policy by former Soviet Central Committee member and defector Evgeni Novikov. He also contributed a chapter to the book Iran’s Strategic Penetration of Latin America (Lexington Books, 2014).

Perdue also serves as an instructor and lecturer on peripheral asymmetric warfare, strategic communication and counterterrorism strategy. He is credited with coining the term “preclusionary engagement,” a counterterrorism strategy that focuses on combined, small-unit operations conducted with a much smaller footprint prior to, or in the early stages of, a conflict with a threatening enemy, in order to preclude the need for much larger operations, which are far more costly in both money and casualties once a conflict has escalated for lack of forceful resistance.

Mr. Perdue’s articles have been published in the Washington Times, Investor’s Business Daily, the Miami Herald, the Atlanta Journal-Constitution and a number of newspapers in Latin America. Perdue served as an international election observer in the historic elections in Honduras in 2009 and as an expert witness in a precedent-setting human rights trial in Miami-Dade Circuit Court in 2010. He has served as a security analyst for NTN24, a Latin America-based satellite news channel, and CCTV, a 24-hour English-language news channel based in China.

For most of the past decade Mr. Perdue has served as the Director of Latin America programs for the Fund for American Studies in Washington, DC, and as a Senior Fellow for the Center for a Secure Free Society. He also serves on the boards of the Americas Forum in Washington, DC and the Fundación Democracia y Mercado in Santiago, Chile. He has worked unofficially on three presidential campaigns, contributing foreign policy and counterterrorism policy advice.

Preface

As Edward Gibbon hypothesized, Rome, despite its greatness and the quantum leap in human achievement and prosperity that it wrought, fell after being pushed – but it requires little force to topple what has already been hollowed from within. Rome fell when Romans lost the desire and the ability to defend it.

The American republic has survived the buffeting winds of war and governmental caprice to stand as the sole remaining superpower. Its principal threat is no longer from rival nation-states but from a multitude of smaller subversions.

As the military strategist Bernard Brodie noted, “good strategy presumes good anthropology and sociology. Some of the greatest military blunders of all time have resulted from juvenile evaluations in this department.”

What still challenges the United States today is the pervasive lack of seriousness that prevents those agencies tasked with defending the homeland from being able to even name the enemy that we face. It illustrates a failure of will to claim the legitimacy that we have sacrificed so much to attain and an infectious self-consciousness that has no basis in realpolitik.

More than any failed strategy or improper foreign policy, it is this American self-consciousness that is the topsoil for the growth of anti-American terrorism worldwide.

It is foolhardy to allow our enemies to paralyze our will to fight by defining American foreign policy as some new form of imperialism or hegemony. The desire for human freedom, lamentably, is not an expansionist impulse.

Introduction

“The War of All the People” is the doctrine of asymmetrical and political warfare that has been declared against the United States, Western civilization, and most of the generally accepted tenets of modernity. At its helm today are Hugo Chavez of Venezuela and Mahmoud Ahmadinejad of Iran – two self-described “revolutionary” leaders hell-bent on the destruction of capitalism and what they call “U.S. hegemony” throughout the world.

In October 2007 the two announced the creation of a “global progressive front” in the first of a series of joint projects designed to showcase “the ideological kinship of the left and revolutionary Islam.” Ahmadinejad would promote the theme on state visits to Venezuela, Nicaragua, and Bolivia, highlighting what he called “the divine aspect of revolutionary war”.

Declaring his own war against “imperialism,” Chavez aims to supplant U.S. dominance in the hemisphere with so-called 21st Century Socialism.

(2)

The Castro regime adopted the War of All the People doctrine from Viet Minh general Vo Nguyen Giap, who began publishing the military theories of Ho Chi Minh along with his own (much of it adapted from the theories of Mao Zedong) in the 1960s.

Giap’s most thorough examination of the tenets of a “people’s war” was put forth in his book To Arm the Revolutionary Masses: To Build the People’s Army, published in 1975.

(8)

What makes the current threat different is its stealthy, asymmetrical nature. The doctrine has been adapted to avoid the missteps made during the days of Soviet expansionism and has instead focused on the asymmetrical advantages that unfree states enjoy over free ones. While the United States enjoys a free press, it has no equivalent to the now-globalized state-run propaganda operations that unfree states utilize to attack the legitimacy of free ones.

…oil-rich states like Venezuela and Libya have been able to leverage their petrodollars to buy influence in those organizations and by corrupting weaker states to do their bidding on the world stage. These regimes have also formed new alliances around “revolutionary” and “anti-imperialist” ideology in order to coordinate their efforts against the ideals of the West.

(10)

Peripheral warfare conducted by Chavez also includes the use of “ALBA houses,” ostensible medical offices for the poor that serve as recruitment and indoctrination centers for his supporters in neighboring countries… ALBA houses are modeled on Cuba’s Barrio Adentro program, which it has utilized for years to infiltrate spies and agitators into neighboring countries under the guise of doctors, coaches and advisers to help the poor… What is given up by ignoring a tyrant’s provocations is the ability to actively prevent the incremental destruction of democratic institutions that solidify his power.

(16)

There exists a mistaken view of the interactions between disparate extremist organizations and terrorist groups internationally. This “burqa-bikini paradox” – the premise that culturally or ideologically distinct actors couldn’t possibly be cooperating to any significant degree – has frequently been the default position of journalists, the diplomatic community, and even some in the intelligence community.

Douglas Farah, a former Latin America correspondent for the Washington Post and now a senior fellow at the International Assessment and Strategy Center, challenged this premise at a December 2008 Capitol Hill briefing titled “Venezuela and the Threat It Poses to U.S. and Hemispheric Security”:

These lines that we think exist where these groups like Iran – well they’re a theocracy, or Hezbollah, they’re religiously motivated, they won’t deal with anyone else – bullshit! They will deal with whoever they need to deal with at any given time to acquire what they want…. And the idea that someone won’t deal with Hezbollah because they don’t like their theology is essentially horseshit. You can document across numerous times and numerous continents where people of opposing views will do business together regardless of ideology or theology.

(17)

It is no stretch of logic to surmise that terrorist groups are the natural allies of authoritarian regimes. But throughout the 1970s and ‘80s, there was a battle in Washington between those who believed that the Soviet Union was complicit in terrorism and those who maintained that the Soviets eschewed it as a tactic. The official policy of the Soviets during the Cold War was to declare its opposition to terrorism while unofficially supporting and supplying proxy terrorist groups. But in 1970 Moscow had grown bold enough to train terrorists to overthrow the Mexican government and set up a satellite totalitarian state just across the U.S. border.

(20)

Carlos (the Jackal) “was given a staff of 75 to plot further deaths and provided with guns, explosives and an archive of forged papers” by the East Germans. He was provided with safe houses and East German experts to ensure that his phones were not bugged, and even his cars were repaired by the Stasi.

(21)

The recovery of Stasi files had proven that the extent of Soviet bloc involvement in terrorism was far greater than even the CIA and other security agencies had considered. Throughout the Cold War, much of the conventional media and the foreign policy establishment often dismissed reports that the Soviets were sponsoring international terrorism or that the Marxist terrorists of Europe might be intermingling with Maoists in Latin America.

Some analysts and scholars referred to the writings of Karl Marx and Lenin to show that they, and hence the Soviets, were ideologically opposed to terrorism… This and other tenets of Marxist-Leninist theory were often used to claim an ideological aversion to Soviet terror sponsorship.

(22)

In 1916 Lenin wrote to Franz Koritschoner, one of the founders of Austria’s Communist Party, telling him that the Bolsheviks “are not at all opposed to political killing… but as revolutionary tactics individual attacks are inexpedient and harmful. Only the mass movement can be considered genuine political struggle. Only in direct immediate connection with the mass movement can and must individual terrorist acts be of value.”

(24-25)

Soviet Use of Communist Party Front Groups in the United Nations

The CPSU’s International Department was tasked with controlling the policy of the world communist movement. From 1955 to 1986, Boris Ponomarev was the chief of this department, which became the premier Soviet agency for fomenting and supporting international terrorism.

Under Ponomarev, the CPSU founded the Lenin Institute, which trained communists from Western and Third World countries in psychological warfare, propaganda, and guerrilla warfare. Seeing the potential of “liberation movements” and “anti-imperialist” movements as proxy forces against the West, the CPSU also founded in 1960 the Peoples’ Friendship University (renamed Patrice Lumumba University in 1961) to train “freedom fighters” from the Third World who were not Communist Party members.

The International Department was also in charge of setting up front groups and nongovernmental organizations (NGOs) that could advocate by proxy for Soviet aims at the United Nations (UN) and in other international bodies. According to a U.S. House of Representatives Subcommittee on Oversight report of February 6, 1980, Soviet subsidies to international front organizations exceeded $63 million in 1979 alone.

The report noted that the KGB and the Central Committee “actively promote” the UN imprimatur of the NGO front groups. The International Department controlled the NGOs and held coordinated meetings twice a year, and an official of the Soviet journal Problems of Peace and Socialism (also known as World Marxist Review) would always attend.

According to the report, Anatoly Mkrtchyan, the Soviet director of the External Relations Division of Public Information, was in charge of the NGO section.

Source: Juliana Geran Pilon, “At the U.N., Soviet Fronts Pose as Nongovernmental Organizations,” Heritage Foundation

https://www.heritage.org/global-politics/report/the-un-soviet-fronts-pose-nongovernmental-organizations

(32)

After Arafat started the First Intifada in 1987, both the Soviet Union and Cuba increased military support to the Palestinians, often portraying U.S. and Israeli actions in the Middle East as hegemonic aggression against unarmed Palestinian victims.

In 1990 Havana sent assistance to Iran following an earthquake, and Iran started buying biotechnology products from Cuba. In the late 1990s Castro made a number of bilateral agreements with Iran, and several high-level delegations from Iran made trips to Cuba.

(33)

In 1962, the CPSU helped to establish the Paris-based Solidarite terrorist support network that was masterminded by Henri Curiel. Curiel was an Egyptian communist born to an Italian Jewish family who ran a highly successful clandestine organization providing everything from arms to safe houses to actionable intelligence for terrorist groups from Brazil to South Africa.

In 1982 a U.S. National Intelligence Estimate stated that Curiel’s Solidarite “has provided support to a wide variety of Third World leftist revolutionary organizations,” including “false documents, financial aid, and safehaven before and after operations, as well as some illegal training in France in weapons and explosives.”

Besides the direct support and training of terrorists, the Soviets made ample use of front groups that posed as religious organizations, academic institutions, or human rights advocates. A 1980 CIA report titled Soviet Covert Action and Propaganda stated:

At a meeting in February 1979 of World Peace Council (WPC) officials, a resolution was adopted to provide “uninterrupted support for the just struggle of the people of Chile, Guatemala, Uruguay, Haiti, Paraguay, El Salvador, Argentina and Brazil.” Without resort to classified information, from this one may logically conclude that the named countries are targets for Soviet subversion and national liberation struggles on a continuing basis. One might interpret “uninterrupted support for the just struggle” to mean continuing financial and logistic support to insurrection movements.

(34)

A former senior GRU officer confirmed this when he made the following statement:

…“If I give you millions of dollars’ worth of weapons, or cash, I have a small right to expect you to help me. I won’t tell you where to place the next bomb, but I do expect to have a little influence on your spheres of action. And if someone later arrests an Irishman, he can honestly say that he never trained in the Soviet Union. And he still believes he is fighting for himself.”

(38)

The point that Colby and Sterling were making was that the Soviets supported terrorist groups as proxy forces, specifically to retain the appearance of distance from their activities. The more important point was that international terrorist groups would have been far less prodigious, and far less deadly, without the support that they received from the Soviet Union and its satellite states…. The Soviet aspect could be seen as giving these groups a “do-it-yourself kit for terrorist warfare.”

(41)

According to [Former Secretary of Defense Robert] Gates, “We would learn a decade later that [CIA analysis] had been too cautious. After the communist governments in Eastern Europe collapsed, we found out that the Eastern Europeans (especially the East Germans) indeed not only had provided sanctuary for West European ‘nihilist’ terrorists, but had trained, armed and funded many of them.”

(49)

She (Leila Khaled) has also been a regularly scheduled speaker at the World Social Forum.

On May 26, 1971, Khaled told the Turkish newspaper Hürriyet that:

The Popular Front for the Liberation of Palestine (PFLP) sends instructors to Turkey in order to train Turkish youth in urban guerrilla fighting, kidnapping, plane hijackings, and other matters… In view of the fact that it is more difficult than in the past for Turks to go and train in PFLP camps, the PFLP is instructing the Turks in the same way as it trains Ethiopians and revolutionaries from underdeveloped countries. The PFLP has trained most of the detained Turkish underground members.

Within ten years, terrorist attacks in Turkey would be killing an average of nine to ten people per day.

Source: Sterling, Terror Network

(52)

The Baath Party’s founders were educated at the Sorbonne in Paris, where, incidentally, an inordinate number of the world’s former dictators were schooled. Commenting on this phenomenon, Egyptian journalist Issandr Elamsani said that Arab intellectuals still see the world through a 1960s lens: “They are all ex-Sorbonne, old Marxists, who look at everything through a postcolonial prism.”

The Sorbonne in the 1960s was one of the intellectual centers of radical political science. In the tradition of the Jacobins, it offered a pseudo-intellectual foundation for end-justifies-means terrorism, which many of its graduates – among them Cambodian dictator Pol Pot, Peruvian terrorist leader Abimael Guzmán, intellectual arbiter of the Iranian revolution Ali Shariati, and Syrian Baathist Michel Aflaq – would use to justify mass murder.

(60-61)

Aleida Guevara, the daughter of Che, made a trip to Lebanon in 2010 to lay a wreath on the tomb of former Hezbollah leader Abbas al-Musawi. At the ceremony, she echoed [Daniel] Ortega’s sentiments, saying, “I think that as long as [the martyr’s] memory remains within us, we will have more strength, and that strength will grow and develop, until we make great achievements and complete our journey to certain victory.” Guevara later told supporters while visiting Baalbek, “If we do not conduct resistance, we will disappear from the face of the earth.” To make sure that the international press understood the subtext, Hezbollah’s official in the Bekaa Valley said, “We are conducting resistance for the sake of liberty and justice, and to liberate our land and people from Zionist occupation, which receives all the aid it needs from the U.S. administration.”

Though Guevara was parroting what has become standard rhetoric among revolutionaries in all parts of the world, her visit had the potential to become controversial. Just three years earlier, in 2007, she and her brother Camilo had visited Tehran for a conference that was intended to emphasize the “common goals” of Marxism and Islamist radicalism.

Titled “Che Like Chamran,” the conference was a memorial to the fortieth anniversary of Che Guevara’s death, which happened to coincide with the twenty-sixth anniversary of the death of Mostafa Chamran. Chamran, a radical Khomeinist who founded the Amal terrorist group in Lebanon, went to Iran in 1979 to help the mullahs take over and died in 1981 in the Iran-Iraq War (or, according to some, in a car accident).

Speaker Mortaza Firuzabadi, a Khomeinist radical, told the crowd that the mission of both leftist and Islamist revolutionaries was to fight America “everywhere and all of the time,” adding, “Our duty is to the whole of humanity. We seek unity with revolutionary movements everywhere. This is why we have invited the children of Che Guevara.”

…He ended his speech with an entreaty to all anti-American revolutionaries in the world to accept the leadership of Iran’s Mahmoud Ahmadinejad and his revolutionary regime.

Qassemi returned once again to the podium at this point. “The Soviet Union is gone,” Qassemi declared. “The leadership of the downtrodden has passed to our Islamic Republic. Those who wish to destroy America must understand the reality.”

Though it has been treated as a rarity by much of the Western media, collaboration between radical groups that might appear to have little in common has included joint operations of far-right, fascist, and neo-Nazi groups with far-left, Marxist, and Islamist groups. These collaborations go back to well before World War II.

The widespread misconception that a philosophical or religious wall of separation exists between the extremist ideological movements of the world is not only demonstrably false, it is highly detrimental to a proper analysis of the terrorist threat and to the public’s understanding of counterterrorism efforts. This myth has served well the forces of subversion.

The small subset of the population that is drawn to extremist movements is not limited to those who profess the same or a similar ideology but instead includes those who tend to seek personal fulfillment from extremism itself. Ideology can be quite malleable when militants see an opportunity to take advantage of the popularity of a more militant group, regardless of any ideological differences between them. In fact, these groups have often found common cause soon after seeing a rival group begin to dominate international headlines.

(64-65)

One of the principal objectives of a terrorist attack that is often overlooked is the expected overreaction of the state in response to the threat.

Feltrinelli’s thesis, like those of many terrorist theorists before and after, was that this would bring “an advanced phase of the struggle” by forcing “an authoritarian turn to the right”.

Feltrinelli is emblematic of the ideologically itinerant radicals who wreaked havoc in the 1960s and 1970s in Europe, Latin America, and the Middle East. He was a close friend of Fidel Castro’s, attended the Tricontinental Conference in 1966, and published its official magazine, Tricontinental, in Europe after the event… (he) began wearing a Tupamaros uniform on his return to Italy. There Feltrinelli built his own publishing empire, flying to Moscow to secure the publishing rights to Boris Pasternak’s Dr. Zhivago and publishing Giuseppe Tomasi di Lampedusa’s bestseller The Leopard, … The profit from these blockbusters allowed him to fill bookstores throughout Italy with radical manifestos and terrorist literature.

On March 15, 1972, the police found Feltrinelli’s body in pieces at the foot of a high-voltage power line pylon. He had been placing explosives on the pylon with a group of fellow terrorists when one of his own explosives detonated accidentally.

(71)

According to the Aryan Nations’ website, the premise that could bridge the ideological gap between these ostensibly disparate worldviews is that Muslims are of the same “Aryan” lineage. This view was not hard to concoct. Adolf Hitler’s minister of economics, Hjalmar Schacht, had professed a similar theory, one first promoted by King Darius the Great: that the Persian bloodline was of Aryan lineage. This, Schacht argued, made the Persians – and therefore, somehow, all Muslims – the natural allies of Hitler’s vision of a superior Aryan race that should rule the world.

(72-73)

The rise of the Third Reich became a rallying point for many Muslim leaders, who fostered a bit of Muslim mythmaking by claiming that both Hitler and Mussolini were closet Muslims. One rumor had it that Hitler had secretly converted to Islam and that his Muslim name was Hayder, translated as “the Brave One.” Mussolini, the rumors told, was really an Egyptian Muslim named Musa Nili, which translates as “Moses of the Nile.”

As far back as 1933, Arab nationalists in Syria and Iraq were supporting Nazism.

Arab support for Hitler was widespread by the time he rose to power. And when the Nazis announced the Nuremberg Laws in 1935 to legalize the confiscation of Jewish property, “telegrams of congratulations were sent to the führer from all over the Arab and Islamic world.”

It was Germany’s war against the British Empire that motivated much of the early support for the Nazi regime. Hitler was, after all, fighting the three shared enemies of Germany and the Arab world at the same time: Zionism, communism, and the British Empire.

After World War II, many German officers and Nazi Party officials were given asylum in the Middle East, mostly in Syria and Egypt, where they were utilized to help set up clandestine services throughout the region – this time in support of many of the anticolonialist forces fighting the British and French.

(74)

Ronald Newton, a Canadian academic who wrote The Nazi Menace in Argentina, 1931-1947… His thesis was that the tales of Nazi-fascist settlement in Argentina were the result of British disinformation, designed to thwart postwar market capture of Argentina by the United States. The theory was refuted in 1998 after Argentine president Carlos Menem put together a commission to study the issue.

(85)

The stated aim of right-wing extremist groups had always been to bring down the liberal democratic state model and bring about a national socialist or fascist state. But that ideology began to devolve in the 1980s as neo-Nazi groups started to see the fame and legitimacy afforded to left-wing terrorist groups that were committing far more violent acts and seemed to be rewarded proportionately.

Two years after Palestinian terrorists killed eleven Israeli team members at the Munich Olympics in 1972, PLO chairman Yasser Arafat was invited by the United Nations to address its General Assembly, and the PLO was awarded UN observer status shortly after that. Moreover, by the 1980s the PLO had been accorded diplomatic relations with more countries than Israel had.

(91)

In 1969, Qaddafi became the chief financier of terrorism of every stripe throughout the world. And though he became known as the principal donor to worldwide leftist groups, he began his terrorist franchise with those of the extreme right.

(93)

In his book Revolutionary Islam, Carlos tried to join the two strongest currents of revolutionary terror, declaring that “only a coalition of Marxists and Islamists can destroy the United States.”

Carlos’s book would be little noticed until Hugo Chavez, speaking to a gathering of worldwide socialist politicians in November 2009, called him an important revolutionary fighter who supported the Palestinian cause. Chavez said during his televised speech that Carlos had been unfairly convicted and added, “They accuse him of being a terrorist, but Carlos really was a revolutionary fighter.”

(97)

“There is a revolution going on in Venezuela, a revolution of an unusual kind – it is a slow-motion revolution.” Thus declared Richard Gott in an interview with Socialist Worker on February 12, 2005. Gott, a British author and ubiquitous spokesman for all things Chavez and Castro, is not the first to note the nineteenth-century pedigree of Chavez’s 21st Century Socialism.

The incremental implementation of socialism was the dream of the Fabian Society, a small but highly influential political organization founded in London in 1884… The logo of the Fabian Society, a tortoise, represented the group’s predilection for a slow, imperceptible transition to socialism, while its coat of arms, a “wolf in sheep’s clothing,” represented its preferred methodology for achieving its goal.

(98)

In a 1947 article in Partisan Review, [Arthur] Schlesinger Jr. stated, “there seems to be no inherent obstacle to the gradual advance of socialism in the United States through a series of New Deals.”

Gradualism has always been considered “anti-revolutionary” in communist and socialist circles. But pragmatism has taken the place of idealism after the events of 9/11 increased international scrutiny on radical groups, forcing revolutionists like Chavez and Ahmadinejad to use the Fabian strategy as a “soft subversion” tactic with which to undermine their enemies. In the past decade, Chavez and his allies in Latin America have all embraced Ahmadinejad’s regime, and all have developed their strategic relationship based on mutual support for this incremental subversion.

(99-100)

Castro has had a lot of practice in the art of subversion. Within a short time after he came to power in Cuba, he began trying to subvert other governments in Latin America and the Caribbean. On May 10, 1967, Castro sent an invasion force to Machurucuto, Venezuela, to link up with Venezuelan guerrillas to try to overthrow the democratic and popular government of President Raul Leoni.

Led by Arnaldo Ochoa Sanchez, the invasion force was quickly vanquished, and the Venezuelan armed forces, with the help of peasant farmers leery of the guerrillas, pacified the remaining guerrilla elements before the end of the year. The Venezuelan government then issued a general amnesty to try to quell any violence from the remaining guerrilla holdouts. But the PRV, Red Flag and the Socialist League continued to operate clandestinely. Douglas Bravo, the Venezuelan terrorist who inspired Carlos the Jackal, remained the intransigent leader of the PRV. One of Bravo’s lieutenants was Adan Chavez, Hugo’s older brother, who would serve as Hugo’s liaison to the radical elements for years to come.

After suffering calamitous defeats at the hands of the Venezuelan armed forces, the PRV decided the best way to continue the revolution would be to infiltrate the “system” and subvert it from within. In 1970, they made their first move to infiltrate the armed forces, when Bravo first contacted Lt. William Izarra. A year later, Chavez entered military school and started to recruit leftist military members to what became a clandestine fifth-column group, the Revolutionary Bolivarian Movement (MBR). The failed 1992 coup that launched Hugo Chavez’s political career would be planned and executed jointly by the MBR, PRV, Socialist League and Red Flag.

After his release in 1994, Chavez spent six months in Colombia receiving guerrilla training, establishing contacts with both the FARC and the ELN of Colombia, and even adopting a nom de guerre, Comandante Centeno.

Once he was elected president four years later, he would repay the Colombian guerrillas with a $300 million “donation” and thank Castro with a subsidized oil deal.

Though Chavez would denounce Plan Avila often, it would be his own decision to order its activation in 2002 that would provoke his own military to remove him from power.

Chavez seemed to take the near-death experience as a sign from divine providence of his right to rule and began a purge of the military and the government of anyone who might later threaten his power. Chavez then began radicalizing the remainder of the Venezuelan military by replacing its historical training regimen with a doctrine of asymmetric warfare that involved all sectors of society. He would call his new doctrine la guerra de todo el pueblo – “the war of all the people”.

(101-104)

The Revolutionary Brotherhood Plan

While Chavez calls his hemispheric governing plan “21st Century Socialism,” his critics have given it another name – democradura.

Democradura is a Spanish neologism that has come to define the budding autocracies in Latin America that have incrementally concentrated power in the executive branch under the guise of constitutional reform.

A socialist think tank in Spain, the CEPTS foundation, part of the Center for Political and Social Studies, was founded in Valencia in 1993 by left-wing academics supporting Spain’s Socialist Party as well as the FARC and ELN terrorist groups in Colombia. It put together a team of Marxist constitutional scholars to write the new constitutions of Venezuela, Bolivia and Ecuador, turning them into “socialist constitutions” but with variations applicable to each particular country.

(105)

Where Bolivia’s indigenous president Evo Morales used race to marginalize his opposition, Correa used the rhetoric of environmental radicalism to demonize the mining, oil and gas sectors in Ecuador. Anyone who opposed the anthropocentric environmental language in the (new) constitution was called a “lackey” of multinational corporations and oligarchs. This stance also allowed Correa to eventually break the contracts with these companies in order to demand higher government revenues from their operations, which were then used to support government-funded projects in government-friendly provinces.

The process of Marxist constitution making first caught the attention of the revolutionary left during Colombia’s constitutional change in 1991. The Colombian constitution had been in place since 1886, a long time for regional constitutions, and was only able to be changed with some political machination and legal subterfuge.

As M-19 guerrillas began demobilization talks with a weak Colombian government in the late 1980s, the group took advantage of its position to transition from an armed insurgency to a political party. By 1991 M-19 was able to get one of its leaders, Antonio Navarro, included as one of the three copresidents of the constituent assembly that drew up the new constitution.

Navarro was able to negotiate a prohibition against any attempts by the state to organize the population against the armed guerrilla groups. Not only would this provision end up escalating violence in Colombia, but it would inspire other terrorist groups throughout the Americas to maintain both an armed wing and a “political wing,” which would be utilized skillfully to prolong their longevity as insurgents.

After witnessing the ease with which the Colombian constitution was changed, “constitutional subversion” became standard operating procedure for those countries headed by Chavez’s allies.

The former Venezuelan ambassador to the United Kingdom, Jorge Olavarria, assessed the situation with a bit more apprehension and foresight: “The constituent assembly is nothing more than a camouflage to make the world think that the coming dictatorship is the product of a democratic process.”

Where most Latin American constitutions contained between 100 and 200 articles, the new Venezuelan constitution had 350, or 98 more than its predecessor. According to Professor Carlos Sabino of Francisco Marroquin University in Guatemala, the essence of the new constitution was “too many rules, no system to enforce them.” It “would consolidate an authoritarian government with a legal disguise, necessary in today’s globalized world where the respect for democratic values is the key to good international relations.”

(108)

The Defense of Political Sovereignty and National Self-Determination Law would prohibit organizations, as well as individuals, that advocate for the political rights of Venezuelans from accepting funds from any foreign entity. It also prohibited them from having any representation from foreigners and even sponsoring or hosting any foreigner who expresses opinions that “offend the institution of the state.” This law was included with the International Cooperation Law, which would force all NGOs to reregister with the government and include a declared action plan on their future activities, along with a list of any financing that they expected to receive.

(109)

gNGOs are governmental non-governmental organizations – fake NGOs operated by the government.

(110)

The Sandinista government in Nicaragua has been even more aggressive against civil society groups, raiding the offices of long-established NGOs and launching what it called Operation No More Lies, a crackdown against those that it accuses of money laundering, embezzlement and subversion.

(111)

At the end of March 2011, former president Jimmy Carter made a trip to Cuba to meet with members of the regime. About the time he arrived, Cuban state television aired a series in which it portrayed independent NGOs as subversive organizations that sought to “erode the order of civil society” in Cuba. The report claimed that “via the visits to the country of some of its representatives and behind the backs of Cuban authorities, these NGOs have the mission of carrying out the evaluations of the Cuban political situation and instructing, organizing, and supplying the counter-revolution.” It accused the organizations of hiding “their subversive essence [behind] alleged humanitarian aid.” The series featured Dr. Jose Manuel Collera, who was revealed as “Agent Gerardo,” a Cuban spy who had infiltrated the NGOs in the United States “to monitor their work and representatives.”

Along with thwarting the oversight power of NGOs in Venezuela, Chavez also included a number of “economic” laws designed to put the stamp of legitimacy on his new “communal” economic system that had caused shortages throughout the country… These laws made communes the basis of the Venezuelan economy and established “People’s Power” as the basis of local governance. It is codified as being responsible to the “revolutionary Leadership,” which is Chavez himself. This effectively supplanted the municipalities and regional governments.

(117) 

Managing the Media

Speaking in September 2010 at a Washington event to celebrate the sixtieth anniversary of Radio Free Europe/Radio Liberty, the chairman of the Broadcasting Board of Governors, Walter Isaacson, warned, “We can’t allow ourselves to be out-communicated by our enemies. There’s that Freedom House report that reveals that today’s autocratic leaders are investing billions of dollars in media resources to influence global opinion… You’ve got Russia Today, Iran’s Press TV, Venezuela’s TeleSUR…”

Their techniques are similar: hire young, inexperienced correspondents who will toe the party line as TV reporters, and put strong sympathizers, especially Americans, as hosts of “debate” shows.

Where normal media outlets will film only the speakers at such an event, these state-sponsored media units will often turn the cameras toward the audience in order to capture on film those in the audience who may be government critics. Their purpose for this is twofold – to later screen the video to see who might be attending such a conference and to intimidate exiles from attending such events.

(118)

TeleSUR’s president, Andres Izarra, is a professional journalist who formerly worked for CNN en Espanol. He also serves as Chavez’s minister of communications and information. Izarra said of TeleSUR’s launch: “TeleSUR is an initiative against cultural imperialism. We launch TeleSUR with a clear goal to break this communication regime.”

In a 1954 letter to a comrade, Fidel Castro wrote, “We cannot for a second abandon propaganda. Propaganda is vital – propaganda is the heart of our struggle.”

“We have to win the war inside the United States,” said Hector Oqueli, one of the rebel leaders. And after the Sandinistas first took power in Nicaragua in the 1980s, the late Tomas Borge, who served as interior minister and head of state security for the Sandinista regime, told Newsweek, “The battle for Nicaragua is not being waged in Nicaragua. It is being fought in the United States.”

It had not been difficult for the revolutionary left in Latin America to find willing allies in the United States to help with its propaganda effort. An illustrative example is William Blum, the author of several anti-American books that have called U.S. foreign engagements “holocausts.” Blum has described his life’s mission as “slowing down the American Empire… injuring the Beast.” Blum’s treatment of U.S. involvement in Latin America is noteworthy, because it is emblematic of what often passes as scholarship on the subject and because it gets repeated in many universities where he is often invited to speak to students… In January 2006, Blum’s Rogue State got an endorsement from Osama bin Laden, who recommended the book in an audiotape and agreed with Blum’s idea that the way the United States could prevent terrorist attacks was to “apologize to the victims of American Imperialism.”

Examples of bad scholarship follow…

Blum’s book is typical of a genre that has long eschewed scholarship for sensationalized anti-Americanism. At the Summit of the Americas in April 2009, Chavez handed President Obama a copy of Open Veins of Latin America by Eduardo Galeano, about which Michael Reid, the Americas editor at The Economist, wrote, “[Galeano’s] history is that of the propagandist, a potent mix of selective truths, exaggeration and falsehood, caricature and conspiracy.” Called the “Idiot’s Bible” by Latin American scholars, Galeano’s 1971 tome was translated into English by Cedric Belfrage, a British journalist and expatriate to the United States who was also a Communist Party member and an agent for the KGB.

The Artillery of Ideas

Another Chavez propaganda effort designed to reach English-speaking audiences is the state funded newspaper Correo del Orinoco, named for a newspaper started by Simon Bolivar in 1818.

(121)

In April 2010 Chavez held a celebration on the eighth anniversary of the coup that earlier had removed him from office for two days. He named the celebration “Day of the Bolivarian Militias, the Armed People and the April Revolution” and held a swearing-in ceremony for 35,000 new members of his civilian militia. As part of the festivities, Chavez also had a swearing-in ceremony for a hundred young community media activists, calling them “communicational guerrillas.” This was done, according to Chavez, to raise awareness among young people about the “media lies” and to combat the anti-revolution campaign of the opposition-controlled private media.

(122)

The most notorious propaganda and coverup operation to date has been that of the Puente Llaguno shooting in 2002, in which nineteen people were killed and sixty injured as Chavez’s henchmen were videotaped shooting into a crowd of marchers from a bridge overhead.

(124)

According to Nelson (the author of The Silence and the Scorpion), the reason that Chavez felt the need to go after the Metropolitan Police was that they were the largest armed force in the country aside from the army. This, feared Chavez, made them a potential threat for another coup against his regime. After he was briefly ousted from office in 2002, Chavez skillfully utilized the canard that the Metropolitan Police had fired the first shots at the Bolivarian Circles as an excuse to take away much of their firepower and equipment, leaving them only with their .38 caliber pistols. And once a Chavez loyalist took over as mayor of Caracas, the Metropolitan Police were completely purged. According to Nelson, loyalty to Chavez’s political party became much more important than expertise or experience on the police force.

In January 2007, the President of TeleSUR, Andres Izarra, revealed the thinking behind Chavez’s campaign against the media: “We have to elaborate a new plan, and the one that we propose is the communication and informational hegemony of the state.”

(131)

A report done for the United Nations by the Observatory for the Protection of Human Rights Defenders said that verbal attacks against anyone “who dared to criticize the policies of President Ortega or his government… were systematically and continuously taken up by the official or pro-Government media.” The report, issued in June 2009, stated:

President Ortega’s government tried to silence dissident voices and criticisms of Government policies through members of the government who verbally assaulted demonstrators and human rights defenders, as well as the Citizens’ Councils (Consejos de Poder Ciudadano – CPC), which hampered the NGOs’ activities and physically assaulted defenders. In this context, 2008 saw numerous attacks against human rights defenders and attempts to obstruct their activity…

These Citizens’ Councils were taken directly from the “Revolutionary Brotherhood” plan and are close facsimiles of groups like the Bolivarian Circles in Venezuela. Ortega claimed in July 2007 that “more than 6,000 [CPCs] have been formed,” and “around 500,000 people participated in CPCs.”

(142)

Managing the Military

Daniel Patrick Moynihan: “More and more the United Nations seems only to know of violations of human rights in countries where it is still possible to protest such violations… our suspicions are that there could be a design to use the issue of human rights to undermine the legitimacy of precisely those nations which still observe human rights, imperfect as that observance may be.” (871)

The Department of State Bulletin. (1975). United States: Office of Public Communication, Bureau of Public Affairs.

The Southern Connection was a coordinated effort by far-left supporters of the Castro regime and other leftist governments in Latin America to end the Monroe Doctrine, or at least to deter Washington’s policy of intervention against communist expansion in the hemisphere.

(144)

El Salvador’s civil war, from 1979 until 1992, was emblematic of the Cuba-instigated wars in Latin America. It was Fidel Castro who convinced the various left-wing guerrilla groups operating in El Salvador to consolidate under the banner of the DRU, officially formed in May 1980. The DRU manifesto stated, “There will be only one leadership, only one military plan and only one command, only one political line.” Fidel Castro had facilitated a meeting in Havana in December 1979 that brought these groups together – a feat that has not been repeated since, as the historic tendency of most leftist terrorist groups in the region has been to splinter after fights over egos and ideological differences.

It was a Salvadoran of Palestinian descent, Schafik Handal, who helped found the Communist Party of El Salvador and who would serve as Castro’s partner in the Central American wars of the era.

(145)

Stealth NGOs

One of the most effective asymmetrical tactics has been the use of dummy NGOs as front groups in Latin America. A number of nongovernmental organizations operating in the region that claim to advocate for human rights actually receive funding from radical leftist groups sympathetic to revolutionary movements in the hemisphere. Many of these groups derive much of their legitimacy from unwitting representatives of the European Union, the United Nations and even the U.S. Department of State, who often designate them as “special rapporteurs” for human rights reporting.

(146)

Both Cristina Fernandez de Kirchner, the current president, and her husband, the late President Nestor Kirchner, were far-left radicals in the 1960s and 1970s and filled both of their administrations with ex-terrorists and radicals… Many have accused the Kirchners and their allies of blatant double standards on human rights issues – especially in the prosecution of former military members who served during Argentina’s Dirty War from 1976 to 1983.

Since 2003, when Nestor Kirchner took office, the successive Kirchner administrations have aggressively prosecuted hundreds of ex-soldiers, many of whom served prior to the beginning of the Dirty War. The double standard arises because not one of the ex-terrorists, who started the Dirty War in the first place, has been prosecuted. The Kirchners, along with far-left judicial activists in the region, have relied on a blatantly unjust tenet of “international human rights law” that says crimes against humanity apply only to representatives of the state, a group that includes military and police but excludes the terrorists who ignited the guerrilla wars.

(148)

Since the late 1990s, the NGO practice of dragging the military into court on allegations of human rights violations has destroyed the careers of some of [Colombia’s] finest officers, even though most of these men were found innocent after years of proceedings.

According to O’Grady, the enabling legislation that makes this judicial warfare possible is what’s been termed the “Leahy Law,” after its sponsor, Sen. Patrick Leahy (D-VT). Under this law, American military aid can be withdrawn if human rights allegations are brought against military units, even when the credibility of the charges is dubious. O’Grady noted, “The NGOs knew that they only had to point fingers to get rid of an effective leader and demoralize the ranks.”

The legislation that became the Leahy Law was first introduced in 1997 in the Foreign Operations Appropriations Act, and similar language was inserted into the 2001 Foreign Operations Appropriations Act. It has since been used repeatedly against Colombia, which has been a target ever since it became serious about taking on the FARC and took funding from the United States to implement Plan Colombia, an anti-drug smuggling and counter-insurgency initiative.

(149)

The publicity about Reyes’s death put the spotlight on the situation in Colombia and led researchers to uncover the fact that many of the so-called trade unionists in Colombia were moonlighting as FARC terrorists.

Raul Reyes was the prime example, having begun his career at age sixteen when he joined the Colombian Communist Youth (JUCO), which led him to become a trade unionist at a Nestle plant in his hometown in Caquetá. His position as a Nestle “trade unionist” was a front for his real job, which was influencing, recruiting, and radicalizing fellow workers at the plant on behalf of the Colombian Communist Party… Since the beginning of the FARC, and its collaboration and later split with the party, a number of Colombian trade unions have served as way stations for FARC members as they moved from union posts to the ranks of the FARC.

(150)

Uribe was able to turn the tide… by strategically transitioning from the largely fruitless supply-control methods of Plan Colombia to the population-centric counterinsurgency (PC-COIN) methods of Plan Patriota, a later iteration of the original plan that focused on counterinsurgency.

Where the previous policy had granted a vast demilitarized zone to the FARC in exchange for a proposed peace treaty, Plan Patriota utilized a counter-insurgency strategy that attacked terrorists with physical force. But more importantly, it attacked their legitimacy by placing security personnel in remote areas where there had been no state presence before. What this accomplished, more successfully than any of the Colombian military’s previous operational tactics, was to change the population’s perception of the forty-year insurgency. What had been seen as a conflict between rival political parties was now looked upon as the battle of a legitimate, elected government against illegitimate narco-terrorists.

Revolutionizing the Military

In 2001 the Venezuelan daily Tal Cual published a leaked document from the Directorate of Military Intelligence (DIM) which spelled out a plan to politicize the military. According to the document, top military officers were to be divided into “revolutionists” who supported Chavez, “institutionalists” who were considered to be neutral, and “dissidents” who were opposed to the regime. It also advocated for catequesis (Spanish for catechism) to proselytize these officers to accept Chavez’s socialist governing program.

(152-153)

During the Hungarian Uprising in 1956, Andropov “had watched in horror from the windows of his embassy as officers of the hated Hungarian security service were strung up from lampposts.” It is said that Andropov was “haunted for the rest of his life by the speed with which an apparently all-powerful Communist one-party state had begun to topple” and was thereafter “obsessed with the need to stamp out ‘ideological sabotage’ wherever it reared its head within the Soviet bloc.” This obsession made the Soviets much more eager to send in troops whenever other communist regimes were in jeopardy.

…both Castro and Chavez would develop a Hungarian complex as well, leading to a clampdown on “ideological sabotage” within their respective countries. In 1988 Castro stated, when speaking of the Sandinistas’ use of civilian militias to defend their revolution in Nicaragua, that both Cuba and Nicaragua needed a “committed… people’s armed defense that is sufficient in size, training and readiness,” adding that Salvador Allende hadn’t had a big enough force to prevent the coup that drove him from power in Chile in 1973. It was a rare moment of candor, as the militia is usually touted as the last bastion against a U.S. invasion. But in reality, it is a tool designed to accomplish the prime objective of an aspiring autocrat – to ensure the longevity of the regime. Max Manwaring, writing on Chavez’s use of these civilian militias, stated:

All these institutions are outside the traditional control of the regular armed forces, and each organization is responsible directly to the leader (President Chavez). This institutional separation is intended to ensure that no military or paramilitary organization can control another, but the centralization of these institutions guarantees the leader absolute control of security and social harmony in Venezuela.

Perpetuating the Regime

Started as a jobless protest in 1996, the piqueteros have transformed into what are, according to The Economist, “government rent-a-mobs” consisting of “unemployed protestors receiving state welfare payments.” The piqueteros were co-opted by Nestor Kirchner’s government, though some have splintered since his wife succeeded him.

(154)

In February 2011 the gravity of the effort to militarize Morales’s civilian supporters became far clearer. According to ABC, a Paraguayan daily, Iran was providing the financing for the militia training facility. Called the Military Academy of ALBA, it is located in Warnes, thirty miles north of Santa Cruz. ABC reported that the facility would train both military personnel and civilian militia members from all of the ALBA countries.

(156)

Shortly after Castro’s guerrillas took power in Havana, Cuban embassies in Latin America became recruitment centers and incubators for radical groups and terrorist subversives throughout the hemisphere. Organizing subversive student movements became a priority for Cuban “diplomats,” and the autonomy of the campuses provided easy access and impunity.

A comparison of the student vote to that of the general population at the time provides an illustration of the radicalization of the student body. During the 1960s in Venezuela, students at the Central University typically voted 50 to 60 percent for candidates from the Communist Party of Venezuela and the radical Castroite MIR, while these candidates never broke 10 percent among the general population.

A Venezuelan MIR guerrilla noted that their near-total domination of the liceos (secondary schools) and the universities led them wrongly to believe that this level of acceptance could be extrapolated to the general population. But in reality, noted the guerrilla, “there was absolutely no mass solidarity with the idea of insurrection.” One MIR cofounder, Domingo Alberto Rangel, noted after renouncing the group’s support for terrorism that “the Left enjoys support among students, but it is unknown among working-class youth, or the youth of the barrios.”

In Colombia, the Industrial University of Santander in Bucaramanga was a haven for that country’s ELN terrorists. In 1965 in Peru, the ELN based itself in the San Cristobal of Huamanga National University in Ayacucho, and at the National University in Lima a number of leftist political parties set up operations for MIR terrorists.

Just over twenty years later, after Shining Path and Tupac Amaru terrorists had gained control over a majority of the rural area of Peru and had begun to threaten the capital, the (first) government of President Alan Garcia reluctantly decided to raid the University of San Marcos, the National University of Engineering, and a teacher’s college – three schools that had long been known as terrorist havens.

This kind of autonomy without accountability is a policy that invites terrorist infiltration among impressionable young people.

(159)

Like guerrilla groups in many countries in Latin America, Mexico’s also have a cadre of supporters in NGOs who purport to be human rights advocates. After the bombing of the FARC camp in Ecuador, instead of denouncing the FARC for hosting Mexican students in a war zone, one Mexican human rights NGO called the operation an “unjustified massacre” and announced that it was planning to sue the Colombian government.

(161)

According to The Miami Herald, [Tareck] El Aissami was born in Venezuela to Syrian parents, and his father, Carlos, was the president of the Venezuelan branch of the Baath Party and an ardent supporter of Saddam Hussein. El Aissami’s uncle, Shibili el-Aissami, whose whereabouts are unknown, was a top-ranking Baath Party official in Iraq.

(164)

The extent of Cuban subversion was investigated and reported to Congress as early as 1963, when the Senate Judiciary Committee released a report detailing the activities of Cuban operatives in the hemisphere. The report concluded: “A ‘war of liberation’ or ‘popular uprising’ is really hidden aggression: subversion… the design of Communist expansion finds in subversion the least costly way of acquiring peoples and territories without exaggerated risk.” The report elaborated on the goal of Cuban subversion:

Its aim is to replace the political, economic, and social order existing in a country by a new order, which presupposes the complete physical and moral control of the people… That control is achieved by progressively gaining possession of bodies and minds, using appropriate techniques of subversion that combine psychological, political, social, and economic actions, and even military operations, if this is necessary.

(166)

It was reported by a defector that all Sandinista military plans were sent first to Havana to be vetted by Raul Castro and a Soviet handler before any action was taken against the contras.

A State Department background paper also reported that besides the influx of thousands of Cuban “advisers,” nearly all of the members of the new state police organization, the General Directorate of Sandinista State Security, were trained by the Cubans.

Alfonso Robelo, one of the original members of Nicaragua’s five-man junta, told reporters, “This is something that you have to understand: Nicaragua is an occupied country. We have 8,000 Cubans plus several thousand East Bloc people, East Germans, PLO, Bulgarians, Libyans, North Koreans, etc. The national decisions, the crucial ones, are not in the hands of the Nicaraguans, but in the hands of the Cubans… And, really, in the end, it is not the Cubans, but the Soviets.”

While many foreign policy experts and officials in the Carter administration scoffed at the idea of either Soviet or Cuban steering of the Sandinistas, numerous defectors later confirmed it. Victor Tirado, one of the original Sandinistas, wrote in 1991 that “we allowed ourselves to be guided by the ideas of the Cubans and the Soviets.” Alvaro Baldizon, a chief investigator of the Sandinista Ministry of the Interior, said after defecting, “The ones who give the orders are the Cubans…. Every program, every operation is always under the supervision of Cuban advisors.”

Since the Barrio Adentro program began in Venezuela in October 2000, the number of Cubans in the country has grown to somewhere between forty thousand and sixty-five thousand, depending on the source.

(169)

One of the programs instituted by the Cubans that has driven out many of the professional officers is a new system that allows sergeants to be promoted to the rank of colonel simply by what they call “technical merit” – which most officers define as a high level of fealty to the Chavez political program.

(170)

Prior to the 2006 presidential election in Peru, Hugo Chavez set his sights on the country to try to bring it into the ALBA orbit. Besides sending letters of invitation to mayors near the border areas of his allies, Chavez underwrote a number of ALBA houses in rural areas of Peru. The Peruvian government became concerned enough about the ALBA houses that a congressional committee investigated them and issued a report in March 2009 recommending they be shut down. The committee report concluded that Chavez was trying to influence Peruvian politics via the ALBA houses, which had been established without any government-to-government agreement.

A June 2009 incident in the Amazon city of Bagua ended the détente. The incident, called the Baguazo, ended in a bloodbath when members and supporters of a radicalized “indigenous rights” group slit the throats of police officers who had been sent to end the group’s roadblock, which had closed the city’s only highway for over a month. Leaders of the Interethnic Association for the Development of the Peruvian Rainforest were revealed to have ties to Chavez and Morales and had previously traveled to Caracas to participate in a meeting of radical indigenous groups.

(171 – 172)

Like Soviet communism, Chavez’s 21st Century Socialism can only survive by spreading and enveloping its neighbors, lest too much of a distinction be shown in economic outcomes by its nonsocialist neighbors.

In a July 2008 hearing of the Western Hemisphere Subcommittee of the House Foreign Affairs Committee, Dr. Norman Bailey, a former official of the National Security Council whose specialty was monitoring terrorism by tracking finances, testified that Chavez had spent “$33 billion on regional influence.” Bailey further stated that corruption in the Chavez regime was “nothing less than monumental, with literally billions of dollars having been stolen by government officials and their allies in the private sector over the past nine years.” Bailey also testified that a Chavez government official had his bank accounts, which held deposits of $1.5 billion, closed by HSBC Bank in London.

A large portion of the income derived from both the narco-trafficking and money laundering is funneled to Venezuelan entities and officials and “is facilitated by the Venezuelan financial system, including both public and private institutions.”

* Bailey testimony before the Western Hemisphere Subcommittee

(174)

A Wikileaks cable released in December 2010 revealed that Ortega had been given “suitcases full of cash” in Caracas. “We have firsthand reports that GON [Government of Nicaragua] officials receive suitcases full of cash from Venezuelan officials during official trips to Caracas,” a 2008 diplomatic cable written by Ambassador Paul Trivelli stated. The embassy cables also said that Ortega was believed to have used drug money to underwrite a massive election fraud.

The accusations of suitcases of Venezuelan money going to Nicaragua match very closely with an August 2007 case in which a Venezuelan American businessman, Antonini Wilson, was caught at the Ezeiza Airport just outside Buenos Aires with a suitcase packed with $800,000 in cash. According to U.S. prosecutors who ended up in charge of the case, the money was intended for Cristina Fernandez de Kirchner, who was campaigning for (and eventually won) the presidency of Argentina… When Wilson flew home to Key Biscayne immediately after the incident, he reported it to the FBI, fearing (rightly) being set up as the “fall guy,” according to his court testimony. Wilson agreed to wear a wire during his subsequent meetings with Venezuelan officials and to record his phone calls. Three of the officials involved were indicted in the United States and pleaded guilty. Another fled and is still at large.

(179)

Nicaraguan defectors had long reported the drug-trafficking habits of the Sandinista government. Antonio Farach, a defector who had worked as a Sandinista minister in Nicaragua’s embassies in Honduras and Venezuela, told U.S. officials in 1983 that Humberto Ortega, brother of the president and then Nicaragua’s minister of defense, was “directly involved” in drug trafficking.

Farach repeated an oft-reported rationale used by Marxists who moonlight in the drug trade as a sideline to revolution. He stated that Sandinista officials believed their trafficking in drugs was a “political weapon” that would help to destroy “the youth of our enemies.” According to Farach, the Sandinistas declared, “We want to provide food to our people with the suffering and death of the youth of the United States.”

(190)

As of 2008, nineteen of the forty-three groups officially designated “foreign terrorist organizations” were linked to the international drug trade, and as much as 60 percent of all terrorist organizations were believed to be linked to the drug trade.

From fiscal year 1999 through March 2010, 329 Iranian nationals were caught by U.S. Customs and Border Protection.

In March 2005 FBI director Robert Mueller testified before the House Appropriations Committee that “there are individuals from countries with known Al Qaeda connections who are changing their Islamic surnames to Hispanic-sounding names and obtaining false Hispanic identities, learning to speak Spanish and pretending to be Hispanic.”

In 2010 the Department of Homeland Security had thousands of what are called “OTMs” – Other Than Mexicans – incarcerated for illegally crossing the southern border. The OTMs consisted of individuals from Afghanistan, Egypt, Iran, Iraq, Pakistan, Saudi Arabia, Yemen and elsewhere.

(199)

Hugo Chavez’s placement of individuals with known ties to terrorist groups in charge of his immigration and identification bureau has long been documented.

(204)

Influenced by Chavez and radical leftist groups in the region, Lopez Obrador staged a populist sit-in in the central square of Mexico City for nearly two months, claiming to be the “legitimate president”.

Rep. Jim Kolbe (R-AZ) told several Mexican legislators at the time that he had received intelligence reports that Chavez had been funding AMLO’s Party of the Democratic Revolution. Had Lopez Obrador won, the nefarious influences of Chavez and Ahmadinejad would have moved to America’s doorstep, and the nexus of drug trafficking and terrorism that was already on the border would have been an order of magnitude greater.

(207)

In September 2011, El Universal reported that a Spanish court had prosecuted five members of Askapena, the international wing of ETA. Court documents showed that Askapena had been instructed to set up an international relations network by organizing seminars and creating “solidarity committees” in Europe and North and South America.

(208)

The New York Times reported on January 28, 1996 that during the last two months that the Sandinistas were in power, they had granted Nicaraguan citizenship and documentation to over nine hundred foreigners, including terrorists from ETA and Italy’s Red Brigades, three dozen Arabs and Iranians from Islamic terrorist groups, and terrorists from “virtually every guerrilla organization in Latin America”.

(209)

As far back as May 2008, Jackson Diehl, deputy editorial page editor and foreign policy writer for the Washington Post, wrote that Chavez belonged on the State Department’s list of State Sponsors of Terror.

His reported actions are, first of all, a violation of U.N. Security Council Resolution 1373, passed in September 2001, which prohibits all states from providing financing or havens to terrorist organizations. More directly, the Colombian evidence would be more than enough to justify a State Department decision to cite Venezuela as a state sponsor of terrorism. Once cited, Venezuela would be subject to a number of automatic sanctions, some of which would complicate its continuing export of oil to the United States…

(221)

It is this irrational reluctance to properly describe the threat we face from declared enemies that validates those enemies’ contrived grievances. In almost inverse proportion to our increased prowess in kinetic warfare, we have continually ceded the ideological war that has become the only battlefield on which our enemies are able to make an impact. As Max Manwaring and others have stated, today’s battles are fights for legitimacy. To allow political correctness or misplaced deference to alter the terminology of war is to cede our most valuable territory. To our enemies, deference equals weakness, not civil accommodation.

Another tenet shared by political Islam in the Middle East and 21st Century Socialism in Latin America is that its adherents have declared war not only on the United States and the West in general but on capitalism and free societies as well. To most of us in the West, this is equivalent to declaring war on gravity, as free exchange and free enterprise are the bases of life and the engines of progress throughout the world.

We enjoy the advantage that our enemies are not only fighting against us but are also fighting against the trajectory of human progress. Our duty is to decide whether we are going to continue to accommodate their superstitions or whether we will confront them before further carnage provides them with false validation.

 

Notes from Black Against Empire: The History and Politics of the Black Panther Party

Black Against Empire: The History and Politics of the Black Panther Party

While I highlighted far more from Black Against Empire: The History and Politics of the Black Panther Party than the below, I decided to limit myself to posting here issues related to changing perceptions of the Panthers following the dismantling of Jim Crow, issues linked to Marxism, and international relations.

(121)

But by 1968, even in “Bloody Lowndes,” the political dynamic had changed. As the Civil Rights Movement dismantled Jim Crow through the mid-1960s, it ironically undercut its own viability as an insurgent movement. Whereas activists could sit in at lunch counters or sit black and white together on a bus or insist on registering to vote where they had traditionally been excluded, they were often uncertain how to nonviolently disrupt black unemployment, substandard housing, poor medical care, or police brutality. And when activists did succeed in disrupting these social processes nonviolently, they often found themselves facing very different enemies and lacking the broad allied support that civil rights activists had attained when challenging formal segregation. By 1968, the civil rights practice of nonviolent civil disobedience against racial exclusion had few obvious targets and could no longer generate massive and widespread participation.

(122)

In this environment, Lil’ Bobby Hutton became a very different kind of martyr from King. He was virtually unknown and ignored by the establishment. Hutton had died standing up to the brutal Oakland police; he died for black self-determination; he died defying American empire like Lumumba and Che and hundreds of thousands of Vietnamese had before him. Unlike King in 1968, Lil’ Bobby Hutton represented a coherent insurgent alternative to political participation in the United States—armed self-defense against the police and commitment to the revolutionary politics of the Black Panther Party.

(123)

A Panther press statement said that in addition to support for the “Free Huey!” campaign and the black plebiscite, the Panthers were calling upon “the member nations of the United Nations to authorize the stationing of UN Observer Teams throughout the cities of America wherein black people are cooped up and concentrated in wretched ghettos.” After meeting with several U.N. delegations and talking with the press, the Black Panthers filed for status as an official “nongoverning organization” of the United Nations. While the notion of the black plebiscite was intriguing to many, it failed to gain traction.

(130)

At SNCC’s invitation, student antiwar activists came to see themselves as fighting for their own liberation from the American empire. The imperial machinery of war that was inflicting havoc abroad was forcing America’s young to kill and die for a cause many did not believe in. Young activists came to see the draft as an imposition of empire on themselves just as the war was an imposition of empire on the Vietnamese.59

SDS leader Greg Calvert encapsulated this emerging view in the idea of “revolutionary consciousness” in a widely influential speech at Princeton University that February. Arguing that students themselves were revolutionary subjects, Calvert sought to distinguish radicals from liberals, and he advanced “revolutionary consciousness” as the basis for a distinct and superior morality: “Radical or revolutionary consciousness . . . is the perception of oneself as unfree, as oppressed— and finally it is the discovery of oneself as one of the oppressed who must unite to transform the objective conditions of their existence in order to resolve the contradiction between potentiality and actuality. Revolutionary consciousness leads to the struggle for one’s own freedom in unity with others who share the burden of oppression.”

The speech marked a watershed in the New Left’s self-conception. Coming to see itself as part of the global struggle of the Vietnamese against American imperialism and the black struggle against racist oppression, the New Left rejected the status quo as fundamentally immoral and embraced the morality of revolutionary challenge. From this vantage point, the Vietnam War was illegitimate, and draft resistance was an act of revolutionary heroism.

(300)

In their move to take greater leadership in organizing a revolutionary movement across race, the Black Panthers sought to make their class and cross-race anti-imperialist politics more explicit. They began featuring nonblack liberation movements on the cover of their newspaper, starting with Ho Chi Minh and the North Vietnamese. They began widely using the word fascism to describe the policies of the U.S. government. Then in July 1969, two weeks before the United Front Against Fascism Conference, the Panthers changed point 3 of their Ten Point Program from “We want an end to the robbery by the white man of our Black Community” to “We want an end to the robbery by the CAPITALIST of our Black Community.”

The Black Panther Party held the United Front Against Fascism Conference in Oakland from July 18 to 21.

At least four thousand young radicals from around the country attended the conference. The delegates included Latinos, Asian Americans, and other people of color, but the majority of delegates were white. More than three hundred organizations attended, representing a broad cross-section of the New Left. In addition to the Young Lords, Red Guard, Los Siete de la Raza, Young Patriots, and Third World Liberation Front, attendees included the Peace and Freedom Party, the International Socialist Club, Progressive Labor, Students for a Democratic Society, the Young Socialist Alliance, and various groups within the Women’s Liberation Movement.

Bobby Seale set the tone for the conference, reiterating his oft-stated challenge against black separatism: “Black racism is just as bad and dangerous as White racism.” He more explicitly emphasized the importance of class to revolution, declaring simply, “It is a class struggle.” Seale spoke against the ideological divisiveness among leftist organizations, arguing that such divisiveness would go nowhere. What was needed, he said, was a shared practical program. He called for the creation of a united “American Liberation Front” in which all communities and organizations struggling for self-determination in America could unite across race and ideology, demand community control of police, and secure legal support for political prisoners.

(301)

The main outcome of the conference was that the Panthers decided to organize National Committees to Combat Fascism (NCCFs) around the country. The NCCFs would operate under the Panther umbrella, but unlike official Black Panther Party chapters, they would allow membership of nonblacks. In this way, the Black Panther Party could maintain the integrity of its racial politics yet step into more formal…

(311)

The Black Panther Party’s anti-imperialist politics were deeply inflected with Marxist thought.

The Party’s embrace of Marxism was never rigid, sectarian, or dogmatic. Motivated by a vision of a universal and radically democratic struggle against oppression, ideology seldom got in the way of the Party’s alliance building and practical politics.

he asserted that unemployed blacks were a legitimate revolutionary group and that the Black Panther Party’s version of Marxism transcended the idea that an industrial working class was the sole agent of revolution.

(312)

Nondogmatic throughout its history, the Black Panther Party worked with a range of leftist organizations with very different political ideologies—a highlight being its hosting of the United Front Against Fascism Conference in July 1969.10 The unchanging core of the Black Panther Party’s political ideology was black anti-imperialism. The Party always saw its core constituency as “the black community,” but it also made common cause between the struggle of the black community and the struggles of other peoples against oppression. Marxism and class analysis helped the Black Panthers understand the oppression of others and to make the analogy between the struggle for black liberation and other struggles for self-determination. While the Marxist content deepened and shifted over the Party’s history, this basic idea held constant.

(313)

 

One of the Panthers’ early sources of solidarity and support was the left-wing movements in Scandinavia. The lead organizer of this support was Connie Matthews, an energetic and articulate young Jamaican woman employed by the United Nations Educational, Scientific, and Cultural Organization in Copenhagen, Denmark. In early 1969, Matthews organized a tour for Bobby Seale and Masai Hewitt throughout Scandinavia to raise money and support for the “Free Huey!” campaign. She and Panther Skip Malone worked out the logistics of the trip with various left-wing Scandinavian organizations, enlisting their support by highlighting the class politics of the Black Panther Party.

 

(342)

In noninsurgent organizations, established laws and customs are assumed and largely respected. Maintaining organizational coherence may be challenging, but transgressions of law and custom are generally outside of organizational responsibility. Within insurgent organizations like the Black Panther Party, law and custom are viewed as oppressive and illegitimate. Insurgents view their movement as above the law and custom, the embodiment of a greater morality. As a result, defining acceptable types of transgression of law and custom, and maintaining discipline within these constraints, often poses a serious challenge for insurgent organizations like the Black Panther Party. What sorts of violation of law and custom are consistent with the vision and aims of the insurgency?

 

(343)

By the fall of 1968, as the Party became a national organization, it had to manage the political ramifications of actions taken by loosely organized affiliates across the country. The Central Committee in Oakland codified ten Rules of the Black Panther Party and began publishing them in each issue of the Black Panther. These rules established basic disciplinary expectations, warning especially against haphazard violence that might be destabilizing or politically embarrassing. They prohibited the use of narcotics, alcohol, or marijuana while conducting Party activities or bearing arms. The Party insisted that Panthers use weapons only against “the enemy” and prohibited theft from other “Black people.” But they permitted disciplined revolutionary violence and specifically allowed participation in the underground insurrectionary “Black Liberation Army.”

 

(344)

 

The Black Panther Party derived its power largely from the insurgent threat it posed to the established order—its ability to attract members who were prepared to physically challenge the authority of the state. But this power also depended on the capacity to organize and discipline these members. When Panthers defied the authority of the Party, acted against its ideological position, or engaged in apolitical criminal activity, their actions undermined the Party, not least in the eyes of potential allies. The Panthers could not raise funds, garner legal aid, mobilize political support, or even sell newspapers to many of their allies if they were perceived as criminals, separatists, or aggressive and undisciplined incompetents. The survival of the Party depended on its political coherence and organizational discipline.

As the Party grew nationally and increasingly came into conflict with the state in 1969, maintaining discipline and a coherent political image became more challenging. The tension between the anti-authoritarianism of members in disparate chapters and the need for the Party to advance a coherent political vision grew. One of the principal tools for maintaining discipline—both of individual members and of local chapters expected to conform to directives from the Central Committee—was the threat of expulsion.

(345)

 

Hilliard explained the importance of the purge for maintaining Party discipline: “We relate to what Lenin said, ‘that a party that purges itself grows to become stronger.’ The purging is very good. You recognize that there is a diffusion within the rank and file of the party, within the internal structure of the party.”

As the Party continued to expand in 1969 and 1970, so did conflicts between the actions of members in local chapters across the country and the political identity of the Party—carefully groomed by the Central Committee.

(346)

 

The resilience of the Black Panthers’ politics depended heavily on support from three broad constituencies: blacks, opponents of the Vietnam War, and revolutionary governments internationally. Without the support of these allies, the Black Panther Party could not withstand repressive actions against them by the state. But beginning in 1969, and steadily increasing through 1970, political transformations undercut the self-interests that motivated these constituencies to support the Panthers’ politics.

(351)

 

Cuban support for the Black Panthers also shifted during the late 1960s. When Eldridge Cleaver fled to Cuba as a political exile in late 1968, Cuba not only provided safe passage and security but promised to create a military training facility for the Party on an abandoned farm outside Havana. This promise was consistent with the more active role Cuba had played in supporting the Black Liberation Struggle in the United States in the early 1960s, when it sponsored the broadcast of Robert Williams’s insurrectionary radio program “Radio Free Dixie,” as well as publication of his newspaper, the Crusader, and his book Negroes with Guns. But, as the tide of revolution shifted globally toward the end of the decade, security concerns took on higher priority in Cuban policy. Eager to avoid provoking retaliation from the United States, Cuba distanced itself from the Black Liberation Struggle, continuing to allow exiles but refraining from active support of black insurrection. The government never opened a military training ground for the Panthers, instead placing constraints on the political activities of Panther exiles.34

As the United States scaled back the war in Vietnam; reduced the military draft; improved political, educational, and employment access for blacks; and improved relations with former revolutionary governments around the world, the Black Panthers had difficulty maintaining support for politics involving armed confrontation with the state.

More comfortable and secure with the ability of mainstream political institutions to redress their concerns—especially the draft—liberals went on the attack, challenging the revolutionary politics of the Black Panther Party.

(352)

 

Many Panthers hoped that Huey would resolve the challenges the Party faced and lead them successfully to revolution. But his release had the opposite effect, exacerbating the tensions within the Party. Some rank-and-file Panthers took Huey’s long-awaited release as a prelude to victory and a license to violence, and their aggressive militarism became harder to contain. Organizationally, the Party had grown exponentially in Newton’s name but was actually under the direction of other leaders. His release forced a reconfiguration of power in the Party.

Paradoxically, Newton’s release also made it harder for the Party to maintain support from more moderate allies. It sent a strong message to many moderates that—contrary to Kingman Brewster’s famous statement three months earlier—a black revolutionary could receive a fair trial in the United States. The radical Left saw revolutionary progress in winning Huey’s freedom, but many moderate allies saw less cause for revolution.

(359)

 

The Panther 21 asserted that the Black Panther Party was not the true revolutionary vanguard in the United States and hailed the Weather Underground as one of, if not “the true vanguard.” In line with the vanguardist ideology of the Weather Underground, the Panther 21 argued that it was now time for all-out revolutionary violence that they believed would attract a broad following and eventually topple the capitalist economy and the state.

(361)

 

 

Dhoruba Bin Wahad explained his decision to desert the Black Panther Party as a response to the increasing moderation of Newton, Hilliard, and the Central Committee and their efforts to appease wealthy donors. In a public statement in May 1971, Dhoruba wrote,

We were aware of the Plots emanating from the co-opted Fearful minds of Huey Newton and the Arch Revisionist, David Hilliard… . Obsession with fund raising leads to dependency upon the very class enemies of our People. . . . These internal contradictions have naturally developed to the Point where those within the Party found themselves in an organization fastly approaching the likes of the N.A.A.C.P.—dedicated to modified slavery instead of putting an end to all forms of slavery.67

(391)

 

To this day, small cadres in the United States dedicate their lives to a revolutionary vision. Not unlike the tenets of a religion, a secular revolutionary vision provides these communities with purpose and a moral compass. Some of these revolutionary communities publish periodicals, maintain websites, collectively feed and school their children, and share housing. But none wields the power to disrupt the status quo on a national scale. None is viewed as a serious threat by the federal government. And none today compares in scope or political influence to the Black Panther Party during its heyday.

The power the Black Panthers achieved grew out of their politics of armed self-defense. While they had little economic capital or institutionalized political power, they were able to forcibly assert their political agenda through their armed confrontations with the state.

The Black Panther Party did not spring onto the historical stage fully formed; it grew in stages. Newton and Seale wove together their revolutionary vision from disparate strands.

(392)

Nixon won the White House on his Law and Order platform, inaugurating the year of the most intense direct repression of the Panthers. But the Party continued to grow in scope and influence. By 1970, it had opened offices in sixty-eight cities. That year, the New York Times published 1,217 articles on the Party, more than twice as many as in any other year. The Party’s annual budget reached about $1.2 million (in 1970 dollars). And circulation of the Party’s newspaper, the Black Panther, reached 150,000.3

The resonance of Panther practices was specific to the times. Many blacks believed conventional methods were insufficient to redress persistent exclusion from municipal hiring, decent education, and political power.

(395)

The vast literature on the Black Liberation Struggle in the postwar decades concentrates largely on the southern Civil Rights Movement. Our analysis is indebted to that literature as well as to more recent historical scholarship that enlarges both the geographic and temporal scope of analysis.5 Thomas Sugrue in particular makes important advances, calling attention to the black insurgent mobilizations in the North and West, and to their longue durée. This work, however, fails to analyze these mobilizations on their own terms, instead seeking to assimilate these black insurgencies to a civil rights perspective by presenting the range of black insurgent mobilizations as claims for black citizenship, appeals to the state—for full and equal participation. This perspective obscures the revolutionary character and radical economic focus of the Black Panther Party.

(398-399)

The broader question is why no revolutionary movement of any kind exists in the United States today. To untangle this question, we need to consider what makes a movement revolutionary. Here, the writings of the Italian theorist and revolutionary Antonio Gramsci are instructive: “A theory is ‘revolutionary’ precisely to the extent that it is an element of conscious separation and distinction into two camps and is a peak inaccessible to the enemy camp.”17 In other words, a revolutionary theory splits the world in two. It says that the people in power and the institutions they manage are the cause of oppression and injustice. A revolutionary theory purports to explain how to overcome those iniquities. It claims that oppression is inherent in the dominant social institutions. Further, it asserts that nothing can be done from within the dominant social institutions to rectify the problem—that the dominant social institutions must be overthrown. In this sense, any revolutionary theory consciously separates the world into two camps: those who seek to reproduce the existing social arrangements and those who seek to overthrow them.

In this first, ideational sense, many insurgent revolutionary movements do exist in the United States today, albeit on a very small scale. From sectarian socialist groups to nationalist separatists, these revolutionary minimovements have two things in common: a theory that calls for destroying the existing social world and advances an alternative trajectory; and cadres of members who have dedicated their lives to advance this alternative, see the revolutionary community as their moral reference point, and see themselves as categorically different from everyone who does not.

More broadly, in Gramsci’s view, a movement is revolutionary politically to the extent that it poses an effective challenge. He suggests that such a revolutionary movement must first be creative rather than arbitrary. It must seize the political imagination and offer credible proposals to address the grievances of large segments of the population, creating a “concrete phantasy which acts on a dispersed and shattered people to arouse and organize its collective will.”18 But when a movement succeeds in this task, the dominant political coalition usually defeats the challenge through the twin means of repression and concession. The ruling alliance does not simply crush political challenges directly through the coercive power of the state but makes concessions that reconsolidate its political power without undermining its basic interests.19 A revolutionary movement becomes significant politically only when it is able to win the loyalty of allies, articulating a broader insurgency.20

In this second, political sense, there are no revolutionary movements in the United States today. The country has seen moments of large-scale popular mobilization, and some of these recent movements, such as the mass mobilizations for immigrant rights in 2006, have been “creative,” seizing the imagination of large segments of the population. One would think that the 2008 housing collapse, economic recession, subsequent insolvency of local governments, and bailout of the wealthy institutions and individuals most responsible for creating the financial crisis at the expense of almost everyone else provide fertile conditions for a broad insurgent politics. But as of this writing, it is an open question whether a broad, let alone revolutionary, challenge will develop. Recent movements have not sustained insurgency, advanced a revolutionary vision, or articulated a broader alliance to challenge established political power.

In our assessment, for the years 1968 to 1970, the Black Panther Party was revolutionary in Gramsci’s sense, both ideationally and politically. Ideationally, young Panthers dedicated their lives to the revolution because—as part of a global revolution against empire—they believed that they could transform the world. The revolutionary vision of the Party became the moral center of the Panther community.

(401)

While minimovements with revolutionary ideologies abound, there is no politically significant revolutionary movement in the United States today because no cadre of revolutionaries has developed ideas and practices that credibly advance the interests of a large segment of the people. Members of revolutionary sects can hawk their newspapers and proselytize on college campuses until they are blue in the face, but they remain politically irrelevant. Islamist insurgencies, with deep political roots abroad, are politically significant, but they lack potential constituencies in the United States.

No revolutionary movement of political significance will gain a foothold in the United States again until a group of revolutionaries develops insurgent practices that seize the political imagination of a large segment of the people and successively draw support from other constituencies, creating a broad insurgent alliance that is difficult to repress or appease. This has not happened in the United States since the heyday of the Black Panther Party and may not happen again for a very long time.

Notes from CastroChavism: Organized Crime in the Americas

CastroChavism: Organized Crime in the Americas by José Carlos Sánchez Berzaín, Bolivia’s former Minister of Defense and the author of XXI Century Dictatorship in Bolivia.

(16)

[Venezuela and Bolivia] are dictatorships that reach[ed] power through elections and through successive coups that liquidate democracy.

(17)

The two Americas make up an axis of confrontation: on the one hand, perpetual and arbitrary control of power, branded dictatorship, with ideology as a pretext; versus democracy, with respect for human rights, alternation in power, accountability and free elections, declaratively protected by the inter-American system, enshrined – among others – in the Inter-American Democratic Charter.

From 1959 to 1999, the Cuban dictatorship is “Castroism.” From 1999 onwards it is “Castrochavismo,” led by Hugo Chavez until his death.

(18-19)

It began as progressive leftist populism, and was successively called ALBA Movement (Bolivarian Alliance for the Peoples of Our America); the Bolivarian Movement; and after a few years Socialism of the 21st Century.

Castro receives a new source of financing for his conspiratorial and criminal actions with Chavez’s surrender not only of Venezuela’s money and oil but, as we have learned today, of the entire country. This allowed the dictator to reactivate genuine Castroism under the mantle of the Bolivarian Movement, or ALBA, and disguise it as democracy. With Venezuela’s money he started conspiracies, which led to the fall and overthrow of democratic leaders. The first one occurs in Argentina, with the fall of President De La Rua. The second happens in Ecuador, and it is Jamil Mahuad who pays the price. The third one is the overthrow of President Gonzalo Sanchez de Lozada in Bolivia. The fourth is in Ecuador, with the fall of President Lucio Gutierrez. They also overthrew the OAS Secretary General, Miguel Angel Rodriguez, who had just been elected. A false case of corruption was planted in Costa Rica, where Rodriguez ends up being illegally detained, making room for Insulza to arrive.

The nascent CastroChavista organization expands with Lula da Silva taking power in Brazil with the Workers Party, whose government he used to strengthen the extraordinary flow of economic resources with transnational corruption. A sample of such crimes includes the infamous case of “Lava Jato – Odebrecht.”

The destruction of democracy becomes noticeable in the exiles, who had been purely Cuban and are now regional – waves of Venezuelans, Bolivians, Nicaraguans, Ecuadorians, Argentines, and Central Americans.

(21)

An electoral dictatorship is a political regime that by force or violence concentrates all power in a person or in a group or organization that represses human rights and fundamental freedoms and uses illegitimate elections, neither free nor fair, with fraud and corruption, to perpetuate itself indefinitely in power.

(23)

Cuba, Venezuela, Bolivia and Nicaragua… are criminal entities that must be separated from politics and must be treated as transnational organized crime from within the framework of the Palermo Convention and other norms, without the immunities or privileges inherent to the heads of State or government.

(24)

Castrochavista dictatorships are in crisis, but are not defeated. They are called out as regimes that violate human rights, that have no rule of law, where there is no division or independence of public powers, and that are narco States and creators of poverty. To remain in power, they apply the uniform strategy of “resisting at all costs, destabilizing democracies, politicizing their situation and negotiating.”

The first element of this strategy, “retention of all power at all costs,” can be seen in Nicaragua, Venezuela and Cuba, where they imprison and torture political prisoners. The president of the Human Rights Assembly in Bolivia has just reported 131 uninvestigated deaths from killings committed by the government, and more than 100 political prisoners.

(25)

The second element of their strategy is to “destabilize democracies,” for which they conspire against those who accuse them and against the governments that defend democracy. The destabilization ranges from false news and character assassination of leaders whom they designate as right wing to criminal acts of terrorism, kidnappings, and narco-guerrillas.

The third element of their strategy is to “politicize their situation and their criminal acts.” When the dictatorships in Cuba, Venezuela, Bolivia, and Nicaragua improperly imprison a citizen, when they torture them, when they even kill them, they call it defense of the revolution.

These four dictatorships are narco-states and, to justify themselves, they argue that “drug trafficking is an instrument of struggle for the liberation of the peoples.”

Evo Morales said at the United Nations in 2016 that “the fight against drug trafficking is an instrument of imperialism to oppress the peoples.”

Jesus Santrich fled from Colombia to Venezuela, proclaiming that he had been persecuted by the right. The bosses of the ELN narco-guerrillas of Colombia are under protection in Cuba.

The third element of Castrochavismo, politicizing their crimes, serves to ensure that when they kill a person they say they are defending the revolution; when they torture, they say they defend the popular process of liberation of the peoples, and so on.

The fourth element of the Castrochavista strategy is to “negotiate.” They negotiate to gain time, demoralize the adversary, collect debts from their allies, or extort money from third states to gain their support or at least neutralize them.

From these four elements, they survive.

(27)

Politics is based on respect for the “rule of law,” which is simply that “no one is above the law”; on the temporality of public service; and on accountability and public responsibility, where one takes on an adversary. But organized crime has no adversaries, it has enemies, and the difference between an adversary and an enemy is that the former is defeated or convinced, whereas the latter is eliminated. This explains the number of crimes that Castrochavismo commits in the Americas.

(30)

The peoples of Cuba, Venezuela, Nicaragua and Bolivia are fighting against the dictatorships that oppress them, but theirs is not a local or national oppressor: they face a transnational enemy, united by the objective of retaining power indefinitely as the best mechanism for impunity.

Castrochavismo, as a transnational organized crime structure, is a very powerful usurper with a lot of money, large criminal armed forces, control of many media outlets, and many mercenaries of various specialties at its service, which has put the peoples it oppresses in a true and extreme “defenseless condition.”

As long as there are dictatorships there will be no peace or security in the Americas.

(33)

It is vital to differentiate and separate that which is “politics,” meaning an activity of public service, from that which is “organized crime” and “delinquency.” Politics with its ideologies, pragmatisms, imperfections, errors, crises, even tainted by corruption, is one thing; but a very different thing is politics and power under the control of associated criminals who turn politics into their main instrument for the commission of crimes, the setting up of criminal organizations, and the seizure and indefinite control of power with criminal objectives and for the sake of their own impunity.

Politics is legal, meaning that it is conducted in spheres considered to be “just, allowed, according to justice and reason,” because it is of order and public service….

(35)

Castro, Maduro, Ortega and Morales are not politicians, nor are they merely corrupt governments: they are organized delinquency that holds political power and plans to keep holding it indefinitely. They can no longer be treated as politicians, and least of all as state dignitaries.

(42)

CastroChavismo is the label for Fidel Castro and Hugo Chavez’s undertaking that, using the subversive capabilities of the Cuban dictatorial regime and Venezuelan oil, has resurrected, commencing in 1999, the expansion of Castroist, antidemocratic communism with a heavy anti-imperialist discourse.

(46-47)

What is happening in Venezuela today is the result of almost two decades of progressive and sustained abuses of freedom and democracy: violation of human rights, persecutions, electoral fraud, corruption, violation of the sovereignty of the country, theft of government and private resources, killing of the freedom of the press, elimination of the rule of law, disappearance of the separation and independence of the branches of government, control of the opposition, imprisonment and forced exile of political opponents, narcotics trafficking, and all that may be necessary to make Venezuela a Castroist-model dictatorial “narco-state with a humanitarian crisis.”

The international democratic community has understood that for the sake of their own interests and security, it must preclude Venezuela from turning into the second consolidated dictatorship of the Americas, and prevent the dictatorships of Bolivia and Nicaragua from following that path. Liberating Venezuela is a strategic necessity.

(49)

In Bolivia, the top and perpetual leader of the coca-leaf harvesters, Evo Morales, is the head of the Plurinational State of Bolivia, wherein “by decree of law” he has increased the lawful cultivation of coca by 83%, from 12,000 hectares to 22,000 hectares, and has increased the cultivation of unlawful coca from the 3,000 hectares existing in 2003 (the year they toppled President Sanchez de Lozada) to the current 50,000 hectares.

Evo Morales’ drug czar Colonel Rene Sanabria was arrested by the DEA for cocaine trafficking and has been sentenced by US judges to 15 years in jail.

(55)

In democracy, corruption is not the rule but the flaw, it is the violation of normalcy, “the misuse of government power to get illegitimate advantages, generally in a secret or private way”, it is “the consistent practice of utilizing the functions and means of the government for the benefit – whether this benefit be financial or otherwise – of those who are involved with it.” In a democracy, there are investigations, prosecution, and punishment with accountability, there is separation and independence of the branches of government, the Rule of Law exists, and there is freedom of the press. On the other hand, however, in dictatorships, corruption is the means, the cause, and the end objective of getting to, and indefinitely remaining in power.

(61)

The Venezuelan dictatorship is the Gordian knot that keeps the Venezuelan people from recovering their freedom and democracy, and one that at the same time sustains the dictatorships in the Americas, specifically in Cuba, Bolivia, and Nicaragua, as a system of Transnational Organized Crime. They are a real danger not only for this region but for the whole world.

(62)

The hub of narcotics trafficking that Venezuela has been turned into, with the Colombian FARC’s cocaine and with Evo Morales’ coca growers’ unions from Bolivia, has penetrated the entire region and impacts the whole world with serious consequences in security and the wellbeing of people.

(65)

A well-orchestrated international system of public relations; lobbyists who work for the Cuba-Venezuela-Bolivia-Nicaragua group; the subjection of PetroCaribe countries through bribes of Venezuelan oil; its penetration of international organizations; its control over the national news media and its creation of and influence over international media; its collusion with important magnates and businessmen; and its repetitive anti-U.S. discourse, along with its opening to Russia, China, North Korea, and Iran, have all been factors that have allowed the existence of Ortega’s criminal dictatorship in Nicaragua.

(67)

Cuba with the Castros, Venezuela with Chavez and Maduro, Bolivia with Evo Morales, Nicaragua with Daniel Ortega, and Ecuador with Rafael Correa replaced freedom of the press with a system of information control based on prior censorship, self-censorship, and financial and judicial repression. They appropriated private news media, through transfers under duress, seizures, intervention, and violence, in order to place them at their service; they have supported and created state media, and founded and funded regional media; they manage official propaganda as a mechanism for extortion; they use taxes as a means of pressure and retribution; they extort companies over the assignment of propaganda; and they start and sustain “assassination of reputation” campaigns against journalists and owners of news media.

(74)

Crimes committed by the 21st Century Socialist regimes range from persecution with the aim of physical torture and killing; judicial trials with false accusations heard by “despicable judges”; the application of the regimes’ pseudo-laws, or “despicable laws,” violating human rights; and restrictions on freedom of speech or the freedom to work, be employed, or practice a profession; to assassinating an individual’s reputation to turn the wrongly accused into an undesirable, subjecting the person to a condition of defenselessness, depriving him or her of a job, and much more.

(75)

…they’ve replaced politics with criminal practices in order to totally and indefinitely control political power.

Extortion is a key feature of the Castrochavista methods and further proof of the Transnational Organized Crime nature of these dictatorships.

Extortion is “the pressure exerted on someone – through threats – to compel them to act in a certain way and obtain a monetary or other type of benefit.” The legal definition of extortion includes “the intimidation or serious threat that restricts a person to do, tolerate the doing or not doing of something for the purpose of deriving a benefit or undue advantage for one’s self or someone else.”

The Castrochavista constitutions have established “the law’s retroactivity” and have suppressed or limited parliamentary immunities in order to keep extorting members of the opposition.

Judges, prosecutors and even attorneys are extorted. Several cases corroborate this, such as the jailing, abuse, and torture of Venezuela’s Judge Maria Lourdes Afiuni; the prosecutors and judges fired and afterwards prosecuted in the case of Magistrate Gualberto Cusi in Bolivia, as well as the jailing of defense attorneys; the persecution and exile of magistrates from Venezuela’s Supreme Justice Tribunal, “the legitimate one in exile,” and of Attorney General Ortega; the assassination of Prosecutor Alberto Nisman in Kirchner’s Argentina; and dozens more.

The imprisonments, tortures, humiliations, assassinations, and exiles started as extortions and are dictatorial warning operations meant to ensure the submission of the system the dictatorship manipulates, “setting precedents” of its decision to use extortion to obtain benefits for the dictator and his Organized Crime group that calls itself a government. Benefits range from financial gain, cover-up, and impunity to indefinite tenure in government.

(78)

Cornered by crises, the dictatorships of Cuba, Venezuela, Nicaragua and Bolivia have gone into attack mode, and the meeting of the Sao Paulo Forum in Havana was the stage from which to launch their new phase of destabilization.

Dictatorships attack with forced migration, the generation of internal violence, and destabilization.

(79)

All of the region’s democratic countries are under the pressure of forced migration caused by Venezuela’s dictatorship, which has converted one of its shameful problems into a problem for the whole region. Democracies must now deal with problems in their security, unemployment, provision of health care, handling of massive numbers of people in transit, identification issues, budgets, and human rights, all because the Castrochavista criminal regime of Nicolas Maduro has transformed its crimes and their effects into a political weapon. It is very similar to the so-called “Mariel exodus” promoted by dictator Fidel Castro against the United States, but many times greater and for an indefinite period.

(81)

The Sao Paulo Forum in 1990 was the dictatorial reaction to the crash of Soviet Communism; it gathered for the first time with the objective of addressing the international scenario following the fall of the Berlin Wall and confronting “neo-liberal” policies. It is the tool with which the Castroist dictatorship formulated the “multiplication of the confrontation axes” strategy, going beyond class struggle to the fight against any element that may be useful to destabilize democratic governments.

The 21st Century in the Americas is the history of the Castro-Chavista buildup…

The worn-out cliché of “liberation of the peoples,” as an “anti-imperialist” argument and slogan for massive demonstrations, has instead become “the people’s oppression,” corroborated by the quantity of massacres, assassinations, tortures, political prisoners, and exiles, and by the daily life the people must endure.

(83)

It has become necessary for the Americas’ leaders and politicians to clearly differentiate themselves from the criminals who hold power in Cuba, Venezuela, Nicaragua and Bolivia. Not doing so implies assuming the risk of being taken for accomplices and concealers.

(89)

The price for Pablo Iglesias’s and PODEMOS’s backing of the PSOE’s investiture was the sustainment and funding of the dictatorships for which Iglesias works, as is now amply evident in Spain’s new foreign policy aimed at sustaining the CastroChavista dictatorships of Cuba, Venezuela, Nicaragua and Bolivia.

(92)

Is the use of force the only option for making the dictatorships leave?

Cuba, Venezuela, Nicaragua and Bolivia are under regimes that, after every possible simulation and misrepresentation to pass themselves off as a revolution, a democracy, or populist, leftist, socialist governments, are nothing but Organized Crime organizations that hold power by force.

Alleging the self-determination of the nation-state while oppressing the citizens and violating their human rights is but another deception of the CastroChavista dictatorships.

(95)

The parameters to qualify a regime as a dictatorship, an Organized Crime dictatorship, and a criminal government are set out by existing universal and regional standards, such as the United Nations Charter, the Universal Declaration of Human Rights, the Charter of Bogota, the Pact of San Jose, the European Union Treaty, the Palermo Conventions, the Interamerican Democratic Charter, and many more.

(102)

The dictatorial nature of a regime is proven by its violation of all the essential components of democracy: from the supplanting of the democratic order and the manipulation of constituent referendums, consultations, and elections, down to the imposition of a fraudulent legal framework, a “legal” scheme that is nowadays the legal system in existence in Venezuela, Nicaragua, Bolivia and Correa’s Ecuador.

(105)

Why Abstention?

To run as a candidate in a dictatorship is to dress up a tyrant as a democrat.

(106)

For elections to be free and fair, “conditions of democracy” must exist, that is, the minimum presence of the essential components of democracy that will enable all citizens to be either voters or candidates; guarantee equity of options to the candidates, transparency in the process, and impartiality of the electoral authorities; offer guarantees of recourse before impartial judges, with freedom of association, freedom of expression, and freedom of the press; and provide guarantees against electoral fraud, timeliness, and more.

(109)

In 1961, Cuba’s dictatorship birthed Nicaragua’s National Liberation Army (ELN), afterwards converted into the Sandinista National Liberation Front (FSLN), and, in Guatemala, the 13th of November Revolutionary Movement (MR13N) and the Revolutionary Armed Forces (FAR). In 1962, it birthed Venezuela’s National Liberation Armed Forces (FALN) and the Colombian Self-Defense Forces, turned into the Southern Block and afterwards into the Revolutionary Armed Forces of Colombia (FARC). In Peru, it birthed the National Liberation Army (ELN) and the Leftist Revolutionary Movement (MIR); in Bolivia, the National Liberation Army (ELN); in Uruguay, the Tupamaros, an urban guerrilla; in Argentina, the Montoneros and, in the ’70s, the People’s Revolutionary Army (ERP); in Brazil, the Revolutionary Movement 8 (MR8); and many more. The Castroist movement spared no country from being stained with the blood of guerrillas.

(120)

The OAS has two charters: the Charter of Bogota, which birthed the organization, and the Interamerican Democratic Charter, with which democracy was institutionalized.

Article 1 of the IDC mandates that “the peoples of the Americas have a right to democracy and their governments have an obligation to promote and defend it.”

(123)

The Palermo Convention’s protocol on human trafficking should be applied to the case of the Cuban physicians.

(134)

What dictator Nicolas Maduro and his regime insist on presenting as “elections” is a chain of serious crimes to misrepresent popular sovereignty, sustain the narco-state, and guarantee himself impunity. The “organized crime group” that holds power has committed, and is willing to commit, whatever crime may be necessary to continue receiving the criminal benefits that have taken Venezuela to its current ongoing crisis.

(188)

Fear is an essential component of dictatorships; this is why they kill the “Rule of Law” and supplant it with the “Rule of the State,” with despicable laws that enable them to persecute, imprison, and dishonor citizens and wrest away their property.

The foreign enemy is useful for blaming the United States for all the disastrous results of the organized crime that holds political power, as the Castros have done for so many years and as Maduro, Morales, and their thugs do now.

Cuba, Venezuela, Nicaragua, and Bolivia claim the “right” conspires, pays politicians, and wants them toppled, attributing to themselves the position of being “leftist,” socialist, and communist, when in reality they are criminal “fascists” whose sole ideology and objective is the total and indefinite control of power along with their own illicit enrichment.

(193)

Odebrecht is one of the Brazilian companies implicated in the Forum of Sao Paulo’s criminal network, implemented by Lula da Silva with the dictators Fidel Castro and Hugo Chavez through payments of millions of dollars in bribes.

(202)

Hugo Chavez allied himself with Fidel Castro in 1999, when Cuba was agonizing through its “special period” as a parasite state that, since the breakdown of the Soviet Union, had no way to survive. With Venezuela’s oil, Chavez salvaged the only dictatorship there was at that time in the Americas and kick-started the recreation of Castroist expansionism under the labels of the Bolivarian Movement, ALBA, and 21st Century Socialism, today known as “CastroChavismo.”


Notes from Joint Publication 3-13 Information Operations


PREFACE

  1. Scope

This publication provides joint doctrine for the planning, preparation, execution, and assessment of information operations across the range of military operations.

Overview

The ability to share information in near real time, anonymously and/or securely, is a capability that is both an asset and a potential vulnerability to us, our allies, and our adversaries.

The nation’s state and non-state adversaries are equally aware of the significance of this new technology, and will use information-related capabilities (IRCs) to gain advantages in the information environment, just as they would use more traditional military technologies to gain advantages in other operational environments. As the strategic environment continues to change, so does information operations (IO). Based on these changes, the Secretary of Defense now characterizes IO as the integrated employment, during military operations, of IRCs in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision making of adversaries and potential adversaries while protecting our own.

 The Information Environment

The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions, which continuously interact with individuals, organizations, and systems. These dimensions are known as physical, informational, and cognitive. The physical dimension is composed of command and control systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. The informational dimension specifies where and how information is collected, processed, stored, disseminated, and protected. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information.
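The three interrelated dimensions above can be sketched as a small data model. This is an illustrative toy only: the `Dimension` class and the example entries are assumptions drawn loosely from the descriptions in the text, not anything defined in JP 3-13 itself.

```python
from dataclasses import dataclass, field

@dataclass
class Dimension:
    """One dimension of the information environment (illustrative model)."""
    name: str
    description: str
    examples: list = field(default_factory=list)

# Toy instantiation of the three dimensions as the notes describe them.
INFORMATION_ENVIRONMENT = [
    Dimension(
        "physical",
        "C2 systems, key decision makers, and supporting infrastructure",
        ["C2 facilities", "newspapers", "microwave towers", "laptops"],
    ),
    Dimension(
        "informational",
        "where and how information is collected, processed, stored, "
        "disseminated, and protected",
    ),
    Dimension(
        "cognitive",
        "the minds of those who transmit, receive, and respond to or act "
        "on information",
    ),
]

for d in INFORMATION_ENVIRONMENT:
    print(d.name)
```

A model like this is only a mnemonic for the taxonomy; the doctrine stresses that the three dimensions continuously interact rather than sit in separate bins.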

Information Operations

Information Operations and the Information-Influence Relational Framework

The relational framework describes the application, integration, and synchronization of IRCs to influence, disrupt, corrupt, or usurp the decision making of TAs to create a desired effect to support achievement of an objective.

Relationships and Integration

IO is not about ownership of individual capabilities but rather the use of those capabilities as force multipliers to create a desired effect. There are many military capabilities that contribute to IO and should be taken into consideration during the planning process. These include: strategic communication, joint interagency coordination group, public affairs, civil-military operations, cyberspace operations (CO), information assurance, space operations, military information support operations (MISO), intelligence, military deception, operations security, special technical operations, joint electromagnetic spectrum operations, and key leader engagement.

Legal Considerations

IO planners deal with legal considerations of an extremely diverse and complex nature. For this reason, joint IO planners should consult their staff judge advocate or legal advisor for expert advice.

Multinational Information Operations

Other Nations and Information Operations

Multinational partners recognize a variety of information concepts and possess sophisticated doctrine, procedures, and capabilities. Given these potentially diverse perspectives regarding IO, it is essential for the multinational force commander (MNFC) to resolve potential conflicts as soon as possible. It is vital to integrate multinational partners into IO planning as early as possible to gain agreement on an integrated and achievable IO strategy.

Information Operations Assessment  

Information Operations assessment is iterative, continuously repeating rounds of analysis within the operations cycle in order to measure the progress of information related capabilities toward achieving objectives.  

The Information Operations Assessment Process

Assessment of IO is a key component of the commander’s decision cycle, helping to determine the results of tactical actions in the context of overall mission objectives and providing potential recommendations for refinement of future plans. Assessments also provide opportunities to identify IRC shortfalls, changes in parameters and/or conditions in the information environment, which may cause unintended effects in the employment of IRCs, and resource issues that may be impeding joint IO effectiveness.

A solution to these assessment requirements is the eight-step assessment process.

  • Focused characterization of the information environment
  • Integrate information operations assessment into plans and develop the assessment plan
  • Develop information operations assessment information requirements and collection plans
  • Build/modify information operations assessment baseline
  • Coordinate and execute information operations and collection activities
  • Monitor and collect focused information environment data for information operations assessment
  • Analyze information operations assessment data
  • Report information operations assessment results and recommendations
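Since the publication describes assessment as "continuously repeating rounds of analysis within the operations cycle," the eight steps above can be sketched as an iterative loop. The loop structure and the paraphrased step names below are illustrative assumptions, not part of JP 3-13.

```python
from itertools import cycle

# Paraphrased from the eight-step list; order matters, and after step 8
# the cycle returns to step 1 for the next round of analysis.
ASSESSMENT_STEPS = [
    "Characterize the information environment",
    "Integrate assessment into plans / develop the assessment plan",
    "Develop information requirements and collection plans",
    "Build or modify the assessment baseline",
    "Coordinate and execute IO and collection activities",
    "Monitor and collect focused information environment data",
    "Analyze assessment data",
    "Report results and recommendations",
]

def run_rounds(n_rounds: int):
    """Yield (round, step) pairs, repeating the eight steps each round."""
    steps = cycle(ASSESSMENT_STEPS)
    for rnd in range(1, n_rounds + 1):
        for _ in ASSESSMENT_STEPS:
            yield rnd, next(steps)

# Two full rounds -> 16 step executions, with round 2 restarting at step 1.
executed = list(run_rounds(2))
```

The point of the sketch is the shape of the process: each round ends in reporting, and the results feed the next round's characterization and plan refinement.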

Measures and Indicators

Measures of performance (MOPs) and measures of effectiveness (MOEs) help accomplish the assessment process by qualifying or quantifying the intangible attributes of the information environment. The MOP for any one action should be whether or not the TA was exposed to the IO action or activity. MOEs should be observable, to aid with collection; quantifiable, to increase objectivity; precise, to ensure accuracy; and correlated with the progress of the operation, to attain timeliness. Indicators are crucial because they aid the joint IO planner in informing MOEs and should be identifiable across the center of gravity critical factors.
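The four qualities the text asks of an MOE (observable, quantifiable, precise, and correlated with the operation's progress) can be treated as a simple checklist. The sketch below is a hypothetical illustration: the `CandidateMOE` fields and the example statements are invented for demonstration and are not drawn from doctrine.

```python
from dataclasses import dataclass

@dataclass
class CandidateMOE:
    """A proposed measure of effectiveness, screened against four qualities."""
    statement: str
    observable: bool    # can collection assets actually see it?
    quantifiable: bool  # can it be counted, to increase objectivity?
    precise: bool       # is it specific enough to ensure accuracy?
    correlated: bool    # does it track the progress of the operation?

def moe_shortfalls(moe: CandidateMOE) -> list:
    """Return the qualities the candidate MOE fails to meet (empty if none)."""
    checks = {
        "observable": moe.observable,
        "quantifiable": moe.quantifiable,
        "precise": moe.precise,
        "correlated": moe.correlated,
    }
    return [name for name, ok in checks.items() if not ok]

# A vaguely worded candidate fails the first two checks.
weak = CandidateMOE(
    "TA attitude improves",
    observable=False, quantifiable=False, precise=True, correlated=True,
)
```

Here `moe_shortfalls(weak)` flags "observable" and "quantifiable", mirroring how a planner would send a vague MOE back for rewording before building a collection plan around it.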

CHAPTER I

OVERVIEW

“The most hateful human misfortune is for a wise man to have no influence.”

Greek Historian Herodotus, 484-425 BC

INTRODUCTION

  1. The growth of communication networks has decreased the number of isolated populations in the world. The emergence of advanced wired and wireless information technology facilitates global communication by corporations, violent extremist organizations, and individuals. The ability to share information in near real time, anonymously and/or securely, is a capability that is both an asset and a potential vulnerability to us, our allies, and our adversaries. Information is a powerful tool to influence, disrupt, corrupt, or usurp an adversary’s ability to make and share decisions.
  2. The instruments of national power (diplomatic, informational, military, and economic) provide leaders in the United States with the means and ways of dealing with crises around the world. Employing these means in the information environment requires the ability to securely transmit, receive, store, and process information in near real time. The nation’s state and non-state adversaries are equally aware of the significance of this new technology, and will use information-related capabilities (IRCs) to gain advantages in the information environment, just as they would use more traditional military technologies to gain advantages in other operational environments. These realities have transformed the information environment into a battlefield, which poses both a threat to the Department of Defense (DOD), combatant commands (CCMDs), and Service components and serves as a force multiplier when leveraged effectively.
  3. As the strategic environment continues to change, so does IO. Based on these changes, the Secretary of Defense now characterizes IO as the integrated employment, during military operations, of IRCs in concert with other lines of operation to influence, disrupt, corrupt, or usurp the decision making of adversaries and potential adversaries while protecting our own.

This revised characterization has led to a reassessment of how essential the information environment can be and how IRCs can be effectively integrated into joint operations to create effects and operationally exploitable conditions necessary for achieving the joint force commander’s (JFC’s) objectives.

  1. The Information Environment

The information environment is the aggregate of individuals, organizations, and systems that collect, process, disseminate, or act on information. This environment consists of three interrelated dimensions which continuously interact with individuals, organizations, and systems. These dimensions are the physical, informational, and cognitive (see Figure I-1).

The Physical Dimension. The physical dimension is composed of command and control (C2) systems, key decision makers, and supporting infrastructure that enable individuals and organizations to create effects. It is the dimension where physical platforms and the communications networks that connect them reside. The physical dimension includes, but is not limited to, human beings, C2 facilities, newspapers, books, microwave towers, computer processing units, laptops, smart phones, tablet computers, or any other objects that are subject to empirical measurement. The physical dimension is not confined solely to military or even nation-based systems and processes; it is a diffused network connected across national, economic, and geographical boundaries.

The Informational Dimension. The informational dimension encompasses where and how information is collected, processed, stored, disseminated, and protected. It is the dimension where the C2 of military forces is exercised and where the commander’s intent is conveyed. Actions in this dimension affect the content and flow of information.

The Cognitive Dimension. The cognitive dimension encompasses the minds of those who transmit, receive, and respond to or act on information. It refers to individuals’ or groups’ information processing, perception, judgment, and decision making. These elements are influenced by many factors, to include individual and cultural beliefs, norms, vulnerabilities, motivations, emotions, experiences, morals, education, mental health, identities, and ideologies. Defining these influencing factors in a given environment is critical for understanding how to best influence the mind of the decision maker and create the desired effects. As such, this dimension constitutes the most important component of the information environment.

The Information and Influence Relational Framework and the Application of Information-Related Capabilities

IRCs are the tools, techniques, or activities that affect any of the three dimensions of the information environment. They affect the ability of the target audience (TA) to collect, process, or disseminate information before and after decisions are made. The TA is the individual or group selected for influence.

The change in the TA conditions, capabilities, situational awareness, and in some cases, the inability to make and share timely and informed decisions, contributes to the desired end state. Actions or inactions in the physical dimension can be assessed for future operations. The employment of IRCs is complemented by a set of capabilities such as operations security (OPSEC), information assurance (IA), counter-deception, physical security, electronic warfare (EW) support, and electronic protection. These capabilities are critical to enabling and protecting the JFC’s C2 of forces. Key components in this process are:

(1) Information. Data in context to inform or provide meaning for action.

(2) Data. Interpreted signals that can reduce uncertainty or equivocality.

(3) Knowledge. Information in context to enable direct action. Knowledge can be further broken down into the following:

(a) Explicit Knowledge. Knowledge that has been articulated through words, diagrams, formulas, computer programs, and like means.

(b) Tacit Knowledge. Knowledge that cannot be or has not been articulated through words, diagrams, formulas, computer programs, and like means.

(4) Influence. The act or power to produce a desired outcome or end on a TA.

(5) Means. The resources available to a national government, non-nation actor, or adversary in pursuit of its end(s). These resources include, but are not limited to, public- and private-sector enterprise assets or entities.

(6) Ways. How means can be applied in order to achieve a desired end(s). They can be characterized as persuasive or coercive.

(7) Information-Related Capabilities. Tools, techniques, or activities using data, information, or knowledge to create effects and operationally desirable conditions within the physical, informational, and cognitive dimensions of the information environment.

(8) Target Audience. An individual or group selected for influence.

(9) Ends. A consequence of the way of applying IRCs.

(10) Using the framework, the physical, informational, and cognitive dimensions of the information environment provide access points for influencing TAs (see Figure I-2).
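For readers who prefer a concrete schema, the relationships among these components can be sketched as a small data model. This is purely illustrative: the class and field names below are hypothetical, not doctrinal terms.

```python
from dataclasses import dataclass
from typing import List

# Illustrative data model of the framework components above. All class and
# field names are hypothetical; the doctrine defines the concepts, not this schema.

@dataclass
class TargetAudience:
    name: str  # (8) an individual or group selected for influence

@dataclass
class Engagement:
    target: TargetAudience  # (8) who is to be influenced
    means: List[str]        # (5) resources available in pursuit of the end(s)
    ways: List[str]         # (6) how means are applied: persuasive or coercive
    ircs: List[str]         # (7) tools/techniques creating effects in the three dimensions
    ends: str               # (9) the consequence of applying the IRCs

example = Engagement(
    target=TargetAudience("adversary decision makers"),
    means=["diplomatic action", "military forces"],
    ways=["persuasive communications"],
    ircs=["military information support operations"],
    ends="adversary decision making disrupted",
)
print(example.ends)
```

The sketch only encodes the vocabulary; it carries no doctrinal logic.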

The purpose of integrating the employment of IRCs is to influence a TA. While the behavior of individuals and groups, as human social entities, is principally governed by rules, norms, and beliefs, the behavior of systems resides principally within the physical and informational dimensions and is governed only by rules. Under this construct, rules, norms, and beliefs are:

(1) Rules. Explicit regulative processes such as policies, laws, inspection routines, or incentives. Rules function as a coercive regulator of behavior and are dependent upon the imposing entity’s ability to enforce them.

(2) Norms. Regulative mechanisms accepted by the social collective. Norms are enforced by normative mechanisms within the organization and are not strictly dependent upon law or regulation.

(3) Beliefs. The collective perception of fundamental truths governing behavior. The adherence to accepted and shared beliefs by members of a social system will likely persist and be difficult to change over time. Strong beliefs about determinant factors (i.e., security, survival, or honor) are likely to cause a social entity or group to accept rules and norms.
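The distinction among these three regulators of behavior can be tabulated in a short, purely illustrative snippet; the labels are paraphrases of the definitions above, not doctrinal language.

```python
# Compact restatement of the three behavior regulators defined above.
# Keys and labels are illustrative paraphrases, not doctrinal terms.
REGULATORS = {
    "rules":   {"accepted_by": "imposed by an entity",
                "enforced_by": "coercion (policies, laws, incentives)",
                "durability": "holds only while enforceable"},
    "norms":   {"accepted_by": "the social collective",
                "enforced_by": "normative mechanisms, not law or regulation",
                "durability": "persists with group membership"},
    "beliefs": {"accepted_by": "shared perception of fundamental truths",
                "enforced_by": "internal conviction",
                "durability": "most persistent; hardest to change"},
}

for name, attrs in REGULATORS.items():
    print(f"{name}: enforced by {attrs['enforced_by']}")
```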

  1. The first step in achieving an end(s) through use of the information-influence relational framework is to identify the TA. Once the TA has been identified, it will be necessary to develop an understanding of how that TA perceives its environment, to include analysis of TA rules, norms, and beliefs. Once this analysis is complete, the application of means available to achieve the desired end(s) must be evaluated (see Figure I-3). Such means may include (but are not limited to) diplomatic, informational, military, or economic actions, as well as academic, commercial, religious, or ethnic pronouncements. When the specific means or combinations of means are determined, the next step is to identify the specific ways to create a desired effect.
  2. Influencing the behavior of TAs requires producing effects in ways that modify rules, norms, or beliefs. Effects can be created by means (e.g., governmental, academic, cultural, and private enterprise) using specific ways (i.e., IRCs) to affect how the TAs collect, process, perceive, disseminate, and act (or do not act) on information (see Figure I-4).
  3. Upon deciding to persuade or coerce a TA, the commander must then determine what IRCs to apply to individuals, organizations, or systems in order to produce a desired effect(s) (see Figure I-5). As stated, IRCs can be capabilities, techniques, or activities, but they do not necessarily have to be technology-based. Additionally, IRCs may come from a wide variety of sources. Therefore, in IO, it is not the ownership of the capabilities and techniques that is important, but rather their integrated application in order to achieve a JFC’s end state.
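The sequence just described (identify the TA, analyze its rules, norms, and beliefs, evaluate means, then select ways) can be sketched as a simple pipeline. Every function, value, and decision rule here is a hypothetical placeholder used only to make the ordering concrete.

```python
# A minimal sketch of the planning sequence described above. All names and
# the toy decision rule are hypothetical, for illustration only.

def analyze_target_audience(ta_name):
    # Steps 1-2: identify the TA and develop an understanding of how it
    # perceives its environment (rules, norms, beliefs).
    return {"ta": ta_name, "rules": [], "norms": [], "beliefs": ["honor", "survival"]}

def evaluate_means(analysis):
    # Step 3: evaluate the means available to achieve the desired end(s).
    return ["diplomatic", "informational", "military", "economic"]

def select_ways(means, analysis):
    # Step 4: choose ways; persuasive where strong beliefs offer leverage,
    # coercive otherwise (a toy rule, not doctrine).
    return ["persuasive communications"] if analysis["beliefs"] else ["coercive force"]

def plan_influence(ta_name):
    analysis = analyze_target_audience(ta_name)
    means = evaluate_means(analysis)
    ways = select_ways(means, analysis)
    return {"analysis": analysis, "means": means, "ways": ways}

plan = plan_influence("adversary leadership")
print(plan["ways"])
```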


CHAPTER II

INFORMATION OPERATIONS

“There is a war out there, old friend, a world war. And it’s not about who’s got the most bullets; it’s about who controls the information.”

Cosmo, in the 1992 film “Sneakers”

  1. Introduction

This chapter addresses how the integrating and coordinating functions of IO help achieve a JFC’s objectives.

  2. Terminology

Because IO takes place in all phases of military operations, in concert with other lines of operation and lines of effort, some clarification of these terms and their relationship to IO is in order.

(1) Military Operations. The US military participates in a wide range of military operations, as illustrated in Figure II-1. Phase 0 (Shape) and Phase I (Deter) may include defense support of civil authorities, peace operations, noncombatant evacuation, foreign humanitarian assistance, and nation-building assistance, which fall outside the realm of major combat operations represented by phases II through V.

(2) Lines of Operation and Lines of Effort. IO should support multiple lines of operation and at times may be the supported line of operation. IO may also support numerous lines of effort when positional references to an enemy or adversary have little relevance, such as in counterinsurgency or stability operations.

IO integrates IRCs (ways) with other lines of operation and lines of effort (means) to create a desired effect on an adversary or potential adversary to achieve an objective (ends).

  3. Information Operations and the Information-Influence Relational Framework

Influence is at the heart of diplomacy and military operations, with integration of IRCs providing a powerful means for influence. The relational framework describes the application, integration, and synchronization of IRCs to influence, disrupt, corrupt, or usurp the decision making of TAs to create a desired effect to support achievement of an objective.

  4. The Information Operations Staff and Information Operations Cell

Within the joint community, the integration of IRCs to achieve the commander’s objectives is managed through an IO staff or IO cell. JFCs may establish an IO staff to provide command-level oversight and collaborate with all staff directorates and supporting organizations on all aspects of IO.

APPLICATION OF INFORMATION-RELATED CAPABILITIES TO THE INFORMATION AND INFLUENCE RELATIONAL FRAMEWORK

This example provides insight as to how information-related capabilities (IRCs) can be used to create lethal and nonlethal effects to support achievement of the objectives to reach the desired end state. The integration and synchronization of these IRCs require participation from not just information operations planners, but also organizations across multiple lines of operation and lines of effort. They may also include input from or coordination with national ministries, provincial governments, local authorities, and cultural and religious leaders to create the desired effect.

Situation: An adversary is attempting to overthrow the government of Country X using both lethal and nonlethal means to demonstrate to the citizens that the government is not fit to support and protect its people.

Joint Force Commander’s Objective: Protect government of Country X from being overthrown.

Desired Effects:

  1. Citizens have confidence in ability of government to support and protect its people.
  2. Adversary is unable to overthrow government of Country X.

Potential Target Audience(s):

  1. Adversary leadership (adversary).
  2. Country X indigenous population (friendly, neutral, and potential adversary).

Potential Means available to achieve the commander’s objective:

  •  Diplomatic action (e.g., demarche, public diplomacy)
  •  Informational assets (e.g., strategic communication, media)
  •  Military forces (e.g., security force assistance, combat operations, military information support operations, public affairs, military deception)
  •  Economic resources (e.g., sanctions against the adversary, infusion of capital to Country X for nation building)
  •  Commercial, cultural, or other private enterprise assets

Potential Ways (persuasive communications or coercive force):

  •  Targeted radio and television broadcasts
  •  Blockaded adversary ports
  •  Government/commercially operated Web sites
  •  Key leadership engagement

Regardless of the means and ways employed by the players within the information environment, the reality is that the strategic advantage rests with whoever applies their means and ways most efficiently.
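The Country X vignette can be restated as structured data to show how objective, effects, TAs, means, and ways map onto the framework. The snippet is illustrative only; the dictionary keys are hypothetical labels, not doctrinal terms.

```python
# The Country X example above, restated as data (illustrative only).
scenario = {
    "objective": "Protect government of Country X from being overthrown",
    "desired_effects": [
        "Citizens have confidence in ability of government to support and protect its people",
        "Adversary is unable to overthrow government of Country X",
    ],
    "target_audiences": ["adversary leadership", "Country X indigenous population"],
    "means": [
        "diplomatic action", "informational assets", "military forces",
        "economic resources", "commercial/cultural/private enterprise assets",
    ],
    "ways": [
        "targeted radio and television broadcasts", "blockaded adversary ports",
        "government/commercially operated Web sites", "key leader engagement",
    ],
}
print(len(scenario["means"]), len(scenario["ways"]))
```

Note that each way draws on one or more means: a blockade on military forces, broadcasts on informational assets, and so on.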

  a. IO Staff

(1) In order to provide planning support, the IO staff includes IO planners and a complement of IRC specialists to facilitate seamless integration of IRCs to support the JFC’s concept of operations (CONOPS).

(2) IRC specialists can include, but are not limited to, personnel from the EW, cyberspace operations (CO), military information support operations (MISO), civil-military operations (CMO), military deception (MILDEC), intelligence, and public affairs (PA) communities. They provide valuable linkage between the planners within an IO staff and those communities that provide IRCs to facilitate seamless integration with the JFC’s objectives.

  b. IO Cell

(1) The IO cell integrates and synchronizes IRCs to achieve national- or combatant commander (CCDR)-level objectives.

  5. Relationships and Integration

IO is not about ownership of individual capabilities but rather the use of those capabilities as force multipliers to create a desired effect.

(1) Strategic Communication (SC)

(a) The SC process consists of focused United States Government (USG) efforts to create, strengthen, or preserve conditions favorable for the advancement of national interests, policies, and objectives by understanding and engaging key audiences through the use of coordinated programs, plans, themes, messages, and products synchronized with the actions of all instruments of national power.

(b) The elements and organizations that implement strategic guidance, both internal and external to the joint force, must not only understand and be aware of the joint force’s IO objectives; they must also work closely with members of the interagency community, in order to ensure full coordination and synchronization of USG efforts.

(2) Joint Interagency Coordination Group. Interagency coordination occurs between DOD and other USG departments and agencies, as well as with private-sector entities, nongovernmental organizations, and critical infrastructure activities, for the purpose of accomplishing national objectives. Many of these objectives require the combined and coordinated use of the diplomatic, informational, military, and economic instruments of national power.

(3) Public Affairs

(a) PA comprises public information, command information, and public engagement activities directed toward both the internal and external publics with interest in DOD. External publics include allies, neutrals, adversaries, and potential adversaries. When addressing external publics, opportunities for overlap exist between PA and IO.

(b) By maintaining situational awareness between IO and PA, the potential for information conflict can be minimized. The IO cell provides an excellent place to coordinate IO and PA activities that may affect the adversary or potential adversary. Because there will be situations, such as counterpropaganda, in which the TAs for both IO and PA converge, close cooperation and deconfliction are extremely important. …final coordination should occur within the joint planning group (JPG).

(4) Civil-Military Operations

(a) CMO is another area that can directly affect and be affected by IO. CMO activities establish, maintain, influence, or exploit relations between military forces, governmental and nongovernmental civilian organizations and authorities, and the civilian populace in a friendly, neutral, or hostile operational area in order to achieve US objectives. These activities may occur prior to, during, or subsequent to other military operations.

(b) Although CMO and IO have much in common, they are distinct disciplines.

The TA for much of IO is the adversary; however, the effects of IRCs often reach supporting friendly and neutral populations as well. In a similar vein, CMO seeks to affect friendly and neutral populations, although adversary and potential adversary audiences may also be affected. This being the case, effective integration of CMO with other IRCs is important, and a CMO representative on the IO staff is critical. The regular presence of a CMO representative in the IO cell will greatly promote this level of coordination.

(5) Cyberspace Operations

(a) Cyberspace is a global domain within the information environment consisting of the interdependent network of information technology infrastructures and resident data, including the Internet, telecommunications networks, computer systems, and embedded processors and controllers.

(b) As a process that integrates the employment of IRCs across multiple lines of effort and lines of operation to affect an adversary or potential adversary decision maker, IO can target either the medium (a component within the physical dimension such as a microwave tower) or the message itself (e.g., an encrypted message in the informational dimension). CO is one of several IRCs available to the commander.

(6) Information Assurance. IA is necessary to gain and maintain information superiority. The JFC relies on IA to protect infrastructure to ensure its availability, to position information for influence, and for delivery of information to the adversary.

(7) Space Operations. Space capabilities are a significant force multiplier when integrated with joint operations. Space operations support IO through the space force enhancement functions of intelligence, surveillance, and reconnaissance; missile warning; environmental monitoring; satellite communications; and spacebased positioning, navigation, and timing.

(8) Military Information Support Operations. MISO are planned operations to convey selected information and indicators to foreign audiences to influence their emotions, motives, objective reasoning, and ultimately the behavior of foreign governments, organizations, groups, and individuals. MISO focuses on the cognitive dimension of the information environment where its TA includes not just potential and actual adversaries, but also friendly and neutral populations.

MISO are applicable to a wide range of military operations such as stability operations, security cooperation, maritime interdiction, noncombatant evacuation, foreign humanitarian operations, counterdrug, force protection, and counter-trafficking.

(9) Intelligence

(a) Intelligence is a vital military capability that supports IO. The utilization of information operations intelligence integration (IOII) greatly facilitates understanding the interrelationship between the physical, informational, and cognitive dimensions of the information environment.

(b) By providing population-centric socio-cultural intelligence and physical network lay downs, including the information transmitted via those networks, intelligence can greatly assist IRC planners and IO integrators in determining the proper effect to elicit the specific response desired. Intelligence is an integrated process, fusing collection, analysis, and dissemination to provide products that will expose a TA’s potential capabilities or vulnerabilities. Intelligence uses a variety of technical and nontechnical tools to assess the information environment, thereby providing insight into a TA.

(c) A joint intelligence support element (JISE) may establish an IO support office (see Figure II-5) to provide IOII. This is due to the long lead time needed to establish information baseline characterizations, provide timely intelligence during IO planning and execution efforts, and to properly assess effects in the information environment.

(10) Military Deception

(a) One of the oldest IRCs used to influence an adversary’s perceptions is MILDEC. MILDEC can be characterized as actions executed to deliberately mislead adversary decision makers, creating conditions that will contribute to the accomplishment of the friendly mission. While MILDEC requires a thorough knowledge of an adversary or potential adversary’s decision-making processes, it is important to remember that it is focused on desired behavior. It is not enough to simply mislead the adversary or potential adversary; MILDEC is designed to cause them to behave in a manner advantageous to the friendly mission, such as misallocating resources, attacking at a time and place advantageous to friendly forces, or avoiding action altogether.

(b) When integrated with other IRCs, MILDEC can be a particularly powerful way to affect the decision-making processes of an adversary or potential adversary.

(11) Operations Security

(a) OPSEC is a standardized process designed to meet operational needs by mitigating risks associated with specific vulnerabilities in order to deny adversaries critical information and observable indicators. OPSEC identifies critical information and actions attendant to friendly military operations to deny observables to adversary intelligence systems.

(b) The effective application, coordination, and synchronization of other IRCs are critical components in the execution of OPSEC. Because a specified IO task is “to protect our own” decision makers, OPSEC planners require complete situational awareness, regarding friendly activities to facilitate the safeguarding of critical information. This kind of situational awareness exists within the IO cell, where a wide range of planners work in concert to integrate and synchronize their actions to achieve a common IO objective.

(12) Special Technical Operations (STO). IO need to be deconflicted and synchronized with STO. Detailed information related to STO and its contribution to IO can be obtained from the STO planners at CCMD or Service component headquarters. IO and STO are separate, but have potential crossover, and for this reason an STO planner is a valuable member of the IO cell.

(14) Key Leader Engagement (KLE)

(a) KLEs are deliberate, planned engagements between US military leaders and the leaders of foreign audiences that have defined objectives, such as a change in policy or supporting the JFC’s objectives. These engagements can be used to shape and influence foreign leaders at the strategic, operational, and tactical levels, and may also be directed toward specific groups such as religious leaders, academic leaders, and tribal leaders; e.g., to solidify trust and confidence in US forces.

(b) KLEs may be applicable to a wide range of operations such as stability operations, counterinsurgency operations, noncombatant evacuation operations, security cooperation activities, and humanitarian operations. When fully integrated with other IRCs into operations, KLEs can effectively shape and influence the leaders of foreign audiences.

The capabilities discussed above do not constitute a comprehensive list of all possible capabilities that can contribute to IO. This means that individual capability ownership will be highly diversified. The ability to access these capabilities will be directly related to how well commanders understand and appreciate the importance of IO.

CHAPTER III

AUTHORITIES, RESPONSIBILITIES, AND LEGAL CONSIDERATIONS

Introduction

This chapter describes the JFC’s authority for the conduct of IO; delineates various roles and responsibilities established in DODD 3600.01, Information Operations; and addresses legal considerations in the planning and execution of IO.

Authorities

The authority to employ IRCs is rooted foremost in Title 10, United States Code (USC). While Title 10, USC, does not specify IO separately, it does provide the legal basis for the roles, missions, and organization of DOD and the Services.

Responsibilities

Under Secretary of Defense for Policy (USD[P]). The USD(P) oversees and manages DOD-level IO programs and activities. In this capacity, USD(P) manages guidance publications (e.g., DODD 3600.01) and all IO policy on behalf of the Secretary of Defense. The office of the USD(P) coordinates IO for all DOD components in the interagency process.

Under Secretary of Defense for Intelligence (USD[I]). USD(I) develops, coordinates, and oversees the implementation of DOD intelligence policy, programs, and guidance for intelligence activities supporting IO.

Joint Staff. In accordance with the Secretary of Defense memorandum on Strategic Communication and Information Operations in the DOD, dated 25 January 2011, the Joint Staff is assigned the responsibility for joint IO proponency. CJCS responsibilities for IO are both general (such as establishing doctrine, as well as providing advice, and recommendations to the President and Secretary of Defense) and specific (e.g., joint IO policy).

Joint Information Operations Warfare Center (JIOWC). The JIOWC is a CJCS-controlled activity reporting to the operations directorate of a joint staff (J-3) via J-39 DDGO.

JIOWC’s specific organizational responsibilities include:

(1) Provide IO subject matter experts and advice to the Joint Staff and the CCMDs.
(2) Develop and maintain a joint IO assessment framework.
(3) Assist the Joint IO Proponent in advocating for and integrating CCMD IO requirements.
(4) Upon the direction of the Joint IO Proponent, provide support in coordination and integration of DOD IRCs for JFCs, Service component commanders, and DOD agencies.

Combatant Commands. The Unified Command Plan provides guidance to CCDRs, assigning them missions and force structure, as well as geographic or functional areas of responsibility. In addition to these responsibilities, the Commander, United States Special Operations Command, is also responsible for integrating and coordinating MISO.

Functional Component Commands. Like Service component commands, functional component commands have authority over forces or in the case of IO, IRCs, as delegated by the establishing authority (normally a CCDR or JFC). Functional component commands may be tasked to plan and execute IO as an integrated part of joint operations.

Legal Considerations

Introduction. US military activities in the information environment, as with all military operations, are conducted as a matter of law and policy. Joint IO will always involve legal and policy questions, requiring not just local review, but often national-level coordination and approval. The US Constitution, laws, regulations, and policy, and international law set boundaries for all military activity, to include IO.

Legal Considerations. IO planners deal with legal considerations of an extremely diverse and complex nature. Legal interpretations can occasionally differ, given the complexity of technologies involved, the significance of legal interests potentially affected, and the challenges inherent for law and policy to keep pace with the technological changes and implementation of IRCs.

Implications Beyond the JFC. Bilateral agreements to which the US is a signatory may have provisions concerning the conduct of IO as well as IRCs when they are used in support of IO. IO planners at all levels should consider the following broad areas within each planning iteration in consultation with the appropriate legal advisor:

(1) Could the execution of a particular IRC be considered a hostile act by an adversary or potential adversary?

(2) Do any non-US laws concerning national security, privacy, or information exchange, criminal and/or civil issues apply?

(3) What are the international treaties, agreements, or customary laws recognized by an adversary or potential adversary that apply to IRCs?

(4) How is the joint force interacting with or being supported by US intelligence organizations and other interagency entities?

CHAPTER IV

INTEGRATING INFORMATION-RELATED CAPABILITIES INTO THE JOINT OPERATION PLANNING PROCESS

“Support planning is conducted in parallel with other planning and encompasses such essential factors as IO [information operations], SC [strategic communication]…”

Joint Publication 5-0, Joint Operation Planning, 11 August 2011

Introduction

The IO cell chief is responsible to the JFC for integrating IRCs into the joint operation planning process (JOPP). Thus, the IO staff is responsible for coordinating and synchronizing IRCs to accomplish the JFC’s objectives. Coordinated IO are essential in employing the elements of operational design. Conversely, uncoordinated IO efforts can compromise, complicate, negate, or pose risks to the successful accomplishment of JFC and USG objectives. Additionally, when uncoordinated, other USG and/or multinational information activities may complicate, defeat, or render DOD IO ineffective. For this reason, the JFC’s objectives require early and detailed IO staff planning, coordination, and deconfliction between USG and partner-nation efforts within the AOR in order to effectively synchronize and integrate IRCs.

Information Operations Planning

The IO cell and the JPG. The IO cell chief ensures joint IO planners adequately represent the IO cell within the JPG and other JFC planning processes. Doing so will help ensure that IRCs are integrated with all planning efforts. Joint IO planners should be integrated with the joint force planning, directing, monitoring, and assessing process.

IO Planning Considerations

(1) IO planners seek to create an operational advantage that results in coordinated effects that directly support the JFC’s objectives. IRCs can be executed throughout the operational environment, but often directly impact the content and flow of information.

(2) IO planning begins at the earliest stage of JOPP and must be an integral part of, not an addition to, the overall planning effort. IRCs can be used in all phases of a campaign or operation, but their effective employment during the shape and deter phases can have a significant impact on remaining phases.

(3) The use of IO to achieve the JFC’s objectives requires the ability to integrate IRCs and interagency support into a comprehensive and coherent strategy that supports the JFC’s overall mission objectives. The GCC’s theater security cooperation guidance contained in the theater campaign plan (TCP) serves as an excellent platform to embed specific long-term information objectives during phase 0 operations.

(4) Many IRCs require long lead times for development of the joint intelligence preparation of the operational environment (JIPOE) and for release authority. The intelligence directorate of a joint staff (J-2) identifies intelligence and information gaps, shortfalls, and priorities as part of the JIPOE process in the early stages of JOPP. Concurrently, the IO cell must identify similar gaps in its understanding of the information environment to determine whether it has sufficient information to successfully plan IO. Where shortfalls exist, the IO cell may need to work with the J-2 to submit requests for information (RFIs) to fill gaps that cannot be filled internally.

(5) There may be times where the JFC may lack sufficient detailed intelligence data and intelligence staff personnel to provide IOII. Similarly, a JFC’s staff may lack dedicated resources to provide support. For this reason, it is imperative the IO cell take a proactive approach to intelligence support. The IO cell must also review and provide input to the commander’s critical information requirements (CCIRs), especially priority intelligence requirements (PIRs) and information requirements.

The joint intelligence staff, using PIRs as a basis, develops the most critical information requirements, also known as essential elements of information (EEIs). In the course of mission analysis, the intelligence analyst identifies the intelligence required to answer the CCIRs, and intelligence staffs develop more specific questions known as information requirements. EEIs pertinent to the IO staff may include target information specifics, such as messages and counter-messages, adversary propaganda, and the responses of individuals, groups, and organizations to adversary propaganda.
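The requirement hierarchy just described (CCIRs contain PIRs, which intelligence staffs refine into information requirements, the most critical of which are EEIs) can be pictured as a nested structure. The contents below are invented examples, not actual requirements.

```python
# The requirement hierarchy described above, nested for illustration only.
# All question and EEI text is hypothetical.
ccirs = {
    "priority_intelligence_requirements": [
        {
            "pir": "How is the adversary shaping local opinion?",
            "information_requirements": [  # the most critical of these are EEIs
                {"eei": "adversary propaganda themes and counter-messages"},
                {"eei": "responses of groups and organizations to adversary propaganda"},
            ],
        }
    ]
}
first_pir = ccirs["priority_intelligence_requirements"][0]
print(first_pir["pir"])
```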

IO and the Joint Operation Planning Process

Throughout JOPP, IRCs are integrated with the JFC’s overall CONOPS.

(1) Planning Initiation. Integration of IRCs into joint operations should begin at step 1, planning initiation. Key IO staff actions during this step include the following:

(a) Review key strategic documents.

(b) Monitor the situation, receive initial planning guidance, and review staff estimates from applicable operation plans (OPLANs) and concept plans (CONPLANs).

(c) Alert subordinate and supporting commanders of potential tasking with regard to IO planning support.

(d) Gauge initial scope of IO required for the operation.

(e) Identify location, standard operating procedures, and battle rhythm of other staff organizations that require integration and divide coordination responsibilities among the IO staff.

(f) Identify and request appropriate authorities.

(g) Begin identifying information required for mission analysis and course of action (COA) development.

(h) Identify IO planning support requirements (including staff augmentation, support products, and services) and issue requests for support according to procedures established locally and by various supporting organizations.

(i) Validate, initiate, and revise PIRs and RFIs, keeping in mind the long lead times associated with satisfying IO requirements.

(j) Provide IO input and recommendations to COAs, and provide resolutions to conflicts that exist with other plans or lines of operation.

(k) In coordination with the targeting cell, submit potential candidate targets to the JFC or component joint targeting coordination board (JTCB). For vetting, validation, and deconfliction, follow local targeting cell procedures, because these three separate processes do not always occur at the JTCB.

(l) Ensure IO staff and IO cell members participate in all JFC or component planning and targeting sessions and JTCBs.

(2) Mission Analysis. The purpose of step 2, mission analysis, is to understand the problem and purpose of an operation and issue the appropriate guidance to drive the remaining steps of the planning process. The end state of mission analysis is a clearly defined mission and a thorough staff assessment of the joint operation. Mission analysis orients the JFC and staff on the problem and develops a common understanding before moving forward in the planning process.

As IO impacts each element of the operational environment, it is important for the IO staff and IO cell during mission analysis to remain focused on the information environment. Key IO staff actions during mission analysis are:

(a) Assist the J-3 and J-2 in the identification of friendly and adversary center(s) of gravity and critical factors (e.g., critical capabilities, critical requirements, and critical vulnerabilities).
(b) Identify relevant aspects of the physical, informational, and cognitive dimensions (whether friendly, neutral, adversary, or potential adversary) of the information environment.
(c) Identify specified, implied, and essential tasks.
(d) Identify facts, assumptions, constraints, and restraints affecting IO planning.
(e) Analyze IRCs available to support IO and authorities required for their employment.
(f) Develop and refine proposed PIRs, RFIs, and CCIRs.
(g) Conduct initial IO-related risk assessment.
(h) Develop IO mission statement.
(i) Begin developing the initial IO staff estimate. This estimate forms the basis for the IO cell chief’s recommendation to the JFC, regarding which COA it can best support.
(j) Conduct initial force allocation review.
(k) Identify and develop potential targets and coordinate with the targeting cell no later than the end of target development. Compile and maintain target folders in the Modernized Integrated Database. Coordinate with the J-2 and targeting cell for participation and representation in vetting, validation, and targeting boards.
(l) Develop mission success criteria.

(3) COA Development. Output from mission analysis, such as initial staff estimates, mission and tasks, and JFC planning guidance are used in step 3, COA development. Key IO staff actions during this step include the following:

(a) Identify desired and undesired effects that support or degrade JFC’s information objectives.

(b) Develop measures of effectiveness (MOEs) and measures of effectiveness indicators (MOEIs).

(c) Develop tasks for recommendation to the J-3.

(d) Recommend IRCs that may be used to accomplish supporting information tasks for each COA.

(e) Analyze required supplemental rules of engagement (ROE).

(f) Identify additional operational risks and controls/mitigation.

(g) Develop the IO CONOPS narrative/sketch.
(h) Synchronize IRCs in time, space, and purpose.

(i) Continue update/development of the IO staff estimate.

(j) Prepare inputs to the COA brief.

(k) Provide inputs to the target folder.

(4) COA Analysis and War Gaming. Based upon time available, the JFC staff should war game each tentative COA against adversary COAs identified through the JIPOE process. Key IO staff and IO cell actions during this step include the following:

(a) Analyze each COA from an IO functional perspective.

(b) Reveal key decision points.

(c) Recommend task adjustments to IRCs as appropriate.

(d) Provide IO-focused data for use in a synchronization matrix or other decision-making tool.

(e) Identify IO portions of branches and sequels.

(f) Identify possible high-value targets related to IO.

(g) Submit PIRs and recommend CCIRs for IO.

(h) Revise staff estimate.

(i) Assess risk.

(5) COA Comparison. Step 5, COA comparison, starts with all staff elements analyzing and evaluating the advantages and disadvantages of each COA from their respective viewpoints. Key IO staff and IO cell actions during this step include the following:

(a) Compare each COA based on mission and tasks.

(b) Compare each COA in relation to IO requirements versus available IRCs.

(c) Prioritize COAs from an IO perspective.

(d) Revise the IO staff estimate. During execution, the IO cell should maintain an estimate and update as required.

(6) COA Approval. Just like other elements of the JFC’s staff, during step 6, COA approval, the IO staff provides the JFC with a clear recommendation of how IO can best contribute to mission accomplishment in the COA(s) being briefed. It is vital that this recommendation be presented in a clear, concise manner that can not only be quickly grasped by the JFC, but can also be easily understood by peer, subordinate, and higher-headquarters command and staff elements. Failure to foster such an understanding of the IO contribution to the approved COA can lead to poor execution and/or coordination of IRCs in subsequent operations.

(7) Plan or Order Development. Once a COA is selected and approved, the IO staff develops appendix 3 (Information Operations) to annex C (Operations) of the operation order (OPORD) or OPLAN. Because IRC integration is documented elsewhere in the OPORD or OPLAN, it is imperative that the IO staff conduct effective staff coordination within the JPG during step 7, plan or order development. Key staff actions during this step include the following:

(a) Refine tasks from the approved COA.

(b) Identify shortfalls of IRCs and recommend solutions.

(c) Facilitate development of supporting plans by keeping the responsible organizations informed of relevant details (as access restrictions allow) throughout the planning process.

(d) Advise the supported commander on IO issues and concerns during the supporting plan review and approval process.

(e) Participate in time-phased force and deployment data refinement to ensure IO supports the OPLAN or CONPLAN.

(f) Assist in the development of OPLAN or CONPLAN appendix 6 (IO Intelligence Integration) to annex B (Intelligence).

  1. Plan Refinement. The information environment is continuously changing and it is critical for IO planners to remain in constant interaction with the JPG to provide updates to OPLANs or CONPLANs.
  2. Assessment of IO. Assessment is integrated into all phases of the planning and execution cycle, and consists of assessment activities associated with tasks, events, or programs in support of joint military operations. Assessment seeks to analyze and inform on the performance and effectiveness of activities. The intent is to provide relevant feedback to decision makers in order to modify activities that achieve desired results. Assessment can also provide the programmatic community with relevant information that informs on return on investment and operational effectiveness of DOD IRCs. It is important to note that integration of assessment into planning is the first step of the assessment process. Planning for assessment is part of broader operational planning, rather than an afterthought. Iterative in nature, assessment supports the Adaptive Planning and Execution process, and provides feedback to operations and ultimately, IO enterprise programmatics.
  3. Relationship Between Measures of Performance (MOPs) and MOEs. Effectiveness assessment is one of the greatest challenges facing a staff. Despite the continuing evolution of joint and Service doctrine and the refinement of supporting tactics, techniques, and procedures, assessing the effectiveness of IRCs continues to be challenging.

(1) MOPs are criteria used to assess friendly accomplishment of tasks and mission execution.

Examples of Measures of Performance Feedback

  • Numbers of populace listening to military information support operations (MISO) broadcasts
  • Percentage of adversary command and control facilities attacked
  • Number of civil-military operations projects initiated/number of projects completed
  • Human intelligence reports on the number of MISO broadcasts during Commando Solo missions
  • Intelligence assessments (human intelligence, etc.)
  • Open source intelligence
  • Internet (newsgroups, etc.)
  • Military information support operations and civil-military operations teams (face-to-face activities)
  • Contact with the public
  • Press inquiries and comments
  • Department of State polls, reports, and surveys
  • Open Source Center
  • Nongovernmental organizations, intergovernmental organizations, international organizations, and host nation organizations
  • Foreign policy advisor meetings
  • Commercial polls
  • Operational analysis cells

(2) In contrast to MOPs, MOEs are criteria used to assess changes in system behavior, capability, or operational environment that are tied to measuring the attainment of an end state, achievement of an objective, or creation of an effect. Ultimately, MOEs determine whether actions being executed are creating desired effects, thereby accomplishing the JFC’s information objectives and end state.

(3) MOEs and MOPs are both crafted and refined throughout JOPP. In developing MOEs and/or MOPs, the following general criteria should be considered:

(a) Ends Related. MOEs and/or MOPs should directly relate to the objectives and desired tasks required to accomplish effects and/or performance.

(b) Measurable. MOEs should be specific, measurable, and observable. Effectiveness or performance is measured either quantitatively (e.g., counting the number of attacks) or qualitatively (e.g., subjectively evaluating the level of confidence in the security forces). In the case of MOEs, a baseline measurement must be established prior to execution, against which to measure system changes.

(c) Timely. A time for required feedback should be clearly stated for each MOE and/or MOP and a plan made to report within that specified time period.

(d) Properly Resourced. The collection, analysis, and reporting of MOE or MOP data requires personnel, financial, and materiel resources. The IO staff or IO cell should ensure that these resource requirements are built into IO planning during COA development and closely coordinated with the J-2 collection manager to ensure the means to assess these measures are in place.
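The “measurable” criterion above turns on establishing a baseline before execution and then measuring change against it. A minimal sketch of that idea follows; the indicator (weekly attack counts) and its values are hypothetical illustrations, not drawn from doctrine:

```python
# Hypothetical sketch: measuring change in a quantitative MOE against a
# baseline established prior to execution. Values are illustrative only.

def percent_change(baseline: float, current: float) -> float:
    """Relative change of a measurement against its pre-execution baseline."""
    if baseline == 0:
        raise ValueError("a non-zero baseline must be established before execution")
    return (current - baseline) / baseline * 100.0

# Baseline established prior to execution (e.g., weekly count of attacks).
baseline_attacks = 40
# Measurement collected during execution.
current_attacks = 28

change = percent_change(baseline_attacks, current_attacks)
print(f"Attacks changed by {change:.1f}% relative to baseline")  # -30.0%
```

Without the baseline measurement, the “current” figure alone cannot show whether the system is changing, which is why the criterion requires the baseline before execution begins.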

(4) Measure of Effectiveness Indicators. An MOEI is a unit, location, or event observed or measured that can be used to assess an MOE. MOEIs are often used to add quantitative data points to qualitative MOEs and can assist an IO staff or IO cell in answering a question related to a qualitative MOE. The identification of MOEIs aids the IO staff or IO cell in determining an MOE, and MOEIs can be identified from across the information environment. MOEIs can be independently weighted for their contribution to an MOE and should be based on separate criteria. Hundreds of MOEIs may be needed for a large-scale contingency. Examples of how effects can be translated into MOEIs include the following:

(a) Effect: Increase in the city populace’s participation in civil governance.

MOE: (Qualitative) Metropolitan citizens display increased support for the democratic leadership elected on 1 July. (What activity trends show progress toward or away from the desired behavior?)

MOEI:

  1. A decrease in the number of anti-government rallies/demonstrations in a city since 1 July (this indicator might be weighted heavily at 60 percent of this MOE’s total assessment based on rallies/demonstrations observed.)
  2. An increase in the percentage of positive new government media stories since 1 July (this indicator might be weighted less heavily at 20 percent of this MOE’s total assessment based on media monitoring.)
  3. An increase in the number of citizens participating in democratic functions since 1 July (this indicator might be weighted at 20 percent of this MOE’s total assessment based on government data/criteria like voter registration, city council meeting attendance, and business license registration.)
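The 60/20/20 weighting above can be sketched as a simple weighted rollup, assuming each indicator has first been scored on a common 0.0–1.0 scale by the assessment cell. The indicator scores below are hypothetical placeholders, not doctrinal values:

```python
# Hypothetical sketch of rolling up weighted MOEI scores into a single MOE
# assessment. Weights follow the 60/20/20 example above; the 0.0-1.0 scores
# are illustrative stand-ins for the assessment cell's normalized inputs.

def moe_score(indicators: list[tuple[str, float, float]]) -> float:
    """Weighted sum over (name, weight, score) tuples; weights must sum to 1.0."""
    total_weight = sum(weight for _, weight, _ in indicators)
    if abs(total_weight - 1.0) > 1e-9:
        raise ValueError(f"indicator weights must sum to 1.0, got {total_weight}")
    return sum(weight * score for _, weight, score in indicators)

indicators = [
    ("Decrease in anti-government rallies since 1 July", 0.60, 0.5),
    ("Increase in positive government media stories",     0.20, 0.8),
    ("Increase in democratic participation",              0.20, 0.7),
]

print(f"MOE assessment: {moe_score(indicators):.2f}")  # 0.60
```

Keeping the weights explicit alongside each indicator makes the rollup auditable: a reviewer can see exactly how much any one indicator (e.g., rally counts) drives the overall MOE assessment.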

(b) Effect: Insurgent leadership does not orchestrate terrorist acts in the western region.

MOE: (Qualitative) Decrease in popular support toward extremists and insurgents.

MOEI:

  1. An increase in the number of insurgents turned in/identified since 1 October.
  2. The percentage of blogs supportive of the local officials.

Information Operations Phasing and Synchronization

Given its contributions to the GCC’s TCP, joint IO is clearly expected to play a major role in all phases of joint operations. This means that the GCC’s IO staff and IO cell must account for logical transitions from phase to phase, as joint IO moves from the main effort to a supporting effort. Regardless of which operational phase is underway, it is always important for the IO staff and IO cell to determine what legal authorities the JFC requires to execute IRCs during the subsequent operations phase.

  1. Phase 0–Shape. Joint IO planning should focus on supporting the TCP to deter adversaries and potential adversaries from posing significant threats to US objectives. Joint IO planners should access the JIACG through the IO cell or staff. Joint IO planning during this phase will need to prioritize and integrate efforts and resources to support activities throughout the interagency. Due to competing resources and the potential lack of available IRCs, executing joint IO during phase 0 can be challenging. For this reason, the IO staff and IO cell will need to consider how their IO activities fit in as part of a whole-of-government approach to effectively shape the information environment to achieve the CCDR’s information objectives.
  2. Phase I–Deter. During this phase, joint IO is often the main effort for the CCMD. Planning will likely emphasize the JFC’s flexible deterrent options (FDOs), complementing US public diplomacy efforts, in order to influence a potential foreign adversary decision maker to make decisions favorable to US goals and objectives. Joint IO planning for this phase is especially complicated because the FDO typically must have a chance to work, while still allowing for a smooth transition to phase II and more intense levels of conflict, if it does not. Because the transition from phase I to phase II may not allow enough time for application of IRCs to create the desired effects on an adversary or potential adversary, the phase change may be abrupt.
  3. Phase II-Seize Initiative. In phase II, joint IO is supporting multiple lines of operation. Joint IO planning during phase II should focus on maximizing synchronized IRC effects to support the JFC’s objectives and the component missions while preparing the transition to the next phase.
  4. Phase III–Dominate. Joint IO can be a supporting and/or a supported line of operation during phase III. Joint IO planning during phase III will involve developing an information advantage across multiple lines of operation to execute the mission.
  5. Phase IV–Stabilize. CMO, or even IO, is likely the supported line of operation during phase IV. Joint IO planning during this phase will need to be flexible enough to simultaneously support CMO and combat operations. As the US military and interagency information activity capacity matures and eventually slows, the JFC should assist the host- nation security forces and government information capacity to resume and expand, as necessary. As host nation information capacity improves, the JFC should be able to refocus joint IO efforts to other mission areas. Expanding host-nation capacity through military and interagency efforts will help foster success in the next phase.
  6. Phase V-Enable Civil Authority. During phase V, joint IO planning focuses on supporting the redeployment of US forces, as well as providing continued support to stability operations. IO planning during phase V should account for interagency and country team efforts to resume the lead mission for information within the host nation territory. The IO staff and cell can anticipate the possibility of long-term US commercial and government support to the former adversary’s economic and political interests to continue through the completion of this phase.

CHAPTER V

MULTINATIONAL INFORMATION OPERATIONS

Introduction

Joint doctrine for multinational operations, including command and operations in a multinational environment, is described in JP 3-16, Multinational Operations. The purpose of this chapter is to highlight specific doctrinal components of IO in a multinational environment (see Figure V-1). In doing so, this chapter will build upon those aspects of IO addressed in JP 3-16.

Other Nations and Information Operations

Multinational partners recognize a variety of information concepts and possess sophisticated doctrine, procedures, and capabilities. Given these potentially diverse perspectives regarding IO, it is essential for the multinational force commander (MNFC) to resolve potential conflicts as soon as possible. It is vital to integrate multinational partners into IO planning as early as possible to gain agreement on an integrated and achievable IO strategy.

Initial requirements for coordinating, synchronizing, and when required integrating other nations into the US IO plan include:

(1) Clarifying all multinational partner information objectives.
(2) Understanding all multinational partner employment of IRCs.
(3) Establishing IO deconfliction procedures to avoid conflicting messages.
(4) Identifying multinational force (MNF) vulnerabilities as soon as possible.
(5) Developing a strategy to mitigate MNF IO vulnerabilities.
(6) Identifying MNF IRCs.

Regardless of the maturity of each partner’s IO strategy, doctrine, capabilities, tactics, techniques, or procedures, every multinational partner can contribute to MNF IO by providing regional expertise to assist in planning and conducting IO. Multinational partners have developed unique approaches to IO that are tailored for specific targets in ways that may not be employed by the US. Such contributions complement US IO expertise and IRCs, potentially enhancing the quality of both the planning and execution of multinational IO.

Multinational Information Operations Considerations

Military operation planning processes, particularly for IO, whether JOPP-based or based on established or agreed-to multinational planning processes, include an understanding of multinational partner(s):

(1) Cultural values and institutions.
(2) Interests and concerns.
(3) Moral and ethical values.
(4) ROE and legal constraints.

(5) Challenges in multilingual planning for the employment of IRCs.

(6) IO doctrine, techniques, and procedures.

Sharing of Information with Multinational Partners

(1) Each nation has various IRCs to provide in support of multinational objectives. These nations are obliged to protect information that they cannot share across the MNF. However, to plan thoroughly, all nations must be willing to share appropriate information to accomplish the assigned mission.

(2) Information sharing arrangements in formal alliances, to include US participation in United Nations missions, are worked out as part of alliance protocols. Information sharing arrangements in ad hoc multinational operations where coalitions are working together on a short-notice mission must be created during the establishment of the coalition.

(3) Using National Disclosure Policy (NDP) 1, National Policy and Procedures for the Disclosure of Classified Military Information to Foreign Governments and International Organizations, and Department of Defense Instruction (DODI) O-3600.02, Information Operations (IO) Security Classification Guidance (U), as guidance, the senior US commander in a multinational operation must provide guidelines to the US-designated disclosure representative on information sharing and the release of classified information or capabilities to the MNF.

(4) Information concerning US persons may only be collected, retained, or disseminated in accordance with law and regulation. Applicable provisions include: the Privacy Act, Title 5, USC, Section 552a; DODD 5200.27, Acquisition of Information Concerning Persons and Organizations not Affiliated with the Department of Defense; Executive Order 12333, United States Intelligence Activities; and DOD 5240.1-R, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons.

Planning, Integration, and Command and Control of Information Operations in Multinational Operations

The role of IO in multinational operations is the prerogative of the MNFC. The mission of the MNF determines the role of IO in each specific operation.

Representation of key multinational partners in the MNF IO cell allows their expertise and capabilities to be utilized, and the IO portion of the plan to be better coordinated and more timely.

While some multinational partners may not have developed an IO concept or fielded IRCs, it is important that they fully appreciate the importance of information in achieving the MNFC’s objectives. For this reason, every effort should be made to provide basic-level IO training to multinational partners serving on the MNF IO staff.

MNF headquarters staffs could be organized differently; however, as a general rule, an information operations coordination board (IOCB) or similar organization may exist (see Figure V-2).

A wide range of MNF headquarters staff organizations should participate in IOCB deliberations to ensure their input and subject matter expertise can be applied to satisfy a requirement in order to achieve MNFC’s objectives.

Besides the coordination activities highlighted above, the IOCB should also participate in appropriate joint operations planning groups (JOPGs) and should take part in early discussions, including mission analysis. An IO presence on the JOPG is essential, as it is the IOCB which provides input to the overall estimate process in close coordination with other members of the MNF headquarters staff.

Multinational Organization for Information Operations Planning

When the JFC is also the MNFC, the joint force staff should be augmented by planners and subject matter experts from the MNF. MNF IO planners and IRC specialists should be trained on US and MNF doctrine, requirements, resources, and how the MNF is structured to integrate IRCs.

Multinational Policy Coordination

The development of capabilities, tactics, techniques, procedures, plans, intelligence, and communications support applicable to IO requires coordination with the responsible DOD components and multinational partners. Coordination with partner nations above the JFC/MNFC level is normally effected within existing defense arrangements, including bilateral arrangements.

CHAPTER VI

INFORMATION OPERATIONS ASSESSMENT

 

“Not everything that can be counted, counts, and not everything that counts can be counted.”

Dr. William Cameron, Informal Sociology: A Casual Introduction to Sociological Thinking, 1963

Introduction

This chapter provides a framework to organize, develop, and execute assessment of IO, as conducted within the information environment. The term “assessment” has been used to describe everything from analysis (e.g., assessment of the enemy) to an estimate of the situation (pre-engagement assessment of blue and red forces).

Assessment considerations should be thoroughly integrated into IO planning.

Assessment of IO is a key component of the commander’s decision cycle, helping to determine the results of tactical actions in the context of overall mission objectives and providing potential recommendations for refinement of future plans. The decision to adapt plans or shift resources is based upon the integration of intelligence in the operational environment and other staff estimates, as well as input from other mission partners, in pursuit of the desired end state.

Assessments also provide opportunities to identify IRC shortfalls, changes in parameters and/or conditions in the information environment, which may cause unintended effects in the employment of IRCs, and resource issues that may be impeding joint IO effectiveness.

Understanding Information Operations Assessment

Assessment consists of activities associated with tasks, events, or programs in support of the commander’s desired end state. IO assessment is iterative, continuously repeating rounds of analysis within the operations cycle in order to measure the progress of IRCs toward achieving objectives. The assessment process begins with the earliest stages of the planning process, continues throughout the operation or campaign, and may extend beyond the end of the operation to capture long-term effects of the IO effort.

Analysis of the information environment should begin before operations start, in order to establish baselines from which to measure change. During operations, data is continuously collected, recharacterizing our understanding of the information environment and providing the ability to measure changes and determine whether desired effects are being created.

Purpose of Assessment in Information Operations

Assessments help commanders better understand current conditions. The commander uses assessments to determine how the operation is progressing and whether it is creating the desired effects. Assessing the effectiveness of IO activities challenges both the staff and the commander. There are numerous venues for informing, and receiving information from, the commander; these venues provide opportunities to identify IRC shortfalls and resource issues that may be impeding joint IO effectiveness.

Impact of the Information Environment on Assessment

Operation assessments in IO differ from assessments of other operations because the success of the operation mainly relies on nonlethal capabilities, often including reliance on measuring the cognitive dimension, or on nonmilitary factors outside the direct control of the JFC. This situation requires an assessment with a focused, organized approach that is developed in conjunction with the initial planning effort. It also requires a clear vision of the end state, an understanding of the commander’s objectives, and an articulated statement of the ways in which the planned activities achieve objectives.

The information environment is a complex entity, an “open system” affected by variables that are not constrained by geography. The mingling of people, information, capabilities, organizations, religions, and cultures that exist inside and outside a commander’s operational area are examples of these variables. These variables can give commanders and their staffs the appreciation that the information environment is turbulent―constantly in motion and changing―which may make analysis seem like a daunting task, and make identifying an IRC (or IRCs) most likely to create a desired effect feel nearly impossible. In a complex environment, seemingly minor events can produce enormous outcomes, far greater in effect than the initiating event, including secondary and tertiary effects that are difficult to anticipate and understand. This complexity is why assessment is required and why specific capabilities may be required to conduct assessment and subsequent analysis.

A detailed study and analysis of the information environment affords the planner the ability to identify which forces impact the information environment and find order in the apparent chaos. Often the complexity of the information environment relative to a specific operational area requires assets and capabilities that exceed the organic capability of the command, making the required exhaustive study an impossible task. The gaps in capability and information are identified by planners and are transformed into information requirements and requests, requests for forces and/or augmentation, and requests for support from external agencies.

Examples of capabilities, forces, augmentation, and external support include specialized software, behavioral scientists, polling, social-science studies, operational research specialists, statisticians, demographic data held by commercial industry, reachback support to other mission partners, military information support personnel, access to external DOD databases, and support from academia.

But the presence of sensitive variables can be a catalyst for exponential changes in outcomes, as in the aforementioned secondary and tertiary effects. Joint IO planners should be cautious about making direct causal statements, since many nonlinear feedback loops can render direct causal statements inaccurate. Incorrect assumptions about causality in a complex system can have disastrous effects on the planning of future operations and open the assessment to potential discredit, because counterexamples may exist.

The Information Operations Assessment Process

Integrating the employment of IRCs with other lines of operation is a unique requirement for joint staffs and is a comparatively new discipline.

The broad range of information-related activities occurring across the three dimensions of the information environment (physical, informational, and cognitive) demand a specific, validated, and formal assessment process to determine whether these actions are contributing towards the fulfillment of an objective.

With the additional factor that some actions result in immediate effect and others may take years or generations to fully create, the assessment process must be able to report incremental effects in each dimension. In particular, when assessing the effect of an action or series of actions on behavior, the effects may need to be measured in terms such as cognitive, affective, and action or behavioral. Put another way, we may need to assess how a group thinks, feels, and acts, and whether those behaviors are a result of our deliberate actions intended to produce that effect, an unintended consequence of our actions, a result of another’s action or activity, or a combination of all of these.

Step 1—Analyze the Information Environment

(1) As the entire staff conducts analysis of the operational environment, the IO staff focuses on the information environment. This analysis occurs when planning for an operation begins or, in some cases, prior to planning for an operation, e.g., during routine analysis in support of theater security cooperation plan activities.

It is a required step for viable planning and provides necessary data for, among other things, development of MOEs, determination of potential target audiences and targets, and establishment of baseline data from which change can be measured. Analysis is conducted by interdisciplinary teams and staff sections. The primary product of this step is a description of the information environment. This description should include categorization or delineation of the physical, informational, and cognitive dimensions.

(2) Analysis of the information environment identifies key functions and systems within the operational environment. The analysis provides the initial information to identify decision makers (cognitive), factors that guide the decision-making process (informational), and infrastructure that supports and communicates decisions and decision making (physical).

(3) Gaps in the ability to analyze the information environment and gaps in required information are identified and transformed into information requirements and requests, requests for forces and/or augmentation, and requests for support from external agencies. The information environment is fluid. Technological, cultural, and infrastructure changes, regardless of their source or cause, can all impact each dimension of the information environment. Once the initial analysis is complete, periodic analyses must be conducted to capture changes and update the analysis for the commander, staff, other units, and unified action partners.

Much like a running estimate, the analysis of the information environment becomes a living document, continuously updated to provide a current, accurate picture.

Step 2—Integrate Information Operations Assessment into Plans and Develop the Assessment Plan

(1) Early integration of assessment into plans is paramount, especially in the information environment. One of the first things that must happen during planning is ensuring that the objectives to be assessed are clear, understandable, and measurable. Equally important is to consider, as part of the assessment baseline, a control set of conditions within the information environment from which to assess the performance of the tasks assigned to any given IRC, in order to determine their potential impact on IO.

Planners should also be aware that, while each staff section participates in the planning process, portions of individual staff sections are often simultaneously working through the planning steps in greater depth and detail, not quite keeping pace with the overall staff effort as they complete subordinate and supporting staff tasks.

(2) In order to achieve the objectives, specific effects need to be identified. It is during COA development, Step 3 of JOPP, that specific tasks are determined that will create the desired effects, based on the commander’s objectives. Effects should be clearly distinguishable from the objective they support as a condition for success or progress and not be misidentified as another objective. These effects ultimately support tasks to influence, disrupt, corrupt, or usurp the decision making of our adversaries, or to protect our own. Effects should provide a clear and common description of the desired change in the information environment.

UNDERSTANDING TASK AND OBJECTIVE, CAUSE AND EFFECT INTERRELATIONSHIPS

Understanding the interrelationships of the tasks and objectives, and the desired cause and effect, can be challenging for the planner. Mapping the expected change (a theory of change) provides the clear, logical connections between activities and desired outcomes by defining intermediate steps between current situation and desired outcome and establishing points of measurement. It should include clearly stated assumptions that can be challenged for correctness as activities are executed. The ability to challenge assumptions in light of executed activities allows the joint information operations planner to identify flawed connections between activity and outcome, incorrect assumptions, or the presence of spoilers. For example:

Training and arming local security guards increases their ability and willingness to resist insurgents, which will increase security in the locale. Increased security will lead to increased perceptions of security, which will promote participation in local government, which will lead to better governance. Improved security and better governance will lead to increased stability.

Logical connection between activities and outcomes

  • −  Activity: training and arming local security guards
  • −  Outcome: increased ability to resist insurgents

Clearly stated assumptions

  • −  Increased ability and willingness to resist increases security in the locale
  • −  Increased security leads to increased perceptions of security

Intermediate steps and points of measurement

  • −  Measures of performance regarding training activities
  • −  Measures of effectiveness (MOEs) regarding willingness to resist
  • −  MOEs regarding increased local security

 

(3) This expected change shows a logical connection between activities (training and arming locals) and desired outcomes (increased stability). It makes some assumptions, but those assumptions are clearly stated, so they can be challenged if they are believed to be incorrect.

Further, those activities and assumptions suggest obvious things to measure, such as performance of the activities (the training and arming) and the outcome (change in stability). They also suggest measurement of more subtle elements of all the intermediate logical nodes such as capability and willingness of local security forces, change in security, change in perception of security, change in participation in local government, change in governance, and so on. Better still, if one of those measurements does not yield the desired result, the joint IO planner will be able to ascertain where in the chain the logic is breaking down (which hypotheses are not substantiated). They can then modify the expected change and the activities supporting it, reconnecting the logical pathway and continuing to push toward the objectives.

(4) Such an expected change might have begun as something quite simple: training and arming local security guards will lead to increased stability. While this gets at the kernel of the idea, it is not particularly helpful for building assessments. Stopping there would suggest only the need to measure the activity and the outcome. However, it leaves a huge assumptive gap. If training and arming security guards goes well, but stability does not increase, there will be no apparent reason why. To begin to expand on a simple expected change, the joint IO planner should ask the question, “Why? How might A lead to B?” (In this case, how would training and arming security guards lead to stability?) A thoughtful answer to this question usually leads to recognition of another node to the expected change. If needed, the question can be asked again relative to this new node, until the expected change is sufficiently articulated.

(5) Circumstances on the ground might also require the assumptions in an expected change to be more explicitly defined. For example, using the expected change articulated in the above example, the joint IO planner might observe that in successfully training and arming local security guards, they are better able to resist insurgents, leading to an increased perception of security, as reported in local polls. However, participation in local government, as measured through voting in local elections and attendance at local council meetings, has not increased. The existing expected change and associated measurements illustrate where the chain of logic is breaking down (somewhere between perceptions of security and participation in local governance), but it does not (yet) tell why that break is occurring. Adjusting the expected change by identifying the incorrect assumption or spoiling factor preventing the successful connection between security and local governance will also help improve achievement of the objective.
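The chain of logic in the example above lends itself to a simple data representation. The sketch below is illustrative only: the node names, scores, and the 0.5 threshold are invented, not doctrinal. It walks the expected-change chain and reports the first link where a successful upstream node meets a failing downstream node, i.e., where the logic is breaking down:

```python
# Sketch of an expected-change (theory of change) chain for assessment.
# Node names, scores, and the threshold are illustrative, not doctrinal.

CHAIN = [
    ("guards trained and armed",      "MOP"),
    ("ability/willingness to resist", "MOE"),
    ("security in the locale",        "MOE"),
    ("perception of security",        "MOE"),
    ("participation in local govt",   "MOE"),
    ("quality of governance",         "MOE"),
    ("stability",                     "MOE"),
]

def first_break(scores, threshold=0.5):
    """Return the first link whose downstream node fails its measurement
    while the upstream node succeeds, i.e., where the chain of logic is
    breaking down. Unmeasured nodes are treated as failing (0.0)."""
    for (up, _), (down, _) in zip(CHAIN, CHAIN[1:]):
        if scores.get(up, 0.0) >= threshold and scores.get(down, 0.0) < threshold:
            return (up, down)
    return None

# The situation from the text: training went well, perceptions of
# security improved, but participation in local government has not.
scores = {
    "guards trained and armed": 0.9,
    "ability/willingness to resist": 0.8,
    "security in the locale": 0.7,
    "perception of security": 0.7,
    "participation in local govt": 0.2,
}
print(first_break(scores))
# -> ('perception of security', 'participation in local govt')
```

Run against these observations, the sketch points at the link between perceptions of security and participation in local governance, which is the gap the planner would then investigate for incorrect assumptions or spoilers.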

Step 3—Develop Information Operations Assessment Information Requirements and Collection Plans

(1) Critical to this step is ensuring that attributes are chosen that are relevant and applicable during the planning processes, as these will drive the determination of measures that display behavioral characteristics, attitudes, perceptions, and motivations that can be examined externally. Measures are categorized as follows:

(a) Qualitative—a categorical measurement expressed by means of a natural language description rather than in terms of numbers. Methodologies consist of focus groups, in-depth interviews, ethnography, media content analysis, after-action reports, and anecdotes (individual responses sampled consistently over time).

(b) Quantitative—a numerical measurement expressed in terms of numbers rather than by means of a natural language description. Methodologies consist of surveys, polls, observational data (intelligence, surveillance, and reconnaissance), media analytics, and official statistics.

(2) An integrated collection management plan ensures that assessment data gathered at the tactical level is incorporated into operational planning. This collection management plan needs to satisfy information requirements with the assigned tactical, theater, and national intelligence sources and other collection resources. Just as crucial is realizing that not every information requirement will be answered by the intelligence community and therefore planners must consider collaborating with other sources of information. Planners must discuss collection from other sources of information with the collection manager and unit legal personnel to ensure that the information is included in the overall assessment and the process is in accordance with intelligence oversight regulations and policy.

(3) Including considerations for assessment collection in the plan will facilitate the return of data needed to accomplish the assessment. Incorporating the assessment plan with the directions to conduct an activity will help ensure that resource requirements for assessment are acknowledged when the plan is approved. The assessment plan should, at a minimum, include timing and frequency of data collection, identify the party to conduct the collection, and provide reporting instructions.

(4) A well-designed assessment plan will:

(a) Develop the commander’s assessment questions.

(b) Document the expected change.

(c) Document the development of information requirements needed specifically for IO.

(d) Define key terms embedded within the end state with regard to the actors or TAs, operational activities, effects, acceptable conditions, rates of change, thresholds of success/failure, and technical/tactical triggers.

(e) Verify that tactical objectives support operational objectives.

(f) Identify strategic and operational considerations in addition to tactical ones, linking assessments to lines of operation and their associated desired conditions.

(g) Identify key nodes and connections in the expected change to be measured.

(h) Document collection and analysis methods.

(i) Establish a method to evaluate triggers to the commander’s decision points.

(j) Establish methods to determine progress towards the desired end state.

(k) Establish methods to estimate risk to the mission.

(l) Develop recommendations for plan adjustments.

(m) Establish the format for reporting assessment results.

Step 4—Build/Modify Information Operations Assessment Baseline

A subset of JIPOE, the baseline is part of the overall characterization of the information environment accomplished in Step 1. It serves as a reference point for comparison, enabling an assessment of the way in which activities create desired effects. The baseline allows the commander and staff to set goals for desired rates of change within the information environment and establish thresholds for success and failure.
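The baseline's role as a reference point can be pictured as the stored values in a small comparison routine. In this sketch the metric names, values, and thresholds are all hypothetical; the point is that change is always computed against the baseline and then judged against the commander-approved success and failure thresholds:

```python
# Sketch: comparing current information-environment measurements to a
# baseline and checking them against success/failure thresholds.
# Metric names, values, and thresholds are invented for illustration.

baseline = {"pct_reporting_feeling_safe": 35.0, "council_attendance": 120}
current  = {"pct_reporting_feeling_safe": 52.0, "council_attendance": 110}

# Desired change relative to baseline: "success" at or above the success
# threshold, "failure" at or below the failure threshold.
thresholds = {
    "pct_reporting_feeling_safe": {"success": +10.0, "failure": -5.0},
    "council_attendance":         {"success": +30,   "failure": -20},
}

def assess(metric):
    change = current[metric] - baseline[metric]
    t = thresholds[metric]
    if change >= t["success"]:
        status = "success"
    elif change <= t["failure"]:
        status = "failure"
    else:
        status = "inconclusive"
    return change, status

for m in thresholds:
    change, status = assess(m)
    print(f"{m}: change {change:+}, {status}")
```

Without the baseline values there is nothing to subtract from, which is why Step 4 must precede execution and collection.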
Step 5—Coordinate and Execute Information Operations and Coordinate Intelligence Collection Activities

(1) With the information gained in Steps 1 and 4, the joint IO planner should be able to build an understanding of the TA. This awareness will yield a collection plan that enables the joint IO planner to determine whether or not the TA is “seeing” the activities and actions presented. The collection method must be able to detect the TA’s reaction. IO planners, assessors, and intelligence planners need to communicate effectively to accurately capture the intelligence required to perform IO assessments.

(2) Information requirements and subsequent indicator collection must be tightly managed during employment of IRCs in order to validate execution and to monitor TA response. In the information environment, coordination and timing are crucial because some IRCs are time sensitive and require immediate indicator monitoring to develop valid assessment data.

Step 6—Monitor and Collect Information Environment Data for Information Operations Assessment

(1) Monitoring is the continuous process of observing conditions relevant to current operations. Assessment data are collected, aggregated, consolidated and validated. Gaps in the assessment data are identified and highlighted in order to determine actions needed to alleviate shortfalls or make adjustments to the plan. As information and intelligence are collected during execution, assessments are used to validate or negate assumptions that define cause (action) and effect (conclusion) relationships between operational activities, objectives, and end states.

(2) If anticipated progress toward an end state does not occur, then the staff may conclude that the intended action does not have the intended effect. The uncertainty in the information environment makes the use of critical assumptions particularly important, as operation planning may need to be adjusted for elements that may not have initially been well understood when the plan was developed.

Step 7—Analyze Information Operations Assessment Data

(1) If available, personnel trained or qualified in analysis techniques should conduct data analysis. Analysis can be done outside the operational area by leveraging reachback capabilities. One of the more important factors for analysis is that it is conducted in an unbiased manner. This is more easily accomplished if the personnel conducting analysis are not the same personnel who developed the execution plan. Assessment data are analyzed and the results are compared to the baseline measurements and updated continuously as the staff continues its analysis of the information environment.

(2) Deficiency analysis must also occur in this step. If no changes were observed in the information environment, then a breakdown may have occurred somewhere. The plan might be flawed, execution might not have been successful, collection may not have been accomplished as prescribed, or more time may be needed to observe any changes.

Step 8—Report Assessment Results and Make Recommendations

As expressed earlier in this chapter, assessment results enable staffs to ensure that tasks stay linked to objectives and objectives remain relevant and linked to desired end states. They provide opportunities to identify IRC shortfalls and resource issues that may be impeding joint IO effectiveness. These results may also provide information to agencies outside of the command or chain of command.

The primary purpose of reporting the results is to inform the commander and staff of progress toward objective achievement and of effects on the information environment, and to enable decision making. The published assessment plan, staff standard operating procedures, battle rhythm, and orders are the documents in which commanders can dictate how often assessment results are provided and the format in which they are reported.

Barriers to Information Operations Assessment

The preceding IO assessment methodology can support all operations, and most barriers to assessment can be overcome simply by considering assessment requirements as the plan is developed. But whatever the phase or type of operation, the biggest barriers to assessment are generally self-generated.

Some of the self-generated barriers to assessment include the failure to establish objectives that are actually measurable, the failure to collect baseline data against which “post-test” data can be compared, and the failure to plan adequately for the collection of assessment data, including the use of intelligence assets.

Other factors complicate IO assessment as well. Foremost, it may be difficult or impossible to directly relate a behavior change to an individual act or group of actions. Also, the logistics of data capture are not simple. Contingencies and operations in uncertain or hostile environments present unique challenges in terms of operational tempo and access to conduct assessments.

Organizing for Operation Assessments

Integrating assessment into the planning effort is normally the responsibility of the lead planner, with assistance from across the staff. The lead planner understands the complexity of the plan and the decision points established as the plan develops. The lead planner also understands potential indicators of success or failure.

As a plan becomes operationalized, overall assessment responsibility typically transitions from the lead planner to the J-3.

When appropriate, the commander can establish an assessments cell or team to manage assessment activities. When utilized, this cell or team must have appropriate access to operational information, appropriate access to the planning process, and representation from other staff elements, to include the IRCs.
Measures and Indicators

As emphasized in Chapter IV, “Integrating Information-Related Capabilities into the Joint Operation Planning Process,” paragraph 2.f., “Relationship Between Measures of Performance (MOPs) and Measures of Effectiveness (MOEs),” MOPs and MOEs help accomplish the assessment process by qualifying or quantifying the intangible attributes of the information environment. This is done to assess the effectiveness of activities conducted in the information environment and to establish a direct causal link between the activity and the desired effect.

MOPs should be developed during the operation planning process, should be tied directly to operation planning, and, at a minimum, should assess completion of the various phases of an activity or program.

Further, MOPs should assess any action, activity, or operation at which IO actions or activities interact with the TA. For certain tasks there are several TA touch points (voice, text, video, or face-to-face). For instance, during a leaflet drop, the point of dissemination of the leaflets would be such an action or activity. The MOP for any one action should be whether or not the TA was exposed to the IO action or activity.

(1) For each activity phase, task, or touch point, a set of MOPs based on the operational plan outlined in the program description should be developed. Task MOPs are measured via internal reporting within units and commands. Touch-point MOPs can be measured in one of several ways. Whether or not a TA is aware of, interested in, or responding to, an IRC product or activity, can be directly ascertained by conducting a survey or interview. This information can also be gathered by direct observational methods such as field reconnaissance, surveillance, or intelligence collection. Information can also be gathered via indirect observations such as media reports, online activity, or atmospherics.

(2) The end state of operation planning is a multi-phased plan or order, from which planners can directly derive a list of MOPs, assuming a higher echelon has not already designated the MOPs.

MOEs need to be specific, clear, and observable to provide the commander effective feedback. In addition, there needs to be a direct link between the objectives, effects, and the TA. Most of the IRCs have their own doctrine and discuss MOEs in slightly different language, but with ultimately the same functions and roles.

(1) In line with JP 5-0, Joint Operation Planning, development of MOEs and their associated impact indicators (derived from measurable supporting objectives) must be done during the planning process.

(2) In developing IO MOEs, the following general guidelines should be considered. First, MOEs should relate to the end state; that is, they should directly relate to the desired effects. They should also be measurable, quantitatively or qualitatively; to measure effectiveness, a baseline measurement must exist or be established prior to execution, against which system changes can be measured. They should sit within a defined periodic or conditional assessment framework: the required feedback time, cycle, or conditions should be clearly stated for each MOE, with a deadline to report within a specified assessment period that clearly delineates the beginning, progression, and termination of the cycle in which the effectiveness of operations is to be assessed. Finally, they need to be properly resourced. The collection, collation, analysis, and reporting of MOE data require personnel, budgetary, and materiel resources. IO staffs, along with their counterparts at the component level, should ensure that these resource requirements are built into the plan during its development.
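These guidelines can be treated as a checklist against each draft MOE. The following sketch flags a draft that is missing one of its required elements; the field names are invented for illustration, not drawn from doctrine:

```python
# Sketch: a checklist validator for draft MOEs against the guidelines
# above (link to desired effect, a measurable quantity, a baseline, an
# assessment window, and a resourced collector). Field names invented.

REQUIRED = ("effect", "measure", "baseline", "assessment_window_days", "collector")

draft_moe = {
    "effect": "increased perception of security in the district",
    "measure": "% of poll respondents reporting feeling safe",
    "baseline": 35.0,
    "assessment_window_days": 90,
    # "collector" intentionally missing: the MOE is not yet resourced
}

missing = [field for field in REQUIRED if field not in draft_moe]
print("valid" if not missing else f"missing: {missing}")
# -> missing: ['collector']
```

An MOE that fails such a check before execution is exactly the kind that, per paragraph (5) below, should be reevaluated and rewritten.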

(3) The more specific the MOE, the more readily the intelligence collection manager can determine how best to collect against the requirements and provide valuable feedback pertaining to them. The ability to establish MOEs and conduct combat assessment for IO requires observation and collection of information from diverse, nebulous and often untimely sources. These sources may include: human intelligence; signals intelligence; air and ground-based intelligence; surveillance and reconnaissance; open-source intelligence, including the Internet; contact with the public; press inquiries and comments; Department of State polls; reports and surveys; nongovernmental organizations; international organizations; and commercial polls.

(4) One of the biggest challenges of MOE development is the difficulty of defining variables and establishing causality. It is therefore more advisable to approach assessment from a correlational rather than a causal perspective, in which unrealistic “zero-defect” predictability gives way to more attainable correlational analysis, which provides insight into the likelihood of particular events and effects given certain conditions and actors in the information environment.

Evidence suggests that correlation between indicators and events has proven more accurate than the evidence supporting cause-and-effect relationships, particularly for behavior and the intangible parameters of the cognitive dimension of the information environment. IRCs, however, are directed at TAs, decision makers, and the systems that support them, making it much more difficult to establish concrete causal relationships, especially when assessing foreign public opinion or human behavior. Unforeseen factors can lead to erroneous interpretations; for example, a traffic accident in a foreign country involving a US service member, or a local civilian’s bias against US policies, can cause a decline in public support irrespective of otherwise successful IO.
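In practice, a correlational approach can be as simple as computing a correlation coefficient between an activity indicator and an outcome indicator over time. The sketch below uses invented weekly figures; a strong coefficient supports, but does not prove, a link between the activity and the effect:

```python
# Sketch: correlational (rather than causal) analysis of indicator data.
# The weekly series values are invented for illustration.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Weekly leaflet disseminations vs. weekly tips received from the public.
leaflets = [0, 5, 10, 10, 20, 25]
tips     = [2, 3,  6,  5, 11, 12]

r = pearson(leaflets, tips)
print(f"r = {r:.2f}")  # strong positive correlation; not proof of causation
```

A high r here says the two indicators move together; the traffic-accident example above is precisely the kind of confounding factor that keeps such a result from being read as causal.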

(5) If IO effects and supporting IO tasks are not linked to the commander’s objectives, or are not clearly written, measuring their effectiveness is difficult. Clearly written IO tasks must be linked to the commander’s objectives to justify the resources needed to measure their contributing effects. If MOEs are difficult to write for a specific IO effect, the effect should be reevaluated and a rewrite considered. When describing desired effects, it is important to keep the effect’s intended impact in mind as a guide to what must be observed, collected, and measured. To effectively identify the assessment methodology, and to be able to recreate the process as part of the scientific method, MOEs must be written with a documented pathway for effect creation.

MOEs should be observable, to aid with collection; quantifiable, to increase objectivity; precise, to ensure accuracy; and correlated with the progress of the operation, to attain timeliness.

Indicators are crucial because they aid the joint IO planner in informing MOEs and should be identifiable across the center of gravity’s critical factors. They can be independently weighted for their contribution to an MOE and should be based on separate criteria. A single indicator can inform multiple MOEs. Dozens of indicators will be required for a large-scale operation.
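The relationship described above, where independently weighted indicators inform one or more MOEs, can be sketched as a weighted roll-up. All indicator names, weights, and values here are illustrative; note that a single indicator ("tips_from_public") informs two different MOEs:

```python
# Sketch: rolling independently weighted indicators up into MOE scores.
# Names, weights, and observed values are invented for illustration.

indicators = {                 # normalized 0..1 observations
    "tips_from_public": 0.6,
    "checkpoint_incidents": 0.2,   # lower raw count is better; already inverted
    "market_activity": 0.8,
}

moes = {                       # MOE -> {indicator: weight}
    "increased local security": {"tips_from_public": 0.5,
                                 "checkpoint_incidents": 0.5},
    "willingness to resist":    {"tips_from_public": 0.3,
                                 "market_activity": 0.7},
}

def moe_score(moe):
    """Weighted average of the indicators informing this MOE."""
    weights = moes[moe]
    total = sum(weights.values())
    return sum(indicators[i] * w for i, w in weights.items()) / total

for moe in moes:
    print(f"{moe}: {moe_score(moe):.2f}")
```

Because each MOE carries its own weights, the same observation can contribute strongly to one MOE and only marginally to another, which is the independence the text calls for.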
Considerations

In the information environment, it is unlikely that universal measures and indicators will exist because of varying perspectives. In addition, any data collected is likely to be incomplete. Assessments need to be periodically adjusted to the changing situation in order to avoid becoming obsolete.

In addition, assessments will usually need to be supplemented by subjective constructs that are a reflection of the joint IO planner’s scope and perspective (e.g., intuition, anecdotal evidence, or limited set of evidence).

Assessment teams may not have direct access to a TA for a variety of reasons. The goal of measurement is not to achieve perfect accuracy or precision—given the ever-present biases of theory and the limitations of existing tools—but rather to reduce uncertainty about the value being measured. Measurement of IO effects on a TA can be accomplished in two ways: direct observation and indirect observation. Direct observation measures the attitudes or behaviors of the TA either by questioning the TA or by observing behavior firsthand. Indirect observation measures otherwise inaccessible attitudes and behaviors through the effects they have on more easily measurable phenomena. Direct observations are preferable for establishing baselines and measuring effectiveness, while indirect observations reduce uncertainty in measurements to a lesser degree.

Categories of Assessment

Operation assessment of IO is an evaluation of the effectiveness of operational activities conducted in the information environment. Operation assessments primarily document mission success or failure for the commander and staff. However, operation assessments also inform other types of assessment, such as programmatic and budgetary assessment. Programmatic assessment evaluates readiness and training, while budgetary assessment evaluates return on investment.

When categorized by the levels of warfare, there are tactical-, operational-, and strategic-level assessments. Tactical-level assessment evaluates the effectiveness of a specific, localized activity. Operational-level assessment evaluates progress toward accomplishment of a plan or campaign. Strategic-level assessment evaluates progress toward accomplishment of a theater or national objective. The skilled IO planner will link tactical actions to operational and strategic objectives.

APPENDIX A

REFERENCES

 

The development of JP 3-13 is based on the following primary references.

 General

National Security Strategy.

Unified Command Plan.

Executive Order 12333, United States Intelligence Activities.

The Fourth Amendment to the US Constitution.

The Privacy Act, Title 5, USC, Section 552a.

The Wiretap Act and the Pen/Trap Statute, Title 18, USC, Sections 2510-2522 and 3121-3127.

The Stored Communications Act, Title 18, USC, Sections 2701-2712.

The Foreign Intelligence Surveillance Act, Title 50, USC.

 Department of State Publications

Department of State Publication 9434, Treaties In Force.

Department of Defense Publications

Secretary of Defense Memorandum dated 25 January 2011, Strategic Communication and Information Operations in the DOD.

National Military Strategy.

DODD S-3321.1, Overt Psychological Operations Conducted by the Military Services in Peacetime and in Contingencies Short of Declared War.

DODD 3600.01, Information Operations (IO).

DODD 5200.27, Acquisition of Information Concerning Persons and Organizations not Affiliated with the Department of Defense.

DOD 5240.1-R, Procedures Governing the Activities of DOD Intelligence Components that Affect United States Persons.

DODI O-3600.02, Information Operation (IO) Security Classification Guidance.

 Chairman of the Joint Chiefs of Staff Publications

CJCSI 1800.01D, Officer Professional Military Education Policy (OPMEP).
CJCSI 3141.01E, Management and Review of Joint Strategic Capabilities Plan (JSCP)-Tasked Plans.

CJCSI 3150.25E, Joint Lessons Learned Program.

CJCSI 3210.01B, Joint Information Operations Policy.

Chairman of the Joint Chiefs of Staff Manual (CJCSM) 3122.01A, Joint Operation Planning and Execution System (JOPES) Volume I, Planning Policies and Procedures.

CJCSM 3122.02D, Joint Operation Planning and Execution System (JOPES) Volume III, Time-Phased Force and Deployment Data Development and Deployment Execution.

CJCSM 3122.03C, Joint Operation Planning and Execution System (JOPES) Volume II, Planning Formats.

CJCSM 3500.03C, Joint Training Manual for the Armed Forces of the United States.

CJCSM 3500.04F, Universal Joint Task Manual.

JP 1, Doctrine for the Armed Forces of the United States.

JP 1-02, Department of Defense Dictionary of Military and Associated Terms.

JP 1-04, Legal Support to Military Operations.

JP 2-0, Joint Intelligence.

JP 2-01, Joint and National Intelligence Support to Military Operations.

JP 2-01.3, Joint Intelligence Preparation of the Operational Environment.

JP 2-03, Geospatial Intelligence Support to Joint Operations.

JP 3-0, Joint Operations.

JP 3-08, Interorganizational Coordination During Joint Operations.

JP 3-10, Joint Security Operations in Theater.

JP 3-12, Cyberspace Operations.

JP 3-13.1, Electronic Warfare.

JP 3-13.2, Military Information Support Operations.

JP 3-13.3, Operations Security.

JP 3-13.4, Military Deception.

JP 3-14, Space Operations.

JP 3-16, Multinational Operations.

JP 3-57, Civil-Military Operations.

JP 3-60, Joint Targeting.

JP 3-61, Public Affairs.

JP 5-0, Joint Operation Planning.

JP 6-01, Joint Electromagnetic Spectrum Management Operations.

Multinational Publication

AJP 3-10, Allied Joint Doctrine for Information Operations.

Notes on Countering Threat Networks

Accession Number: AD1025082

Title: Countering Threat Networks

Descriptive Note: Technical Report

Corporate Author: Joint Staff, Washington, DC

Abstract: 

This publication has been prepared under the direction of the Chairman of the Joint Chiefs of Staff (CJCS). It sets forth joint doctrine to govern the activities and performance of the Armed Forces of the United States in joint operations, and it provides considerations for military interaction with governmental and nongovernmental agencies, multinational forces, and other interorganizational partners. It provides military guidance for the exercise of authority by combatant commanders and other joint force commanders (JFCs), and prescribes joint doctrine for operations and training. It provides military guidance for use by the Armed Forces in preparing and executing their plans and orders. It is not the intent of this publication to restrict the authority of the JFC from organizing the force and executing the mission in a manner the JFC deems most appropriate to ensure unity of effort in the accomplishment of objectives. The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network's motivation and objectives is required to effectively counter its efforts.

 

Descriptors: Threats, military organizations, intelligence collection

 

Distribution Statement: APPROVED FOR PUBLIC RELEASE

 

Link to Article: https://apps.dtic.mil/sti/citations/AD1025082

 

Notes

Scope

This publication provides joint doctrine for joint force commanders and their staffs to plan, execute, and assess operations to identify, neutralize, disrupt, or destroy threat networks.

Introduction

The worldwide emergence of adaptive threat networks introduces a wide array of challenges to joint forces in all phases of operations. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Countering threat networks (CTN) consists of activities to pressure threat networks or mitigate their adverse effects. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts.

Policy and Strategy

CTN planning and operations require extensive coordination as well as innovative, cross-cutting approaches that utilize all instruments of national power. The national military strategy describes the need for the joint force to operate in this complex environment.

Challenges of the Strategic Security Environment

CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply.

The Strategic Approach

The groundwork for successful countering threat networks activities starts with information and intelligence to develop an understanding of the operational environment and the threat network.

Military engagement, security cooperation, and deterrence are just some of the activities that may be necessary to successfully counter threat networks without deployment of a joint task force.

Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants.

Threat Network Fundamentals

Threat Network Construct

A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or operational environment (OE) that can be targeted for action. A link is a behavioral, physical, or functional relationship between nodes. Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying centers of gravity (COGs), networks, and cells the joint force commander (JFC) may wish to influence or change during an operation.

Network Analysis

Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes. The political, military, economic, social, information, and infrastructure systems perspective is a useful starting point for analysis of threat networks. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

Determining and Analyzing Node-Link Relationships

Social network analysis provides a method that helps the JFC and staff understand the relevance of nodes and links. The strength or intensity of a single link can be relevant to determining the importance of the functional relationship between nodes and the overall significance to the larger system. The number and strength of nodal links within a set of nodes can be indicators of key nodes and a potential COG.
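The link-counting idea above can be sketched in code: a node’s degree (the number of links it carries) is a first-pass indicator of a key node. This is an illustrative fragment, not part of the publication; the node names and link list are hypothetical.

```python
# Hypothetical sketch: link counts (degree) as a first-pass indicator
# of key nodes. Node names are invented for illustration.
from collections import defaultdict

def degree_centrality(links):
    """Count links per node; a higher degree suggests a candidate key node."""
    degree = defaultdict(int)
    for a, b in links:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

links = [
    ("financier", "cell_leader"),
    ("cell_leader", "courier"),
    ("cell_leader", "recruiter"),
    ("courier", "supplier"),
]
scores = degree_centrality(links)
key_node = max(scores, key=scores.get)  # "cell_leader" carries three links
```

In practice, analysts would also weigh link strength and link type, not raw counts alone; degree is only one of several centrality measures used in social network analysis.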

Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals. Examples of cells include: operational, logistical, training, communications, financial, and WMD proliferation cells.

Networked Threats and Their Impact on the Operational Environment

Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

Threat Network Characteristics

Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks take many forms and serve different purposes, but all are composed of people, processes, places, material, or combinations of these.

Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Networks possess many characteristics important to their success and survival, such as flexible command and control structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt.

Network Engagement

Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks through the building of mutual trust and cooperation through network engagement. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.

Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups. These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. To successfully accomplish mission goals, the JFC should give equal consideration to the impact of actions on multinational and friendly forces, the local population, and criminal enterprises, as well as on the adversary.

Identify a Threat Network

Threat networks often attempt to remain hidden. By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify individual networks within. A thorough joint intelligence preparation of the operational environment (JIPOE) product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

Planning to Counter Threat Networks

Joint Intelligence Preparation of the Operational Environment and Threat Networks

JIPOE is the first step in identifying the essential elements that constitute the OE and is used to plan and conduct operations against threat networks. The focus of the JIPOE analysis for threat networks is to help characterize aspects of the networks.

Understanding the Threat’s Network

To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organizational goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. Building a network function template is a method of organizing known information about the network’s structure and functions. By developing a network function template, the information can be initially understood and then used to facilitate critical factors analysis (CFA). CFA is an analytical framework that assists planners in analyzing and identifying a COG and aids operational planning.

Targeting Evaluation Criteria

A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis. The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors: network affiliations, criticality, accessibility, recuperability, vulnerability, effect, and recognizability.
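As a rough illustration of how a CARVER-style matrix ranks candidate targets, the sketch below rates each of the six classic CARVER factors on a 1–5 scale and sums them. The rating scale, target names, and scores are hypothetical, not taken from the publication.

```python
# Illustrative CARVER-style scoring matrix: each factor is rated 1-5
# and summed to rank candidate targets. All ratings are invented.
CARVER_FACTORS = ["criticality", "accessibility", "recuperability",
                  "vulnerability", "effect", "recognizability"]

def carver_score(ratings):
    """Sum of the six factor ratings; higher totals rank higher."""
    return sum(ratings[f] for f in CARVER_FACTORS)

candidates = {
    "finance_node": {"criticality": 5, "accessibility": 3, "recuperability": 4,
                     "vulnerability": 4, "effect": 5, "recognizability": 3},
    "safe_house":   {"criticality": 2, "accessibility": 4, "recuperability": 2,
                     "vulnerability": 3, "effect": 2, "recognizability": 4},
}
ranked = sorted(candidates, key=lambda t: carver_score(candidates[t]),
                reverse=True)  # finance_node (24) outranks safe_house (17)
```

A weighted variant (multiplying each factor by a command-assigned weight) is a common refinement when some factors matter more to the JFC’s intent than others.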

Countering Threat Networks Through the Planning of Phases

JFCs may plan and conduct CTN activities throughout all phases of a given operation. Upon gaining an understanding of the various threat networks in the OE through the joint planning process (JPP), JFCs and their staffs develop a series of prudent (feasible, suitable, and acceptable) CTN actions to be executed in conjunction with other phased activities.

Activities to Counter Threat Networks

Targeting Threat Networks

JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, critical capabilities (network’s functions), and critical requirements for the network. Joint force targeting efforts should employ a comprehensive approach, leveraging military force and civil agency capabilities that keep continuous pressure on multiple nodes and links of the network’s structure.

Desired Effects on Networks

When commanders decide to generate an effect on a network through engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The selection of effects desired on a network is conducted as part of target selection, which includes consideration of the capabilities to employ that were identified during capability analysis of the joint targeting cycle.

Targeting

CTN targets can be characterized as targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will make related to the JFC’s intent, key nodes such as high-value individuals, or longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time to exploit them. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

Lines of Effort by Phase

During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks. However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE.

Theater Concerns in Countering Threat Networks

Many threat networks are transnational, recruiting, financing, and operating on a global basis. Theater commanders need to be aware of the relationships among these networks and identify the basis for their particular connection to a geographic combatant commander’s area of responsibility.

Operational Approaches to Countering Threat Networks

There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus of the operation. In others, a balanced approach through multiple lines of operations and lines of effort may be necessary, ensuring that civilian concerns are met while protecting the population from threat network operators.

Assessments

Assessment of Operations to Counter Threat Networks

CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for human intelligence.

Operation Assessment

CTN activities may require assessing multiple measures of effectiveness (MOEs) and measures of performance (MOPs), depending on threat network activity. The assessment process provides a feedback mechanism to the JFC to provide guidance and direction for future operations and targeting efforts against threat networks.
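The MOE/MOP distinction above (are we creating the desired effects vs. are we performing the planned tasks?) can be sketched as a simple indicator-tracking fragment. Indicator names, baselines, and values below are invented for illustration and do not come from the publication.

```python
# Hypothetical sketch of tracking MOEs (effects achieved?) and MOPs
# (tasks performed?) for a CTN assessment. All figures are invented.
def trend(baseline, current):
    """Direction of change for an indicator relative to its baseline."""
    if current < baseline:
        return "improving"   # e.g., fewer attacks than the baseline period
    if current > baseline:
        return "worsening"
    return "steady"

moes = {  # effect indicators: (baseline, current); lower is better here
    "attacks_per_month": (12, 7),
    "illicit_finance_reports": (30, 34),
}
mops = {  # performance indicators: planned vs. executed tasks
    "planned_patrols": 40,
    "executed_patrols": 36,
}
moe_trends = {name: trend(b, c) for name, (b, c) in moes.items()}
mop_completion = mops["executed_patrols"] / mops["planned_patrols"]
```

The point of separating the two is visible even in this toy: a high MOP completion rate (tasks performed) can coexist with a worsening MOE (effect not achieved), which signals a need to revisit the plan rather than the execution.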

Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate. In conducting each of these activities, assessors must be linked to JPP, understand the operation plan, and inform the intelligence process as to what information is required to support indicators, MOEs, and MOPs. In assessing CTN operations, quantitative data and analysis will inform assessors.

CHAPTER I

OVERVIEW

“The emergence of amorphous, adaptable, and networked threats has far-reaching implications for the US national security community. These threats affect DOD [Department of Defense] priorities and war fighting strategies, driving greater integration with other departments and agencies performing national security missions, and create the need for new organizational concepts and decision-making paradigms. The impacts are likely to influence defense planning for years to come.”

Department of Defense Counternarcotics and Global Threats Strategy, April 2011

Threat networks are those whose size, scope, or capabilities threaten US interests. These networks may include the underlying informational, economic, logistical, and political components that enable them to function. These threats create a high level of uncertainty and ambiguity in terms of intent, organization, linkages, size, scope, and capabilities. These threat networks jeopardize the stability and sovereignty of nation-states, including the US.

They tend to operate among civilian populations and in the seams of society and may have components that are recognized locally as legitimate parts of society. Collecting information and intelligence on these networks, their nodes, links, and affiliations is challenging, and analysis of their strengths, weaknesses, and centers of gravity (COGs) differs greatly from traditional nation- state adversaries.

  1. Threat networks are part of the operational environment (OE). These networks utilize existing networks and may create new networks that seek to move money, people, information, and goods for the benefit of the network.

Not all of these interactions create instability and not all networks are a threat to the joint force and its mission. While some societies may accept a certain degree of corruption and criminal behavior as normal, it is never acceptable for these elements to develop networks that begin to pose a threat to national and regional stability. When a network begins to pose a threat, action should be considered to counter the threat.

This doctrine will focus on those networks that do present a threat with an understanding that friendly, neutral, and threat networks overlap and share nodes and links. Threat networks vary widely in motivation, structure, activities, operational areas, and composition. Threat networks may be adversarial to a joint force or may simply be criminally motivated, increasing instability in a given operational area. Some politically or ideologically based networks may avoid open confrontation with US forces; nevertheless, these networks may threaten mission success. Their activities may include spreading ideology, moving money, moving supplies (including weapons and fighters), human trafficking, drug smuggling, information relay, or acts of terrorism toward the population or local governments. Threat networks may be local, regional, or international and a threat to deployed joint forces and the US homeland.

  1. Understanding a threat network’s motivation and objectives is required to effectively counter its efforts. The issues that drive a network and its ideology should be clearly understood. For example, they may be driven by grievances, utopian ideals, power, revenge over perceived past wrongs, greed, or a combination of these.
  2. CTN is one of three pillars of network engagement that includes partnering with friendly networks and engaging with neutral networks in order to attain the commander’s desired military end state within a complex OE. It consists of activities to pressure threat networks or mitigate their adverse effects. These activities normally occur continuously and simultaneously at multiple levels (tactical, operational, and strategic) and may employ lethal and/or nonlethal capabilities in a direct or indirect manner. The most effective operations pressure and influence elements of these networks on multiple fronts and target multiple nodes and links.

The networks found in the OE may be simple or complex and must be identified and thoroughly analyzed. Neither all threats nor all elements of their supporting networks can be defeated, particularly if they have a regional or global presence. Certain elements of the network can be deterred, other parts neutralized, and some portions defeated. Engaging these threats through their supporting networks is not an adjunct or ad hoc set of operations and may be the primary mission of the joint force. It is not a stand-alone operation planned and conducted separately from other military operations. CTN should be fully integrated into the joint operational design, joint intelligence preparation of the operational environment (JIPOE), joint planning process (JPP), operational execution, joint targeting process, and joint assessments.

  1. Threat networks are often the most complex adversaries that exist within the OEs and frequently employ asymmetric methods to achieve their objectives. Disrupting their global reach and ability to influence events far outside of a specific operational area requires unity of effort across combatant commands (CCMDs) and all instruments of national power.

Joint staffs must realize that effectively targeting threat networks must be done in a comprehensive manner. This is accomplished by leveraging the full spectrum of capabilities available within the joint force commander’s (JFC’s) organization, from intergovernmental agencies, and/or from partner nations (PNs).

  1. Policy and Strategy
  2. DOD strategic guidance recognizes the increasing interconnectedness of the international order and the corresponding complexity of the strategic security environment.

Threat networks and their linkages transcend geographic and functional CCMD boundaries.

  1. CCDRs must be able to employ a joint force to work with interagency and interorganizational security partners in the operational area to shape, deter, and disrupt threat networks. They may employ a joint force with PNs to neutralize and defeat threat networks.
  2. CCDRs develop their strategies by analyzing all aspects of the OE and developing options to set conditions to attain strategic end states. They translate these options into an integrated set of CCMD campaign activities described in CCMD campaign plans and associated subordinate and supporting plans. CCDRs must understand the OE, recognize nation-state use of proxies and surrogates, and be vigilant to the dangers posed by super-empowered threat networks. Super-empowered threat networks are networks that develop or obtain nation-state capabilities in terms of weapons, influence, funding, or lethal aid.

In combination with US diplomatic, economic, and informational efforts, the joint force must leverage partners and regional allies to foster cooperation in addressing transnational challenges.

  1. Challenges of the Strategic Security Environment
  2. The strategic security environment is characterized by uncertainty, complexity, rapid change, and persistent conflict. Advances in technology and information have facilitated individual non-state actors and networks to move money, people, and resources, and spread violent ideology around the world. Non-state actors are able to conduct activities globally and nation-states leverage proxies to launch and maintain sustained campaigns in remote areas of the world.

Alliances, partnerships, cooperative arrangements, and inter-network conflict may morph and shift week-to-week or even day-to-day. Threat networks or select components often operate clandestinely. The organizational construct, geographical location, linkages, and presence among neutral or friendly populations are difficult to detect during JIPOE, and once a rudimentary baseline is established, ongoing changes are difficult to track. This makes traditional intelligence collection and analysis, as well as operations and assessments, much more challenging than against traditional military threats.

  1. Deterring threat networks is a complex and difficult challenge that is significantly different from classical notions of deterrence. Deterrence is most classically thought of as the threat to impose such high costs on an adversary that restraint is the only rational conclusion. When dealing with violent extremist organizations and other threat networks, deterrence is likely to be ineffective due to radical ideology, diffuse organization, and lack of ownership of territory.

Due to the complexity of deterring violent extremist organizations, flexible approaches must be developed according to a network’s ideology, organization, sponsorship, goals, and other key factors to clearly communicate that the targeted action will not achieve the network’s objectives.

  1. CTN represents a significant planning and operational challenge because threat networks use asymmetric methods and weapons and often enjoy state cooperation, sponsorship, sympathy, sanctuary, or supply. These networked threats transcend operational areas, areas of influence, areas of interest, and the information environment (to include cyberspace [network links and nodes essential to a particular friendly or adversary capability]). The US military is one of the instruments of US national power that may be employed in concert with interagency, international, and regional security partners to counter threat networks.
  2. Threat networks have the ability to remotely plan, finance, and coordinate attacks through global communications (to include social media), transportation, and financial networks. These interlinked areas allow for the high-speed, high-volume exchange of ideas, people, goods, money, and weapons.

“Terrorists and insurgents increasingly are turning to TOC [transnational organized crime] to generate funding and acquire logistical support to carry out their violent acts. While the crime-terror[ist] nexus is still mostly opportunistic, this nexus is critical nonetheless, especially if it were to involve the successful criminal transfer of WMD [weapons of mass destruction] material to terrorists or their penetration of human smuggling networks as a means for terrorists to enter the United States.”

Strategy to Combat Transnational Organized Crime, July 2011

Using the global communications network, threat networks have demonstrated their ability to recruit like-minded individuals from outside of their operational area and have been successful in recruiting even inside the US and PNs. Many threat networks have mastered social media and tapped into the proliferation of traditional and nontraditional news media outlets to create powerful narratives, which generate support and sympathy in other countries. Cyberspace is as important to the threat network as physical terrain. Future operations will require the ability to monitor and engage threat networks within cyberspace, since this provides them an opportunity to coordinate sophisticated operations that advance their interests.

  1. Threat Networks and Levels of Warfare
  2. The purpose of CTN activities is to shape the security environment, deter aggression, provide freedom of maneuver within the operational area and its approaches, and, when necessary, defeat threat networks.

Supporting activities may include training, use of military equipment, subject matter expertise, cyberspace operations, information operations (IO) (use of information-related capabilities [IRCs]), military information support operations (MISO), counter threat finance (CTF), interdiction operations, raids, or civil-military operations.

In nearly all cases, diplomatic efforts, sanctions, financial pressure, criminal investigations, and intelligence community activities will complement military operations.

  1. Threat networks and their supporting network capabilities (finance, logistics, smuggling, command and control [C2], etc.) will present challenges to the joint force at the tactical, operational, and strategic levels due to their ability to adapt to conditions in the OE. Figure I-1 depicts some of the threat networks that may be operating in the OE and their possible impact on the levels of warfare.

Complex alliances between threat, neutral, and friendly networks may vary at each level, by agency, and in different geographic areas in terms of their membership, composition, goals, resources, strengths, and weaknesses. Strategically they may be part of a larger ideological movement at odds with several regional governments, have regional aspirations for power, or oppose the policies of nations attempting to achieve military stability in a geographic region.

Tactically, there may be local alliances with criminal networks, tribes, or clans that may not be ideologically aligned with one another, but could find common cause in opposing joint force operations in their area or harboring grievances against the host nation (HN) government. Analysis will be required for each level of warfare and for each network throughout the operational area. This analysis should be aligned with analysis from intelligence community agencies and international partners that often inject critical information that may impact joint planning and operations.

  1. The Strategic Approach
  2. The groundwork for successful CTN activities starts with information and intelligence to develop an understanding of the OE and the threat network.
  3. Current operational art and operational design as described within JPP is applicable to CTN. Threat networks tend to be difficult to collect intelligence on, analyze, and understand. Therefore, several steps within the operational approach methodology outlined in JP 5-0, Joint Planning, such as understanding the OE and defining the problem may require more resources and time.

JP 2-01.3, Joint Intelligence Preparation of the Operational Environment, provides the template for this process used to analyze all relevant aspects of the OE. Within operational design, determining the strategic, operational, and tactical COGs and decisive points of multiple threat networks will be more challenging than analyzing a traditional military force…

  1. Strategic and operational approaches require greater interagency coordination. This is critical for achieving unity of effort against threat network critical vulnerabilities (CVs) (see Chapter II, “Threat Network Fundamentals”). When analyzing networks, there will never be a single COG. The identification of the factors that comprise the COG(s) for a network will still require greater analysis, since each individual of the network may be motivated by different aspects. For example, some members may join a network for ideological reasons, while others are motivated by monetary gain. This aspect must be understood when analyzing human networks.
  2. Threat networks will adapt rapidly and sometimes “out of view” of intelligence collection efforts.

Intelligence sharing… must be complemented by integrated planning and execution to achieve the optimal operational tempo to defeat threats. Traditionally defined geographic operational areas, roles, responsibilities, and authorities often require greater cross-area coordination and adaptation to counter threat networks. Unity of effort seeks to synchronize understanding of and actions against a group’s or groups’ political, military, economic, social, information, and infrastructure (PMESII) systems as well as the links and nodes that are part of the group’s supporting networks.

  1. Joint Force and Interagency Coordination
  2. The USG and its partners face a wide range of local, national, and transnational irregular challenges to the stability of the international system. Successful deterrence of non-state actors is more complicated and less predictable than in the past, and non-state actors may derive significant capabilities from state sponsorship.
  3. Adapting to an increasingly complex world requires unity of effort to counter violent extremism and strengthen regional security.

To improve understanding, USG departments and agencies should strive to develop strong relationships while learning to speak each other’s language, or better yet, use a common lexicon.

  1. At each echelon of command, the actions taken to achieve stability vary only in the amount of detail required to create an actionable picture of the enemy and the OE. Each echelon of command has unique functions that must be synchronized with the other echelons, as part of the overall operation to defeat the enemy. Achieving synergy among diplomatic, political, security, economic, and information activities demands unity of effort between all participants. This is best achieved through an integrated approach. A common interagency assessment of the OE establishes a deep and shared understanding of the cultural, ideological, religious, demographic, and geographical factors that affect the conditions in the OE.
  2. Establishing a whole-of-government approach to achieve unity of effort should begin during planning. Achieving unity of effort is problematic due to challenges in information sharing, competing priorities, differences in lexicon, and uncoordinated activities.
  3. Responsibilities
  4. Operations against threat networks require unity of effort across the USG and multiple authorities outside DOD. Multiple instruments of national power will be operating in close proximity and often conducting complementary activities across the strategic, operational, and tactical levels. In order to integrate, deconflict, and synchronize the activities of these multiple entities, the commander should form a joint interagency coordination group, with representatives from all participants operating in or around the operational area.
  5. The military provides general support to a number of USG departments and agencies for their CTN activities ranging from CT to CD. A number of USG departments and agencies have highly specialized interests in threat networks, and their activities directly impact the military’s own CTN activities. For example, the Department of the Treasury’s CTF activities help to deny the threat network the funding needed to conduct operations.

CHAPTER II

THREAT NETWORK FUNDAMENTALS

1. Threat Network Construct

  1. Network Basic Components. All networks, regardless of size, share basic components and characteristics. Understanding common components and characteristics will help to develop and establish common joint terminology and standardize outcomes for network analysis, CTN planning, activities, and assessments across the joint force and CCMDs.
  2. Networks Terminology. A threat network consists of interconnected nodes and links and may be organized using subordinate and associated networks and cells. Understanding the individual roles and connections of each element is as important to conducting operations as understanding the overall network structure, known as the network topology.

Network boundaries must also be determined, especially when dealing with overlapping networks and global networks. Operations will rarely be possible against an entire threat or its supporting networks. Understanding the network topology allows planners to develop an operational approach and associated tactics necessary to create the desired effects against the network.

(1) Network. A network is a group of elements consisting of interconnected nodes and links representing relationships or associations. Sometimes the terms network and system are synonymous. This publication uses the term network to distinguish threat networks from the multitude of other systems, such as an air defense system, communications system, transportation system, etc.

(2) Cell. A cell is a subordinate organization formed around a specific process, capability, or activity within a designated larger organization.

(3) Node. A node is an element of a network that represents a person, place, or physical object. Nodes represent tangible elements within a network or OE that can be targeted for action. Nodes may fall into one or more PMESII categories.

(4) Link. A link is a behavioral, physical, or functional relationship between nodes.

Links establish the interconnectivity between nodes that allows them to work together as a network—to behave in a specific way (accomplish a task or perform a function). Nodes and links are useful in identifying COGs, networks, and cells the JFC may wish to influence or change during an operation.
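The node-link construct above can be sketched as a small data structure. All names, roles, and relationships here are illustrative assumptions, not drawn from any real network:

```python
from collections import defaultdict

# Minimal node-link representation of a network (all names are hypothetical).
# Each node is a person, place, or physical object; each link is a typed
# behavioral, physical, or functional relationship between two nodes.
nodes = {
    "A": {"type": "person", "role": "financier"},
    "B": {"type": "person", "role": "courier"},
    "C": {"type": "place", "role": "safe house"},
}
links = [
    ("A", "B", "funds"),   # functional relationship
    ("B", "C", "visits"),  # physical relationship
]

# Build an undirected adjacency view so questions about the topology
# (who connects to whom) can be asked of the data.
adjacency = defaultdict(set)
for src, dst, _kind in links:
    adjacency[src].add(dst)
    adjacency[dst].add(src)

print(sorted(adjacency["B"]))  # B links A and C: ['A', 'C']
```

A structure like this is the raw input that network mapping and the analysis methods described below operate on.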

  1. Network Analysis
  2. Network analysis is a means of gaining understanding of a group, place, physical object, or system. It identifies relevant nodes, determines and analyzes links between nodes, and identifies key nodes.

The PMESII systems perspective is a useful starting point for analysis of threat networks.

Network analysis facilitates identification of significant information about networks that might otherwise go unnoticed. For example, network analysis can uncover positions of power within a network, show the cells that account for its structure and organization, find individuals or cells whose removal would greatly alter the network, and facilitate measuring change over time.

  1. All networks are influenced by and in turn influence the OEs in which they exist. Analysts must understand the underlying conditions; the frictions between individuals and groups; familial, business, and governmental relationships; and drivers of instability that are constantly subject to change and pressures. All of these factors evolve as the networks change shape, increase or decrease capacity, and strive to influence and control things within the OE, and they contribute to or hinder the networks’ successes. Environmental framing is the selecting, organizing, and interpreting of a complex reality to make sense of it; it serves as a guide for analyzing, understanding, and acting.
  2. Networks are typically formed at the confluence of three conditions: the presence of a catalyst, a receptive audience, and an accommodating environment. As conditions within the OE change, the network must adapt in order to maintain a minimal capacity to function within these conditions.

(1) Catalyst. A catalyst is a condition or variable within the OE that could motivate or bind a group of individuals together to take some type of action to meet their collective needs. These catalysts may be identified as critical variables as units conduct their evaluation of the OE and may consist of a person, idea, need, event, or some combination thereof. The potential exists for the catalyst to change based on the conditions of the OE.

(2) Receptive Audience. A receptive audience is a group of individuals that feel they have more to gain by engaging in the activities of the network than by not participating. Additionally, in order for a network to form, the members of the network must have the motivation and means to conduct actions that address the catalyst that generated the network. Depending on the type of network and how it is organized, leadership may or may not be necessary for the network to form, survive, or sustain collective action. The receptive audience originates from the human dimension of the OE.

(3) Accommodating Environment. An accommodating environment is the conditions within the OE that facilitate the organization and actions of a network. Proper conditions must exist within the OE for a network to form to fill a real or perceived need. Networks can exist for a time without an accommodating environment, but without it the network will ultimately fail.

  1. Networks utilize the PMESII system structure within the OE to form, survive, and function. Like the joint force, threat networks will also have desired end states and objectives. As analysis is being conducted of the OE, the joint staff should identify the critical variables within the OE for the network. A critical variable is a key resource or condition present within the OE that has a direct impact on the commander’s objectives and may affect the formation and sustainment of networks.
  2. Determining and Analyzing Node-Link Relationships

Links are derived from data or extrapolations based on data. A benefit of graphically portraying node-link relationships is that the potential impact of actions against certain nodes can become more evident. Social network analysis (SNA) provides a method that helps the JFC and staff understand the relevance of nodes and links. Network mapping is essential to conducting SNA.

  1. Link Analysis. Link analysis identifies and analyzes relationships between nodes in a network. Network mapping provides a visualization of the links between nodes, but does not provide the qualitative data necessary to fully define the links.

During link analysis, the analyst examines the conditions of each relationship (strong or weak, informal or formal), whether formed by familial, social, cultural, political, virtual, professional, or any other ties.

  1. Nodal Analysis. Individuals are associated with numerous networks due to their individual identities. A node’s location within a network and in relation to other nodes carries identity, power, or belief and influences behavior.

Examples of these types of identities include locations of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. These individual attributes are often collected during identity activities and fused with attributes from unrelated collection activities to form identity intelligence (I2) products. Some aspects used to help understand and define an individual are directly related to the conditions that supported the development of relationships to other nodes.

  1. Network Analysis. Throughout the JIPOE process, at every echelon and production category, one of the most important, but least understood, aspects of analysis is sociocultural analysis (SCA). SCA is the study, evaluation, and interpretation of information about adversaries and relevant actors through the lens of group-level decision making to discern catalysts of behavior and the context that shapes behavior. SCA considers relationships and activities of the population, SNA (looking at the interpersonal, professional, and social networks tied to an individual), as well as small and large group dynamics.

SNA not only examines individuals and groups of individuals within a social structure such as a terrorist, criminal, or insurgent organization, but also examines how they interact. Interactions are often repetitive, enduring, and serve a greater purpose, and the interaction patterns affect behavior. If enough node and link information can be collected, behavior patterns can be observed and, to some extent, predicted.

SNA differs from link analysis in that it analyzes relationships among similar objects (e.g., people to people or group to group), whereas link analysis can relate dissimilar objects (e.g., a person to an event or a location). SNA provides objective analysis of current and predicted network structure and interaction of networks that have an impact on the OE.

  1. Threat Networks and Cells

A network must perform a number of functions in order to survive and grow. These functions can be seen as cells that have their own internal organizational structure and communications. These cells work in concert to achieve the overall organization’s goals.

Networks do not exist in a vacuum. They normally share nodes and links with other networks. Each network may require a unique operational approach as they adapt to their OE or to achieve new objectives. They may form a greater number of cells if they are capable of independent operations consistent with the threat network’s overall operational goals.

They may move to a more hierarchical system due to lack of leadership, questions regarding loyalty of subordinates, or inexperienced lower-level personnel. Understanding these dimensions allows a commander to craft a more effective operational approach. The cells described below are examples only; the list is neither exhaustive nor exclusive. Each network and cell will change, adapt, and morph over time.

  1. Operational Cells. Operational cells carry out the day-to-day operations of the network and are typically people-based (e.g., terrorists, guerrilla fighters, drug dealers). It is extremely difficult to gather intelligence on and depict every single node and link within an operational network. However, understanding key nodes, links, and cells that are particularly effective allows for precision targeting and greater effectiveness.
  2. Logistical Cells. Logistical cells provide threat networks the necessary supplies, weapons, ammunition, fuel, and military equipment to operate. Logistical cells are easier to observe and target than operational or communications cells since they move large amounts of material, which makes them more visible. These cells may include individuals who are not as ideologically motivated or committed as those in operational networks.

Threat logistical cells often utilize legitimate logistics nodes and links to hide their activities “in the noise” of legitimate supplies destined for a local or regional economy.

  1. Training Cells. Most network leaders desire to grow the organization for power, prestige, and advancement of their goals. Logistical cells may be used to move material, trainers, and trainees into a training area, or that portion of logistics may be a distinct part of the training cells.

Training requires the aggregation of new personnel and often includes physical structures to support activities which may also be visible and provide additional information to better understand the network.

  1. Communications Cells. Most threat networks have, at a minimum, rudimentary communications cells for operational, logistical, and financial purposes, and another to communicate their strategic narrative to a target or neutral population.

The use of Internet-based social media platforms by threat networks increases the likelihood that friendly forces can gather information on them, including geospatial information.

  1. Financial Cells. Threat networks require funding for every aspect of their activities, to maintain and expand membership, and to spread their message. Their financial cell moves money from legitimate and illegitimate business operations, foreign donors, and taxes collected or coerced from the population to the operational area.
  2. WMD Proliferation Cells. Many of these cells are not organized specifically for the proliferation of WMD. In fact, many existing cells may be utilized out of convenience. Examples of existing cells include human trafficking, counterfeiting, and drug trafficking.

The JFC should use a systems perspective to better understand the complexity of the OE and associated networks. This perspective looks across the PMESII systems to identify the nodes, links, COGs, and potential vulnerabilities within the network.

  1. Analyze the Network

Key nodes exist in every major network and are critical to their function. Nodes may be people, places, or things. For example, a town that is the primary conduit for movement of illegal narcotics would be the key node in a drug trafficking network. Some may become decisive points for military operations since, when acted upon, they could allow the JFC to gain a marked advantage over the adversary or otherwise to contribute materially to achieving success. Weakening or eliminating a key node should cause its related group of nodes and links to function less effectively or not at all, while strengthening the key node could enhance the performance of the network as a whole. Key nodes often are linked to, resident in, or influence multiple networks.

Node centrality can highlight possible positions of importance, influence, or prominence and patterns of connections. A node’s relative centrality is determined by analyzing measurable characteristics: degree, closeness, betweenness, and eigenvector.
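Two of the centrality measures named above, degree and closeness, can be sketched in a few lines of plain Python. The toy network and node names below are hypothetical, chosen only to show how centrality surfaces a well-connected node:

```python
from collections import deque

# Toy undirected network as adjacency sets (all node names are illustrative).
graph = {
    "leader":    {"courier", "financier"},
    "courier":   {"leader", "cell1", "cell2"},
    "financier": {"leader"},
    "cell1":     {"courier"},
    "cell2":     {"courier"},
}

def degree_centrality(g, node):
    # Fraction of the other nodes directly linked to this node.
    return len(g[node]) / (len(g) - 1)

def closeness_centrality(g, node):
    # Inverse of the average shortest-path distance to every other node,
    # computed by breadth-first search; assumes a connected graph.
    dist = {node: 0}
    queue = deque([node])
    while queue:
        cur = queue.popleft()
        for nbr in g[cur]:
            if nbr not in dist:
                dist[nbr] = dist[cur] + 1
                queue.append(nbr)
    return (len(g) - 1) / sum(d for n, d in dist.items() if n != node)

# The courier touches the most nodes and sits closest to the rest of the
# network, so both measures rank it highest here.
print(max(graph, key=lambda n: degree_centrality(graph, n)))  # courier
```

Betweenness and eigenvector centrality follow the same idea but require more machinery (shortest-path counting and iterative matrix methods, respectively); mature SNA libraries implement all four.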

CHAPTER III

NETWORKS IN THE OPERATIONAL ENVIRONMENT

“How many times have we killed the number three in al-Qaida? In a network, everyone is number three.”

Dr. John Arquilla, Naval Postgraduate School

  1. Networked Threats and Their Impact on the Operational Environment
  2. In a world increasingly characterized by volatility, uncertainty, complexity, and ambiguity, a wide range of local, national, and transnational irregular challenges to the stability of the international system have emerged. Traditional threats like insurgencies and criminal gangs have been exploiting weak or corrupt governments for years, but the rise of transnational extremists and their active cooperation with traditional threats has changed the global dynamic.
  3. All networks are vulnerable, and a JFC and staff armed with a comprehensive understanding of a threat network’s structure, purpose, motivations, functions, interrelationships, and operations can determine the most effective means, methods, and timing to exploit that vulnerability.

Network analysis and exploitation are not simple tasks. Networked threats are highly adaptable adversaries with the ability to select a variety of tactics, techniques, and technologies and blend them in unconventional ways to meet their strategic aims. Additionally, many threat networks supplant or even replace legitimate government functions such as health and social services, physical protection, or financial support in ungoverned or minimally governed areas. This de facto governance of an area by a threat network makes it more difficult for the joint force to simultaneously attack a threat and meet the needs of the population.

  1. Once the JFC identifies the networks in the OE and understands their interrelationships, functions, motivations, and vulnerabilities, the commander tailors the force to apply the most effective tools against the threat.

The JTF requires active support and participation by USG, HN, nongovernmental agencies, and partners, particularly when it comes to addressing cross-border sanctuary, arms flows, and the root causes of instability. This “team of teams” approach facilitates unified action, which is essential for organizing for operations against an adaptive threat.

  1. Threat Network Characteristics

Threat networks do not differ much from non-threat networks in their functional organization and requirements. Threat networks manifest themselves and interact with neutral networks for protection, to perpetuate their goals, and to expand their influence. Networks involving people have been described as insurgent, criminal, terrorist, social, political, familial, tribal, religious, academic, ethnic, or demographic. Some non-human networks include communications, financial, business, electrical/power, water, natural resources, transportation, or informational. Networks take many forms and serve different purposes, but are all comprised of people, processes, places, material, or combinations. Individual network components are identifiable, targetable, and exploitable. Almost universally, humans are members of more than one network, and most networks rely on other networks for sustainment or survival.

Organized threats leverage multiple networks within the OE based on mission requirements or to achieve objectives not unilaterally achievable. The following example shows some typical networks that a threat will use and/or exploit. This “network of networks” is always present and presents challenges to the JFC when planning operations to counter threats that nest within various friendly, neutral, and hostile networks.

  1. Adaptive Networked Threats

For a threat network to survive political, economic, social, and military pressures, it must adapt to those pressures. Survival and success are directly connected to adaptability and the ability to access financial, logistical, and human resources. Networks possess many characteristics important to their success and survival, such as flexible C2 structure; a shared identity; and the knowledge, skills, and abilities of group leaders and members to adapt. They must also have a steady stream of resources and may require a sanctuary (safe haven) from which to regroup and plan.

  1. C2 Structure. There are many potential designs for the threat network’s internal organization. Some are hierarchical, some flat, and others may be a combination. The key is that to survive, networks adapt continuously to changes in the OE, especially in response to friendly actions. Commanders must be able to recognize changes in the threat’s C2 structures brought about by friendly actions and maintain pressure to prevent a successful threat reconstitution.
  2. Shared Identity. Shared identity among the membership is normally based on kinship, ideology, religion, and personal relationships that bind the network and facilitate recruitment. These identity attributes can be an important part of current and future identity activities efforts, and analysis can be initiated before hostilities are imminent.
  3. Knowledge, Skills, and Abilities of Group Leaders and Members. All threat networks have varying degrees of proficiency. In initial stages of development, a threat organization and its members may have limited capabilities. An organization’s survival rests on the knowledge, skills, and abilities of its leadership and membership. By seeking out subject matter expertise, financial backing, or proxy support from third parties, an organization can increase their knowledge, skills, and abilities, making them more adaptable and increasing their chance for survival.
  4. Resources. Resources in the form of arms, money, technology, social connectivity, and public recognition are used by threat networks. Identification and systematic strangulation of threat resources is the fundamental principle for CTN. For example, money is one of the critical resources of adversary networks. Denying the adversary its finances makes it harder, and perhaps impossible, to pay, train, arm, feed, and clothe forces or to gather information and produce propaganda.
  5. Adaptability. This includes the ability to learn and adjust behaviors; modify tactics, techniques, and procedures (TTP); improve communications security and operations security; successfully employ IRCs; and create solutions for safeguarding critical nodes and reconstituting expertise, equipment, funding, and logistics lines that are lost to friendly disruption efforts. Analysts conduct trend analysis and examine key indicators within the OE that might suggest how and why networks will change and adapt. Disruption efforts will often provoke a network’s changing of its methods or practices, but often external influences, local relationships and internal friction, geographic and climate challenges, and global economic factors might also be some of the factors that motivate a threat network to change or adapt to survive.
  6. Sanctuary (Safe Havens). Safe havens allow the threat networks to conduct planning, training, and logistic reconstitution. Threat networks require certain critical capabilities (CCs) to maintain their existence, not the least of which are safe havens from which to regenerate combat power and/or areas from which to launch attacks.
  7. Network Engagement
  8. Network engagement is the interactions with friendly, neutral, and threat networks, conducted continuously and simultaneously at the tactical, operational, and strategic levels, to help achieve the commander’s objectives within an OE. To effectively counter threat networks, the joint force must seek to support and link with friendly networks and engage neutral networks through the building of mutual trust and cooperation through network engagement.
  9. Network engagement consists of three components: partnering with friendly networks, engaging neutral networks, and CTN to support the commander’s desired end state.
  10. Individuals may be associated with numerous networks due to their unique identities. Examples of these types of identities include location of birth, family, religion, social groups, organizations, or a host of various characteristics that define an individual. Therefore, it is not uncommon for an individual to be associated with more than one type of network (friendly, neutral, or threat). Individual identities provide the basis that allows for the interrelationship between friendly, neutral, and threat networks to exist. It is this interrelationship that makes categorizing networks a challenge. Classifying a network as friendly or neutral when in fact it is a threat may provide the network with too much freedom or access. Mislabeling a friendly or neutral network as a threat may cause actions to be taken against that network that can have unforeseen consequences.
  11. Networks are comprised of individuals who are involved in a multitude of activities, including social, political, monetary, religious, and personal. These human networks exist in every OE, and therefore network engagement activities will be conducted throughout all phases of the conflict continuum and across the range of operations.
  12. Networks, Links, and Identity Groups

All individuals are members of multiple, overlapping identity groups (see Figure III-3). These identity groups form links of affinity and shared understanding, which may be leveraged to form networks with shared purpose.

Many threat networks rely on family and tribal bonds when recruiting for the network’s inner core. These members have been vetted for years and are almost impossible to turn. For analysts, identifying family and tribal affiliations assists in developing a targetable profile on key network personnel. Even criminal networks will tend to be densely populated by a small number of interrelated identity groups.
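The idea that overlapping identity groups suggest affinity links can be sketched with simple set intersections. Every individual, attribute, and resulting link below is a hypothetical illustration, not data from any actual case:

```python
# Hypothetical identity attributes per individual. Shared attributes suggest
# candidate affinity links an analyst might examine further; they do not by
# themselves establish membership in any network.
identities = {
    "X": {"tribe_a", "mosque_1", "village_z"},
    "Y": {"tribe_a", "village_z"},
    "Z": {"mosque_1", "diaspora_eu"},
}

def affinity_links(people):
    # Emit a candidate link for each pair sharing at least one identity group.
    links = []
    names = sorted(people)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            shared = people[a] & people[b]
            if shared:
                links.append((a, b, sorted(shared)))
    return links

print(affinity_links(identities))
```

Here X and Y share tribal and village ties, X and Z share a mosque, and Y and Z share nothing, so only the first two pairs produce candidate links.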

  1. Family Network. Some members or associates have familial bonds. These bonds may be cross-generational.
  2. Cultural Network. Network links can share affinities due to culture, which include language, religion, ideology, country of origin, and/or sense of identity. Networks may evolve over time from being culturally based to proximity based.
  3. Proximity Network. The network shares links due to geographical ties of its members (e.g., past bonding in correctional or other institutions or living within specific regions or neighborhoods). Members may also form a network with proximity to an area strategic to their criminal interests (e.g., a neighborhood or key border entry point). There may be a dominant ethnicity within the group, but they are primarily together for reasons other than family, culture, or ethnicity.
  4. Virtual Network. A network that may not physically meet but work together through the Internet or other means of communication, for legitimate or criminal purposes (e.g., online fraud, theft, or money laundering).
  5. Specialized Networks. Individuals in this network come together to undertake specific activities based on the skills, expertise, or particular capabilities they offer. This may include criminal activities.
  6. Types of Networks in an Operational Environment

There are three general types of networks found within an operational area: friendly, neutral, and hostile/threat networks. A network may also be in a state of transition and therefore difficult to classify.

  1. Threat Networks

Threat networks may be formally intertwined or come together when mutually beneficial. This convergence (or nexus) between threat networks has greatly exacerbated regional instability and allowed threats and alliances to extend their operational reach and power to global proportions.

  1. Identify a Threat Network

Threat networks often attempt to remain hidden. How can commanders determine not only which networks are within an operational area, but also which pose the greatest threat?

By understanding the basic, often masked sustainment functions of a given threat network, commanders may also identify the individual networks within it. For example, all networks require communications, resources, and people. By understanding the functions of a network, commanders can make educated assumptions as to its makeup and determine not only where it is, but also when and how to engage it. As previously stated, there are many neutral networks that are used by both friendly and threat forces; the difficult part is determining which networks are a threat and which are not. The “find” aspect of the find, fix, finish, exploit, analyze, and disseminate (F3EAD) targeting methodology is initially used to discover and identify networks within the OE. The F3EAD methodology is not only used for identifying specific actionable targets; it is also used to uncover the nature, functions, structures, and numbers of networks within the OE. A thorough JIPOE product, coupled with “on-the-ground” assessment, observation, and all-source intelligence collection, will ultimately lead to an understanding of the OE and will allow the commander to visualize the network.

CHAPTER IV

PLANNING TO COUNTER THREAT NETWORKS

  1. Joint Intelligence Preparation of the Operational Environment and Threat Networks
  2. A comprehensive, multidimensional assessment of the OE will assist commanders and staffs in uncovering threat network characteristics and activities, developing focused operations to attack vulnerabilities, better anticipating both the intended and unintended consequences of threat network activities and friendly countermeasures, and determining appropriate means to assess progress toward stated objectives.
  3. Joint force, component, and supporting commands and staffs use JIPOE products to prepare estimates used during mission analysis and selection of friendly courses of action (COAs). Commanders tailor the JIPOE analysis based on the mission. As previously discussed, the best COA may not be to destroy a threat’s entire network or cells; friendly or neutral populations may use the same network or cells, and to destroy it would have a negative effect.
  4. Understanding the Threat’s Network
  5. The threat has its own version of the OE that it seeks to shape to maintain support and attain its goals. In many instances, the challenge facing friendly forces is complicated by the simple fact that significant portions of a population might consider the threat as the “home team.” To neutralize or defeat a threat network, friendly forces must do more than understand how the threat network operates, its organization goals, and its place in the social order; they must also understand how the threat is shaping its environment to maintain popular support, recruit, and raise funds. The first step in understanding a network is to develop a network profile through analysis of a network’s critical factors.
  6. COG and Critical Factors Analysis (CFA). One of the most important tasks confronting the JFC and staff during planning is to identify and analyze the threat’s network, and in most cases the network’s critical factors (see Figure IV-1) and COGs.
  7. Network Function Template. Building a network function template is a method to organize known information about a network’s structure and functions. By developing a network function template, the information can be initially understood and then used to facilitate CFA. Building a network function template is not a requirement for conducting CFA, but it helps the staff to visualize the interactions between functions and supporting structure within a network.
  8. Critical Factors Analysis
  9. CFA is an analytical framework to assist planners in analyzing and identifying a COG and to aid operational planning. The critical factors are the CCs, critical requirements (CRs), and CVs.

Key terminology for CFA includes:

(1) COG for network analysis is a conglomeration of tangible items and/or intangible factors that not only motivates individuals to join a network, but also promotes their will to act to achieve the network’s objectives and attain the desired end state. A COG for networks will often be difficult to target directly due to complexity and accessibility.

(2) CCs are the primary abilities essential to accomplishing the objective of the network within a given context. Analysis to identify CCs for a network is only possible with an understanding of the structure and functions of the network, which is supported by other network analysis methods.

(3) CRs are the essential conditions, resources, and means the network requires to perform a CC. These resources are used or consumed to carry out actions, enabling a CC to function fully. Networks require resources to take action and function. These resources include personnel, equipment, money, and any other commodity that supports the network’s CCs.

(4) CVs are CRs or components thereof that are deficient or vulnerable to neutralization, interdiction, or attack in a manner that achieves decisive results. A network’s CVs will change as networks adapt to conditions within the OE. Identification of CVs for a network should be considered during the targeting process, but may not necessarily be a focal point of operations without further analysis.

  1. Building a network function template involves several steps:

(1) Step 1: Identify the network’s desired end state. The network’s desired end state is associated with the catalyst that supported the formation of the network. The primary question the staff needs to answer is: what are the network’s goals? The following are examples of desired end states for various organizations:

(a) Replacing the government of country X with an Islamic caliphate.

(b) Liberating country X.
(c) Controlling the oil fields in region Y.
(d) Establishing regional hegemony.

(e) Imposing Sharia on village Z.

(f) Driving multinational forces out of the region.


(2) Step 2: Identify possible ways or actions (COAs) that can attain the desired end state. This step refers to the ways a network can take action to attain its desired end state through its COAs. Just as staffs analyze a conventional force to determine the likely COA that force will take, this must also be done for the networks selected for engagement. It is important to note that each network will have a variety of options available to it, and its likely COA will be associated with the intent of the network’s members. Examples of ways for some networks may include:

(a) Conducting an insurgency operation or campaign.

(b) Building PN capacity.
(c) Attacking with conventional military forces.
(d) Conducting acts of terrorism.

(e) Seizing the oil fields in Y.
(f) Destroying enemy forces.
(g) Defending village Z.
(h) Intimidating local leaders.
(i) Controlling smuggling routes.

(j) Bribing officials.


(3) Step 3: Identify the functions that the network possesses to take actions. Using the network function template from previous analysis, the staff must refine this analysis to identify the functions within the network that could be used to support the potential ways or COAs for the network. The functions identified result in a list of CCs. Examples of items associated with the functions of a network that would support the example list of ways identified in the previous step are:

(a) Conducting an insurgency operation or campaign: insurgents are armed and can conduct attacks.

(b) Building PN capacity: forces and training capability available.

(c) Attacking with conventional military forces: military forces are at an operational level with C2 in place.

(d) Conducting acts of terrorism: network members possess the knowledge and assets to take action.

(e) Seizing the oil fields in Y: network possesses the capability to conduct coordinated attack.

(f) Destroying enemy forces: network has the assets to identify, locate, and destroy enemy personnel.

(g) Defending village Z: network possesses the capabilities and presence to conduct defense.

(h) Intimidating local leaders: network has freedom of maneuver and access to local leaders.

(i) Controlling smuggling routes: network’s sphere of influence and capabilities allow for control.

(j) Bribing officials: network has access to officials and resources to facilitate bribes.

(4) Step 4: List the means or resources available or needed for the network to execute CCs. The purpose of this step is to determine the CRs for the network. Again, this step is supported by the initial analysis conducted for the network: network mapping, link analysis, SNA, and the network function template. Based upon the CCs identified for the network, the staff must answer the question: what resources must the network possess to employ the identified CCs? The list of CRs can be extensive, depending on the capability being analyzed. The following are examples of CRs that may be identified for a network:

(a) A group of foreign fighters.
(b) A large conventional military.
(c) A large conventional military formation (e.g., an armored corps).
(d) IEDs.
(e) Local fighters.
(f) Arms and ammunition.
(g) Funds.
(h) Leadership.
(i) A local support network.
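The relationship between the CCs identified in step 3 and the CRs listed in step 4 can be sketched as a simple data structure. This is an illustrative sketch only; the capability and requirement names below are hypothetical, not drawn from any real analysis.

```python
# Hypothetical sketch: relating a network's critical capabilities (CCs)
# to the critical requirements (CRs) each one depends on. All names are
# illustrative examples from the lists above, not real analytic data.
cc_to_crs = {
    "conduct acts of terrorism": ["local fighters", "IEDs", "funds", "local support network"],
    "seize the oil fields in Y": ["leadership", "arms and ammunition", "local fighters"],
    "bribe officials": ["funds", "access to officials"],
}

def consolidated_crs(mapping):
    """Collect the deduplicated list of CRs across all CCs."""
    crs = set()
    for requirements in mapping.values():
        crs.update(requirements)
    return sorted(crs)

print(consolidated_crs(cc_to_crs))
```

A structure like this makes the later steps mechanical: the consolidated CR list is the input to the vulnerability screening in step 7, and the per-CC groupings show which functions an engaged CR would affect.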

(5) Step 5: Correlate CCs and CRs to OE evaluation to identify critical variables.

(a) An understanding of the CCs and CRs for various networks can be applied directly to planning and targeting, but opportunities may be missed or additional risk accepted unknowingly until the staff relates these items to the analysis of the OE.

(b) A critical variable may be a CC, CR, or CV for multiple networks. Gaining an understanding of this will occur in the next step of CFA. The following are examples of critical variables that may be identified for networks:

  1. A group of foreign fighters is exposed for potential engagement.
  2. A large conventional military formation (e.g., an armored corps) is located and its likely COA is identified.
  3. IED maker and resources are identified and can be neutralized.
  4. Local fighters’ routes of travel and recruitment are identifiable.
  5. Arms and ammunition sources of supply are identifiable.
  6. Funds are located and potential exists for seizure.
  7. Leadership is identified and accessible for engagement.
  8. A local support network is identified and understood through analysis.

(6) Step 6: Compare and contrast the CRs for each network analyzed. This step of CFA can only be accomplished after full network analysis has been completed for all selected networks within the OE. To compare and contrast, the information from the analysis of each network must be available. Correlating the critical variables for each network allows the staff to understand:

(a) Potential desired first- and second-order effects of engagement.

(b) Potential undesired first- and second-order effects of engagement.

(c) Direct engagement opportunities.
(d) Indirect engagement opportunities.
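One mechanical way to perform the comparison in step 6 is to invert the per-network CR lists and keep only the CRs shared by more than one network, since engaging those produces effects across networks. The network names and CRs below are hypothetical assumptions for illustration.

```python
# Illustrative sketch of step 6: comparing CRs across analyzed networks
# to find shared requirements whose engagement would produce first- and
# second-order effects on more than one network. All data is notional.
network_crs = {
    "network_A": {"funds", "smuggling routes", "leadership", "local fighters"},
    "network_B": {"funds", "smuggling routes", "foreign fighters"},
    "network_C": {"local support network", "funds"},
}

def shared_crs(networks):
    """Map each CR to the sorted list of networks that depend on it,
    keeping only CRs shared by two or more networks."""
    usage = {}
    for name, crs in networks.items():
        for cr in crs:
            usage.setdefault(cr, set()).add(name)
    return {cr: sorted(users) for cr, users in usage.items() if len(users) > 1}

print(shared_crs(network_crs))
```

In this notional case, "funds" is a CR for all three networks, so engaging it carries both the broadest potential desired effects and the greatest risk of undesired effects on friendly or neutral networks.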

(7) Step 7: Identify CVs for the network. Identifying CVs of a network is completed by analyzing each CR for the network with respect to criticality, accessibility, recuperability, and adaptability. This analysis is conducted from the perspective of the network, with consideration of threats within the OE that may impact the network being analyzed. Conducting the analysis from this perspective allows staffs to identify CVs for any type of network (friendly, neutral, or threat).

(a) Criticality. A CR that, when engaged by a threat, results in a degradation of the network’s structure, function, or ability to sustain itself. Criticality considers the importance of the CR to the network, and the following questions should be considered when conducting this analysis:

  1. What impact will removing the CR have on the structure of the network?
  2. What impact will removing the CR have on the functions of the network?
  3. What function is affected by engaging the CR?
  4. What effect does the CR have on other networks?
  5. Is the CR a CR for other networks? If so, which ones?
  6. How is the CR related to conditions of sustainment?

 

(b) Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Accessibility of the CR in some cases is a limiting factor for the true vulnerability of a CR.

The following questions should be considered by the staff when analyzing a CR for accessibility:

  1. Where is the CR?
  2. Is the CR protected?
  3. Is the CR static or mobile?
  4. Who interacts with the CR? How often?
  5. Is the CR in the operational area of the threat to the network?
  6. Can the CR be engaged with threat capabilities?
  7. If the CR is inaccessible, are there alternative CRs that if engaged by a threat result in a similar effect on the network?

(c) Recuperability. The amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Analyzing the CR in regard to recuperability is associated with the network’s ability to regenerate when components of its structure have been removed or damaged. This plays a role in the adaptive nature of a network, but must not be confused with the last aspect of the analysis for CVs. The following questions should be considered by the staff when analyzing a CR for recuperability:

  1. If the CR is removed:
    a. Can the CR be replaced?
    b. How long will it take to replace?
    c. Does the replacement fulfill the network’s structural and functional levels?
    d. Will the network need to make adjustments to implement the replacement for the CR?
  2. If the CR is damaged:
    a. Can the CR be repaired?
    b. How long will it take to repair?
    c. Will the repaired CR return the network to its previous structural and functional levels?

(d) Adaptability. The ability of a network (with which the CR is associated) to change in response to conditions in the OE brought about by the actions of a threat taken against it, while maintaining its structure and function.

Adaptability considers the network’s ability to change or modify its functions, modify its catalyst, shift focus to potential receptive audience(s), or make any other changes to adapt to the conditions in the OE. The following questions should be considered by the staff when analyzing a CR for adaptability:

  1. Can the CR change its structure while maintaining its function?
  2. Is the CR tied to a CC that could cause it to adapt as a normal response to a change in a CC (whether due to hostile engagement or a natural change brought about by a friendly network’s adjustment to that CC)?
  3. Can the CR be changed to fulfill an emerging CC or function for the network?
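As a rough illustration of how the four criteria in step 7 might be combined, the following sketch screens notional CRs. The scoring scale, threshold, and CR entries are invented assumptions; doctrine leaves the actual weighting to the analyst’s judgment.

```python
# Hedged sketch of step 7: screening each CR against the four CV criteria
# (criticality, accessibility, recuperability, adaptability). Scores and
# the qualifying threshold are illustrative assumptions only.
def is_critical_vulnerability(cr, min_score=3):
    """A CR qualifies as a CV here only when it is both critical and
    accessible, and its recuperability/adaptability scores suggest the
    effect of engagement would persist (higher score = slower recovery)."""
    if not (cr["critical"] and cr["accessible"]):
        return False
    return (cr["recuperability"] + cr["adaptability"]) / 2 >= min_score

candidates = [
    {"name": "IED maker",  "critical": True,  "accessible": True,  "recuperability": 4, "adaptability": 3},
    {"name": "funds",      "critical": True,  "accessible": False, "recuperability": 2, "adaptability": 2},
    {"name": "safe house", "critical": False, "accessible": True,  "recuperability": 1, "adaptability": 1},
]
cvs = [c["name"] for c in candidates if is_critical_vulnerability(c)]
print(cvs)  # only the IED maker passes all four screens in this example
```

The gating logic mirrors the doctrine: an inaccessible CR is not a CV no matter how critical it is, and a CR the network can quickly replace or adapt around offers little enduring effect.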

 

  1. Visualizing Threat Networks
  2. Mapping the Network. Mapping threat networks starts by detailing the primary threats (e.g., terrorist group, drug cartel, money-laundering group). Mapping routinely starts with people and places and then adds functions, resources, and activities.

Mapping starts out as a simple link between two nodes and progresses to depict the organizational structure (see Figure IV-4). Individual network members themselves may not be aware of the organizational structure. It will be rare that enough intelligence and information is collected to portray an entire threat network and all its cells.

This will be a continuous process as the networks themselves transform and adapt to their environment and the joint force operations. To develop and employ theater-strategic options, the commander must understand the series of complex, interconnected relationships at work within the OE.

(1) Chain Network. The chain or line network is characterized by people, goods, or information moving along a line of separated contacts with end-to-end communication traveling through intermediate nodes.

(2) Star or Hub Network. The hub, star, or wheel network, as in a franchise or a cartel, is characterized by a set of actors tied to a central (but not hierarchical) node or actor that must communicate and coordinate with network members through the central node.

(3) All-Channel Network. The all-channel, or full-matrix network, is characterized by a collaborative network of groups where everybody connects to everyone else.
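The three basic topologies can be represented as simple adjacency lists; a minimal sketch, with arbitrary node counts and plain dictionaries rather than a graph library to keep it self-contained:

```python
# Adjacency-list sketches of the chain, star/hub, and all-channel
# network structures described above. Node counts are arbitrary.
def chain(n):
    """Chain/line: each node links only to its neighbors in sequence."""
    return {i: [j for j in (i - 1, i + 1) if 0 <= j < n] for i in range(n)}

def star(n):
    """Hub/star: peripheral nodes communicate only through hub node 0."""
    links = {0: list(range(1, n))}
    links.update({i: [0] for i in range(1, n)})
    return links

def all_channel(n):
    """All-channel/full matrix: everybody connects to everyone else."""
    return {i: [j for j in range(n) if j != i] for i in range(n)}

# Link counts highlight the structural difference: removing the hub
# disconnects a star, while an all-channel network has no single cutpoint.
print(max(len(v) for v in star(5).values()))         # hub carries 4 links
print(max(len(v) for v in chain(5).values()))        # interior nodes carry 2
print(min(len(v) for v in all_channel(5).values()))  # every node carries 4
```

Seen this way, the targeting implications follow directly: a chain is disrupted by cutting any interior link, a star by engaging the hub, while an all-channel network degrades only gradually as nodes are removed.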

  1. Mapping Multiple Networks. Each network may be different in structure and purpose. Normally the network structure is fully mapped, and cells are shown as they relate to the larger network. Because mapping each network is time- and labor-intensive, staffs should carefully weigh how much time and effort to allocate to mapping the supporting networks and where to focus their efforts, so that they both provide a timely response and accurately identify the relationships and critical nodes significant for disruption efforts.
  2. Identify the Influencing Factors of the Network. Influencing factors of the network (or various networks) within an OE can be identified largely by the conditions created by the activities of the network. These conditions are what influence the behaviors, attitudes, and vulnerabilities of specific populations. Factors such as threat information activities (propaganda) may be one of the major influencers, but so are activities such as kidnapping, demanding protection payments, building places of worship, destroying historical sites, building schools, providing basic services, denying freedom of movement, harassment, illegal drug activities, prostitution, etc. To identify influencing factors, a proven method is to first look at the conditions of a specific population or group, determine how those conditions create/force behavior, and then determine the causes of the conditions. Once influence factors are identified, the next step is to determine if the conditions can be changed and/or if they cannot, determine if there is alternative, viable behavior available to the population or group.
  3. To produce a holistic view of threat, neutral, and friendly networks as a whole within a larger OE requires analysis to describe how these networks interrelate. Most important to this analysis is describing the relationships within and between the various networks that directly or indirectly affect the mission.
  4. Collaboration. Within most efforts to produce a comprehensive view of the networks, certain types of data or information may not be available to correctly explain or articulate with great detail the nature of relationships, capabilities, motives, vulnerabilities, or communications and movements. It is incumbent upon intelligence organizations to collaborate and share information, data, and analysis, and to work closely with interagency partners to respond to these intelligence gaps.
  5. Targeting Evaluation Criteria

Once the network is mapped, the JFC and staff identify network nodes and determine their suitability for targeting. A useful tool in determining a target’s suitability for attack is the criticality, accessibility, recuperability, vulnerability, effect, and recognizability (CARVER) analysis (see Figure IV-5). CARVER is a subjective and comparative system that weighs six target characteristic factors and ranks them for targeting and planning decisions. CARVER analysis can be used at all three levels of warfare: tactical, operational, and strategic. Once target evaluation criteria are established, target analysts use a numerical rating system (1 to 5) to rank the CARVER factors for each potential target. In a one to five numbering system, a score of five would indicate a very desirable rating while a score of one would reflect an undesirable rating.

A notional network-related CARVER analysis is provided in paragraph 6, “Notional Network Evaluation.” The CARVER method as it applies to networks provides a graph-based numeric model for determining the importance of engaging an identified target, using qualitative analysis, based on seven factors:

  1. Network Affiliations. Network affiliations identify each network of interest associated with the CR being evaluated. The importance of understanding the network affiliations for a potential target stems from the interrelationships between networks. Evaluating a potential target from the perspective of each affiliated network will provide the joint staff with potential second- and third-order effects on both the targeted threat networks and other interrelated networks within the OE.
  2. Criticality. Criticality is a CR that when engaged by a threat results in a degradation of the network’s structure, function, or impact on its ability to sustain itself. Evaluating the criticality of a potential target must be accomplished from the perspective of the target’s direct association or need for a specific network. Depending on the functions and structure of the network, a potential target’s criticality may differ between networks. Therefore, criticality must be evaluated and assigned a score for each network affiliation. If the analyst has completed CFA for the networks of interest, criticality should have been analyzed during the identification of CVs.
  3. Accessibility. A CR is accessible when capabilities of a threat to the network can be directly or indirectly employed to engage the CR. Inaccessible CRs may require alternate target(s) to produce desired effects. The accessibility of a potential target will remain the same, regardless of network affiliation. This element of CARVER does not require a separate evaluation of the potential target for each network. Much like criticality, accessibility will have been evaluated if the analyst has conducted CFA for the network as part of the analysis for the network.
  4. Recuperability. Recuperability is the amount of time that the network needs to repair or replace a CR that is engaged by a threat capability. Recuperability is analyzed during CFA to determine the vulnerability of a CR for the network. Since CARVER (network) is applied to evaluate the potential targets with each affiliated network, the evaluation for recuperability will differ for each network. What affects recuperability is the network’s function of regenerating members or replacing necessary assets with suitable substitutes.
  5. Vulnerability. A target is vulnerable if the operational element has the means and expertise to successfully attack the target. When determining the vulnerability of a target, the scale of the critical component needs to be compared with the capability of the attacking element to destroy or damage it. The evaluation of a potential target’s vulnerability is supported by the analysis conducted during CFA and can be used to complete this part of the CARVER (network) matrix. Vulnerability of a potential target will consist of only one value. Regardless of the network of affiliation, vulnerability is focused on evaluating available capabilities to effectively conduct actions on the target.
  6. Effect. This factor evaluates the potential effect on the structure, function, and sustainment of each affiliated network of engaging the CR. The level of effect should consider both the first-order effect on the target itself and the second-order effect on the structure and function of the network.
  7. Recognizability. Recognizability is the degree to which a CR can be recognized by an operational element and/or intelligence collection under varying conditions. The recognizability of a potential target will remain the same, regardless of network of affiliation.
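The 1-to-5 scoring and ranking described above can be sketched as a small worksheet. The targets and score values below are notional assumptions (network affiliations, the seventh factor, are omitted for simplicity since they identify networks rather than contribute a score).

```python
# Hypothetical CARVER (network) worksheet: each factor is scored 1-5
# (5 = most desirable for engagement) and the totals rank candidate
# targets. Targets and scores are notional, not from any real analysis.
FACTORS = ["criticality", "accessibility", "recuperability",
           "vulnerability", "effect", "recognizability"]

targets = {
    "communications cell": {"criticality": 5, "accessibility": 4, "recuperability": 4,
                            "vulnerability": 4, "effect": 5, "recognizability": 3},
    "finance facilitator": {"criticality": 4, "accessibility": 3, "recuperability": 5,
                            "vulnerability": 4, "effect": 4, "recognizability": 4},
    "courier":             {"criticality": 2, "accessibility": 5, "recuperability": 1,
                            "vulnerability": 5, "effect": 2, "recognizability": 2},
}

def carver_rank(tgts):
    """Rank targets by total CARVER score, highest (most suitable) first."""
    totals = {name: sum(scores[f] for f in FACTORS) for name, scores in tgts.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for name, total in carver_rank(targets):
    print(f"{name}: {total}")
```

In this notional run the communications cell and finance facilitator outscore the courier, which parallels the notional evaluation below, where communications cells and financiers emerge as the best targets.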
  8. Notional Network Evaluation
  9. The purpose of conventional target analysis (and the use of CARVER) is to determine enemy critical systems or subsystems to attack to progressively destroy or degrade the adversary’s warfighting capacity and will to fight.
  10. Using network analysis, a commander identifies the critical threat nodes operating within the OE. A CARVER analysis determines the feasibility of attacking each node (ideally simultaneously). While each CARVER value is subjective, detailed analysis allows planners to assign a realistic value.

The commander and the staff then look at other aspects of the network and, for example, determine whether they can disrupt the material needed for training, prevent the movement of trainees or trainers to the training location, or influence other groups to deny access to the area.

  1. The JFC and staff methodically analyze each identified network node and assign a numerical rating to each. In this notional example (see Figure IV-7), it is determined that the communications cells and those who finance threat operations provide the best targets to attack.
  2. Planning operations against threat networks does not differ from standard military planning. These operations still support the JFC’s broader mission and rarely stand alone. Identifying threat networks requires detailed analysis and consideration of second- and third-order effects. It is important to remember that the threat organization itself is the ultimate target, and its networks are merely a means to that end. Neutralizing a given network may prove more beneficial to the JFC’s mission accomplishment than destroying a single multiuser network node. The most effective plans call for simultaneous operations against networks focused on multiple nodes and network functions.
  3. Countering Threat Networks Through the Planning of Phases

As previously discussed, commanders execute CTN activities across all levels of warfare.

Threat networks can be countered using a variety of approaches and means. Early in the operation or campaign, the concept of operations will be based on a synchronized and integrated international effort (USG, PNs, and HN) to ensure that conditions in the OE do not empower a threat network and to deny the network the resources it requires to expand its operations and influence. As the threat increases and conditions deteriorate, the plan will adjust to include a broader range of actions and an increase in the level and focus of targeting of identified critical network nodes: people and activities. Constant pressure must be maintained on the network’s critical functions to deny the network the initiative and disrupt its operating tempo.

Figure IV-8 depicts the notional operation plan phase construct for joint operations. Some phases may not be used during CTN activities.

  1. Shape (Phase 0)

(1) Unified action is the key to shaping the OE. The goal is to deny the threat network the resources needed to expand its operations and to reduce it to the point where it no longer poses a direct threat to regional/local stability, while influencing the network to reduce or redirect its threatening objectives. Shaping operations against threat networks consist of efforts to influence their objectives; dissuade growth; and deny state sponsorship, sanctuary, or access to resources through the unified efforts of interagency, regional, and international partners as well as HN civil authorities. Actions are taken to identify key elements in the OE that can be used to leverage support for the government or other friendly networks and that must be controlled to deny the threat an operational advantage. The OE must be analyzed to identify support for the threat network, as well as support for the relevant friendly and neutral networks. Interagency/international partners help to identify the network’s key components, deny access to resources (usually external to the country), and persuade other actors (legitimate and illicit) to discontinue support for the threat. SIGINT, open-source intelligence (OSINT), and human intelligence (HUMINT) are the primary intelligence sources of actionable information. The legitimacy of the government must be reinforced in the operational area. Efforts to reinforce the government seek to identify the sources of friction within the society that can be reduced through government intervention.

Many phase I shaping activities need to be coordinated during phase 0 due to extensive legal and interagency requirements.

Due to competing resources and the potential lack of available IRCs, executing IO during phase 0 can be challenging. For this reason, consideration must be given on how IRCs can be integrated as part of the whole-of-government approach to effectively shape the information environment and to achieve the commander’s information objectives.

Shaping operations may also include security cooperation activities designed to strengthen PN or regional capabilities and capacity that contribute to greater stability. Shaping operations should focus on changing the conditions that foster the development of adversaries and threats.

(2) During phase 0 (shaping), the J-2’s threat network analysis initially provides a broad description of the structure of the underlying threat organization; identifies the critical functions, nodes, and the relationships between the threat’s activities and the greater society; and paints a picture of the “on-average” relationships.

Some of the CTN actions require long- term and sustained efforts, such as addressing recruitment in targeted communities through development programming. It is essential that the threat is decoupled from support within the affected societies. Critical elements in the threat’s operational networks must be identified and disrupted to affect their operating tempo. Even when forces are committed, the commander continues to shape the OE using various means to eliminate the threat and undertake actions, in cooperation with interagency and multinational partners, to reinforce the legitimate government in the eyes of the population.

(3) The J-2 seeks to identify and leverage information sources that can provide details on the threat network and its relationship to the regional/local political, economic, and social structures that can support and sustain it.

(4) Sharing information and intelligence with partners is paramount since collection, exploitation, and analysis against threat networks requires much greater time than against traditional military adversaries. Information sharing with partners must be balanced with operations security and cannot be done in every instance. Intelligence sharing between CCDRs across regional and functional seams provides a global picture of threat networks not bound by geography. Intelligence efforts within the shaping phase show threat network linkages in terms of leadership, organization, size, scope, logistics, financing, alliances with other networks, and membership.

  1. Deter (Phase I). The intent of this phase is to deter threat network action, formation, or growth by demonstrating partner, allied, multinational, and joint force capabilities and resolve. Many actions in the deter phase include security cooperation activities and IRCs and/or build on security cooperation activities from phase 0. Increased cooperation with partners and allies, multinational forces, interagency and interorganizational partners, international organizations, and NGOs assist in increasing information sharing and provide greater understanding of the nature, capabilities, and linkages of threat networks.

The joint force can enhance deterrence through unified action by collaborating with all friendly elements and by creating a friendly network of organizations and people with far-reaching capabilities and the ability to respond with pressure at multiple points against the threat network.

Phase I begins with coordination activities to influence threat networks on multiple fronts.

Deterrent activities executed in phase I also prepare for phase II by conducting actions throughout the OE to isolate threat networks from sanctuary, resources, and information networks and increase their vulnerability to later joint force operations.

  1. Seize Initiative (Phase II). JFCs seek to seize the initiative through the application of joint force capabilities across multiple LOOs.

Destruction of a single node or cell might do little to impact network operations when assessed against the cost of operations and/or the potential for collateral damage.

As in traditional offensive operations against a traditional adversary, various operations create conditions for exploitation, pursuit, and ultimate destruction of those forces and their will to fight.

  1. Dominate (Phase III). The dominate phase against threat networks focuses on creating and maintaining overwhelming pressure against network leadership, finances, resources, narrative, supplies, and motivation. This multi-front pressure should include diplomatic and economic pressure at the strategic level and informational pressure at all levels.

These pressures are then synchronized with military operations conducted throughout the OE and at all levels of warfare to achieve the same result as traditional operations: to shatter enemy cohesion and will. Operations against threat networks are characterized by dominating and controlling the OE through a combination of traditional warfare, irregular warfare, sustained employment of interagency capabilities, and IRCs.

  1. Stabilize (Phase IV). The stabilize phase is required when there is no fully functioning, legitimate civilian governing authority present or the threat networks have gained political control within a country or region. In cases where the threat network is government aligned, its defeat in phase III may leave that government intact, and stabilization or enablement of civil authority may not be required. After neutralizing or defeating the threat networks (which may have been functioning as a shadow government), the joint force may be required to unify the efforts of other supporting/contributing multinational, international organization, NGO, or USG department and agency participants into stability activities to provide local governance, until legitimate local entities are functioning.
  2. Enable Civil Authority (Phase V). This phase is predominantly characterized by joint force support to legitimate civil governance in the HN. Depending upon the level of HN capacity, joint force activities during phase V may be at the behest or direction of that authority. The goal is for the joint force to enable the viability of the civil authority and its provision of essential services to the largest number of people in the region. This includes coordinating joint force actions with supporting or supported multinational and HN agencies and continuing integrated finance operations and security cooperation activities to favorably influence the target population’s attitude regarding local civil authority’s objectives.

CHAPTER V

ACTIVITIES TO COUNTER THREAT NETWORKS

“Regional players almost always understand their neighborhood’s security challenges better than we do. To make capacity building more effective, we must leverage these countries’ unique skills and knowledge to our collect[ive] advantage.”

General Martin Dempsey, Chairman of the Joint Chiefs of Staff

Foreign Policy, 25 July 2014, The Bend of Power

 

  1. The Challenge

A threat network can be operating for years in the background and suddenly explode on the scene. Identifying and countering potential and actual threat networks is a complex challenge.

  1. Threat networks can take many forms and have many distinct participants, from terrorists to criminal organizations to insurgents, locally or transnationally based…

Threat networks may leverage technologies, social media, global transportation and financial systems, and failing political systems to build a strong and highly redundant support system. Operating across a region provides the threat with a much broader array of resources, safe havens, and flexibility to react to attack and to prosecute its own attacks.

To counter a transnational threat, the US and its partners must pursue multinational cooperation and joint operations to achieve disruption, and must cooperate with HNs within a specified region to fully identify, describe, and mitigate, via multilateral operations, the transnational networks that threaten an entire region and not just individual HNs.

  1. Successful operations are based on the ability of friendly forces to develop and apply a detailed understanding of the structure and interactions of the OE to the planning and execution of a wide array of capabilities to reinforce the HN’s legitimacy and neutralize the threat’s ability to threaten that society.
  2. Targeting Threat Networks
  3. The commander and staff must understand the desired condition of the threat network as it relates to the commander’s objectives and desired end state as the first step of targeting any threat network.
  4. The military end state that is desired is directly related to conditions of the OE. Interrelated human networks comprise the human aspect of the OE, which includes the threat networks that are to be countered. The actual targeting of threat networks begins early in the planning process, since all actions taken must be supportive in achieving the commander’s objectives and attaining the end state. To feed the second phase of the targeting cycle, the threat network must be analyzed using network mapping, link analysis, SNA, CFA, and nodal analysis.
  5. The second phase of the joint targeting cycle is intended to begin the development of target lists for potential engagement. JIPOE is one of the critical inputs to support the development of these products, but must include a substantial amount of analysis on the threat network to adequately identify the critical nodes, CCs (network’s functions), and CRs for the network.

Similar to developing an assessment plan for operations as part of the planning process, the metrics for assessing networks must be developed early in the targeting cycle.

  1. Networks operate as integrated entities—the whole is greater than the sum of its parts. Identifying and targeting the network and its functional components requires patience. A network will go to great lengths to protect its critical components. However, the interrelated nature of network functions means that an attack on one node may have a ripple effect as the network reconstitutes.

Whenever a network reorganizes or adapts, it can expose a larger portion of its members (nodes), relationships (links), and activities. Intelligence collection should be positioned to exploit any effects from the targeting effort, which in turn must be continuous and multi-nodal.

  1. The analytical products for threat networks support the decision of targets to be added to or removed from the target list and specifics for the employment of capabilities against a target. The staff should consider the following questions when selecting targets to engage within a threat network:

(1) Who or what to target? Network analysis provides the commander and staff with the information to prioritize potential targets. Depending on the effect desired for a network, the selected node for targeting may be a person, key resource, or other physical object that is critical in producing a specific effect on the network.

(2) What are the effects desired on the target and network? Understanding the conditions in the OE and the future conditions desired to achieve objectives supports a decision on what type of effects are desired on the target and the threat network as a whole. The desired effects on the threat network should be aligned with the commander’s intent that support objectives or conditions of the threat network to meet a desired end state.

(3) How will those desired effects be produced? The array of lethal and nonlethal capabilities may be employed with the decision to engage a target, whether directly or indirectly. In addition to the ability to employ conventional weapons systems, staffs must consider nonlethal capabilities that are available.

  1. Desired Effects on Networks
  2. Damage effects on an enemy or adversary from lethal fires are classified as light, moderate, or severe. Network engagement takes into consideration the effects of both lethal and nonlethal capabilities.
  3. When commanders decide to generate an effect on a network through engaging specific nodes, the intent may not be to cause damage, but to shape conditions of a mental or moral nature. The intended result of shaping these conditions is to support achieving the commander’s objectives. The desired effects selected are the result of the commander’s vision of the future conditions for the threat networks and within the OE to achieve objectives.

Terms that are used to describe the desired effects of CTN include:

(1) Neutralize. Neutralize is a tactical mission task that results in rendering enemy personnel or materiel incapable of interfering with a particular operation. The threat network’s structure exists to facilitate its ability to perform functions that support achieving its objectives. Neutralization of an entire network may not be feasible, but through analysis, the staff has the ability to identify key parts of the threat network’s structure to target that will result in the neutralization of specific functions that may interfere with a particular operation.

(2) Degrade. To degrade is to reduce the effectiveness or efficiency of a threat. The effectiveness of a threat network is associated with its ability to function as desired to achieve the threat’s objectives. Countering the effectiveness of a network may be accomplished by eliminating the CRs that the network requires to sustain an identified CC, as determined through the application of CFA to the network.

(3) Disrupt. Disrupt is a tactical mission task in which a commander integrates direct and indirect fires, terrain, and obstacles to upset an enemy’s formation or tempo, interrupt the enemy’s timetable, or cause enemy forces to commit prematurely or attack in a piecemeal fashion. From the perspective of disrupting a threat network, the staff should consider the type of operation being conducted, specific functions of the threat network, conditions within the OE that can be leveraged, and the potential application of both lethal and nonlethal capabilities. Additionally, the staff should consider the likely duration of the disruption and the window of opportunity it presents for friendly forces to exploit. Should the disruption result in the elimination of key nodes from the network, the staff must also consider the network’s means of reconstitution and the time it would require.

(4) Destroy. Destroy is a tactical mission task that physically renders an enemy force combat ineffective until it is reconstituted. Alternatively, to destroy a combat system is to damage it so badly that it cannot perform any function or be restored to a usable condition without being entirely rebuilt. Destroying a threat network that is adaptive and transnationally established is an extreme challenge that requires the full collaboration of DOD and intergovernmental agencies, as well as coordination with partnered nations. Isolated destruction of cells may be more plausible and could be accomplished with the comprehensive application of lethal and nonlethal capabilities. Detailed analysis of the cell is necessary to establish a baseline (pre-operation conditions) in order to assess if operations have resulted in the destruction of the selected portion of a network.

(5) Defeat. Defeat is a tactical mission task that occurs when a threat network or enemy force has temporarily or permanently lost the physical means or the will to fight. The defeated force’s commander or leader is unwilling or unable to pursue that individual’s adopted COA, thereby yielding to the friendly commander’s will, and can no longer interfere to a significant degree with the actions of friendly forces. Defeat can result from the use of force or the threat of its use. Defeat manifests itself in some sort of physical action, such as mass surrenders, abandonment of positions, equipment and supplies, or retrograde operations. A commander or leader can create different effects against an enemy to defeat that force.

(6) Deny. Deny is an action to hinder or deny the enemy the use of territory, personnel, or facilities, to include destruction, removal, contamination, or erection of obstructions. An example of deny is to destroy the threat’s communications equipment as a means of denying its use of the electromagnetic spectrum. However, the duration of denial will depend on the enemy’s ability to reconstitute.

(7) Divert. To divert is to turn aside from a path or COA. A diversion is the act of drawing the attention and forces of a threat from the point of the principal operation; an attack, alarm, or feint diverts attention. Diversion causes threat networks or enemy forces to consume resources or capabilities critical to threat operations in a way that is advantageous to friendly operations. Diversions draw the attention of threat networks or enemy forces away from critical friendly operations and prevent threat forces and their support resources from being employed for their intended purpose.

  1. Engagement Strategies
  2. Counter Resource. A counter-resource approach can progressively weaken the threat’s ability to conduct operations in the OE and require the network to seek a suitable substitute to replace eliminated or constrained resources. Like a military organization, a threat network is more than its C2 structure. It must have an assured supply of recruits, food, weapons, and transportation to maintain its position and grow. While the leadership provides guidance to the network, it is the financial and logistical infrastructure that sustains the network. Most threat networks are transnational in nature, drawing financial support, material support, and recruits from a worldwide audience.
  3. Decapitation. Decapitation is the removal of key nodes within the network that are functioning as leaders. Targeting leadership is designed to impact the C2 of the network. Detailed analysis of the network may provide the staff with an indication of how long the network will require to replace leadership once they are removed from the network. From a historical perspective, the removal of a single leader from an adaptive human network has resulted in short-term effects on the network.

When targeting the nodes, links, and activities of threat networks, the JFC should consider the second- and third-order effects on friendly and neutral groups that share network and cell functions. Additionally, the ripple effects throughout the network and its cells should be considered.
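One way to reason about such ripple effects is to compare the network's connectivity before and after removing a candidate node. The sketch below, with entirely invented nodes and links, counts connected components to show how eliminating a single broker can fragment a network:

```python
# Hypothetical sketch of second-order effect analysis: remove a candidate
# node and count the connected components that result. Data is invented.

def components(nodes, links):
    """Return the number of connected components in an undirected network."""
    adj = {n: set() for n in nodes}
    for a, b in links:
        adj[a].add(b)
        adj[b].add(a)
    seen, count = set(), 0
    for n in nodes:
        if n in seen:
            continue
        count += 1
        stack = [n]          # depth-first traversal of one component
        while stack:
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(adj[cur] - seen)
    return count

def remove_node(node, nodes, links):
    """Return the network with one node (and its links) eliminated."""
    rem_nodes = [n for n in nodes if n != node]
    rem_links = [(a, b) for a, b in links if node not in (a, b)]
    return rem_nodes, rem_links

nodes = ["financier", "leader_A", "leader_B", "A1", "B1"]
links = [("financier", "leader_A"), ("financier", "leader_B"),
         ("leader_A", "A1"), ("leader_B", "B1")]

print(components(nodes, links))                             # 1 component
print(components(*remove_node("financier", nodes, links)))  # 2 fragments
```

Note that fragmentation is not automatically favorable: as the surrounding text cautions, the network may reconstitute, and the removal may also degrade collection against the surviving fragments.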

  1. Fragmentation. A fragmentation strategy is the surgical removal of key nodes of the network that produces a fragmented effect on the network with the intent to disrupt the network’s ability to function. Although fragmenting the network will result in immediate effects, the staff must consider when this type of strategy is appropriate. Elimination of nodes within the network may have impacts on collection efforts, depending on the node being targeted.
  2. Counter-Messaging. Threat networks form around some type of catalyst that motivates individuals from a receptive audience to join a network. The challenging aspect of a catalyst is that individuals will interpret and relate to it in their own manner. Some members of the network may relate to the catalyst in similar ways, but this cannot be assumed for all members. Threat networks have embraced the ability to project their own messages using a number of social media sites. These messages support their objectives and are used as a recruiting tool for new members. Countering the threat network’s messages is one aspect of countering a threat network.
  3. Targeting
  4. At the tactical level, the focus is on executing operations targeting nodes and links. Accurate, timely, and relevant intelligence supports this effort. Tactical units use this intelligence along with their procedures to conduct further analysis, template, and target networks.
  5. Targeting of threat network CVs is driven by the situation, the accuracy of intelligence, and the ability of the joint force to quickly execute various targeting options to create the desired effects. In COIN operations, high-priority targets may be individuals who perform tasks that are vulnerable to detection/exploitation and impact more than one CR.

Timing is everything when attacking a network, as opportunities for attacking identified CVs may be limited.

  1. CTN targets fall into several broad categories: targets that must be engaged immediately because of the significant threat they represent or the immediate impact they will have on the JFC’s intent; key nodes such as high-value individuals; and longer-term network infrastructure targets (caches, supply routes, safe houses) that are normally left in place for a period of time so they can be exploited. Resources to service/exploit these targets are allocated in accordance with the JFC’s priorities, which are constantly reviewed and updated through the command’s joint targeting process.

(1) Dynamic Targeting. A time-sensitive targeting cell consisting of operations and intelligence personnel with direct access to engagement means and the authority to act on pre-approved targets is an essential part of any network targeting effort. Dynamic targeting facilitates the engagement of targets that have been identified too late or not selected in time to be included in deliberate targeting and that meet criteria specific to achieving the stated objectives.

(2) Deliberate Targeting. The joint fires cell is tasked to look at an extended timeline for threats and the overall working of threat networks. With this type of deliberate investigation into threat networks, the cell can identify catalysts to the threat network’s operations and sustainment that had not traditionally been targeted on a large scale.

  1. The joint targeting cycle supports the development and prosecution of threat networks. Land and maritime force commanders normally use an interrelated process to enhance joint fire support planning and interface with the joint targeting cycle known as the decide, detect, deliver, and assess (D3A) methodology. D3A incorporates the same fundamental functions of the joint targeting cycle as the find, fix, track, target, engage, and assess (F2T2EA) process and functions within phase 5 of the joint targeting cycle. The D3A methodology facilitates synchronizing maneuver, intelligence, and fire support. The F2T2EA and F3EAD methodologies support dynamic targeting. While the F3EAD model was developed for personality-based targeting, it can only be applied once the JFC has approved the joint integrated prioritized target list. Depending on the situation, multiple methodologies may be required to create the desired effect.
  2. F3EAD. F3EAD facilitates not only the targeting of individuals when timing is crucial, but also, more importantly, the generation of follow-on targets through timely exploitation and analysis. F3EAD facilitates synergy between operations and intelligence as it refines the targeting process. It is a continuous cycle in which intelligence and operations feed and support each other. It assists the staff to:

(1) Analyze the threat network’s ideology, methodology, and capabilities; helps template its inner workings: personnel, organization, and activities.

(2) Identify the links between enemy CCs and CRs and observable indicators of enemy action.

(3) Focus and prioritize dedicated intelligence collection assets.

(4) Provide the resulting intelligence and products to elements capable of rapidly conducting multiple, near-simultaneous attacks against the CVs.

(5) Provide an ability to visualize the OE and array and synchronize forces and capabilities.
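The continuous, self-feeding character of F3EAD can be sketched as a loop in which each finished target is exploited, and analysis disseminates follow-on targets back into "find." All target names and exploitation yields below are invented for illustration; the sketch compresses the six steps into a simple queue-driven cycle:

```python
# Minimal, hypothetical sketch of the F3EAD loop: exploitation and analysis
# of each finished target may disseminate new targets back into "find."
from collections import deque

def f3ead(initial_targets, exploitation_yield, max_cycles=10):
    """Run the find-fix-finish-exploit-analyze-disseminate cycle.

    exploitation_yield maps a finished target to the follow-on targets its
    exploitation reveals (an invented stand-in for site exploitation).
    """
    queue = deque(initial_targets)    # find
    finished = []
    cycles = 0
    while queue and cycles < max_cycles:
        target = queue.popleft()      # fix/track the selected target
        finished.append(target)       # finish
        for follow_on in exploitation_yield.get(target, []):  # exploit/analyze
            if follow_on not in finished and follow_on not in queue:
                queue.append(follow_on)   # disseminate as a new find
        cycles += 1
    return finished

# Invented example: exploiting the courier reveals the facilitator, whose
# exploitation in turn reveals the cell leader.
yields = {"courier": ["facilitator"], "facilitator": ["cell_leader"]}
print(f3ead(["courier"], yields))  # ['courier', 'facilitator', 'cell_leader']
```

The design point the sketch illustrates is that the cycle's value comes from the feedback path: without exploitation and analysis feeding new finds, the process degenerates into a one-shot strike list.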

  1. The F3EAD process is optimized to facilitate targeting of key nodes and links in tier I (enemy top-level leadership, for example) and tier II (enemy intermediaries who interact with the leaders and establish links with facilitators within the population). Tier III individuals (the low-skilled foot soldiers who are part of the threat) may be easy to reach and provide an immediate result but are a distraction to success because they are easy to replace and their elimination is only a temporary inconvenience to the enemy. F3EAD can be used for any network function that is a time-sensitive target.
  2. The F3EAD process relies on the close coordination between operational planners and intelligence collection and tactical execution. Tactical forces should be augmented by a wide array of specialists to facilitate on-site exploitation and possible follow-on operations. Exploitation of captured materials and personnel will normally involve functional specialists from higher and even national resources. The goal is to quickly conduct exploitation and facilitate follow-on targeting of the network’s critical nodes.
  3. Targeting Considerations
  4. There is no hard-and-fast rule for allocating network targets by echelon. The primary consideration is how to create the desired effect against the network as a whole.

Generally network targets fall into one of three categories: individual targets, group targets, and organizational targets.

  1. An objective of network targeting may be to deny the threat its freedom of action and maneuver by maintaining constant pressure through unpredictable actions against the network’s leadership and critical functional nodes. It is based on selecting the right means, or combination thereof, to neutralize the target while minimizing collateral effects.
  2. While material targets can be disabled, denied, destroyed, or captured, humans and their interrelationships or links are open to a broader range of engagement options by friendly forces. For example, when the objective is to neutralize the influence of a specific group, it may require a combination of tasks to create the desired effect.
  3. Lines of Effort by Phase
  4. Targeting is a continuous and evolving process. As the threat adjusts to joint force activities, joint force intelligence collection and targeting must also adjust. Employing a counter-resource (logistical, financial, and recruiting) approach should increase the amount of time it will take for the organization to regroup. It may also force the threat to employ its hidden resources to fill the gaps, thus increasing the risk of detection and exploitation. During each phase of an operation or campaign against a threat network, there are specific actions that the JFC can take to facilitate countering threat networks (see Figure V-6). However, these actions are not unique to any particular phase and must be adapted to the specific requirements of the mission and the OE. The simplified model in Figure V-6 is illustrative rather than a list of specific planning steps.
  5. During phase 0, analysis provides a broad description of the structure of the underlying threat organization, identifies the critical functions and nodes, and identifies the relationships between the threat’s activities and the greater society.

These forces provide a foundation of information about the region to include very specific information that falls into the categories of PMESII. Actions against the network may include targeting of the threat’s transnational resources (money, supply, safe havens, recruiting); identifying key leadership; providing resources to facilitate PNs and regional efforts; shaping international and national populations’ opinions of friendly, neutral, and threat groups; and isolating the threat from transnational allies.

  1. During phase I, CTN activities seek to provide a more complete picture of the conditions in the OE. Forces already employed in theater may be leveraged as sources of information to help build a more detailed picture. New objectives may emerge as part of phase I, and forces deployed to help achieve those objectives contribute to the developing common operational picture. A network analysis is conducted to identify a target array that will keep the threat network off balance through multi-nodal attack operations.
  2. During phase II, CTN activities concentrate on developing previously identified targets, positioning intelligence collection to exploit effects, and continuing to refine the description of the threat and its supporting network.
  3. During phase III, CTN activities are characterized by increased physical contact and a sizable ramp-up in a variety of intelligence and information collection assets. The focus is on identifying, exploiting, and targeting the clandestine core of the network. Intelligence collection assets and specialized analytical capabilities provide around the clock support to committed forces. Actions against the network continue and feature a ramp-up in resource denial; key leaders and activities are targeted for elimination; and constant multi-nodal pressure is maintained. Activities continue to convince neutral networks of the benefits of supporting the government and dissuade threat sympathizers from providing continued support to threat networks. Ultimately, the network is isolated from support and its ability to conduct operations is severely diminished.
  4. During phase IV, CTN activities focus on identifying, exploiting, and targeting the clandestine core of the network for elimination. Intelligence collection assets and specialized analytical capabilities continue to provide support to committed forces; the goal is to prevent the threat from recovering and regrouping.
  5. During phase V, CTN activities continue to identify, exploit, and target the clandestine core of the network for elimination and to identify the threat network’s attempts to regroup and reestablish control.
  6. Theater Concerns in Countering Threat Networks
  7. Many threat networks are transnational, recruiting, financing, and operating on a global basis, and they cooperate across borders when necessary to further their respective goals.
  8. In developing their CCMD campaign plans, CCDRs need to be aware of the complex relationships that characterize networks and leverage whole-of-government resources to identify and analyze networks, including their relationships with, or membership in, known friendly, neutral, or threat networks. Militaries are interested in the activities of criminal organizations because these organizations provide material support to insurgent and terrorist organizations that also conduct criminal activities (e.g., kidnapping, smuggling, extortion). By tracking criminal organizations, the military may identify linkages (material and financial) to the threat network, which in turn might become a target.
  9. Countering Threat Networks Through Military Operations and Activities

Some threat networks may prefer to avoid direct confrontation with law enforcement and military forces. Activities associated with military operations at any level of conflict can have a direct or indirect impact on threats and their supporting networks.

  1. Operational Approaches to Countering Threat Networks
  2. There are many ways to integrate CTN into the overall plan. In some operations, the threat network will be the primary focus of the operation. In others, a balanced approach through multiple LOOs and LOEs may be necessary, ensuring that civilian concerns are met while protecting them from the threat networks’ operators.

In all CTN activities, lethal actions directed against the network should also be combined with nonlethal actions to support the legitimate government and persuade neutrals to reject the adversary.

 

  1. Effective CTN requires a deep understanding of the interrelationships among all the networks within an operational area, determination of the desired effect(s) against each network and its nodes, and the gathering and leveraging of all available resources and capabilities to execute operations.

A CHANGING ENVIRONMENT—THE CONVERGENCE OF THREAT NETWORKS

Transnational organized crime penetration of states is deepening, leading to co-option of government officers in some nations and weakening of governance in many others. Transnational organized crime networks insinuate themselves into the political process through bribery and in some cases have become alternate providers of governance, security, and livelihoods to win popular support.

In fiscal year 2010, 29 of the 63 top drug trafficking organizations identified by the Department of Justice had links to terrorist organizations. While many terrorist links to transnational organized crime are opportunistic, this nexus is dangerous, especially if it leads a transnational organized crime network to facilitate the transfer of weapons of mass destruction or the transportation of nefarious actors or materials into the US.

CHAPTER VI

ASSESSMENTS

Commanders and their staffs will conduct assessments to determine the impact CTN activities may have on the targeted networks. Other networks, including friendly and neutral networks, within the OE must also be considered during planning, operations, and assessments.

Threat networks will adapt visibly and invisibly even as collection, analysis, and assessments are being conducted, which is why assessments over time that show trends are much more valuable in CTN activities than a single snapshot over a short time frame.

  1. Complex Operational Environments

Complex geopolitical environments, difficult causal associations, and the challenge of both quantitative and qualitative analysis to support decision making all complicate the assessment process. When only partially visible threat networks are spread over large geographic areas, among the people, and are woven into friendly and neutral networks, assessing the effects of joint force operations requires as much operational art as the planning process.

  1. Assessment of Operations to Counter Threat Networks
  2. CTN assessments at the strategic, operational, and tactical levels and across the instruments of national power are vital since many networks have regional and international linkages as well as capabilities. Objectives must be developed during the planning process so that progress toward objectives can be assessed.

Dynamic interaction among friendly, threat, and neutral networks makes assessing many aspects of CTN activities difficult. As planners assess complex human behaviors, they draw on multiple sources across the OE, including analytical and subjective measures, which support an informed assessment.

  1. Real-time network change detection is extremely challenging, and conclusions with high levels of confidence are rare. Since threat networks are rapidly adaptable, technological systems used to support collection often struggle to monitor change. Additionally, the large amounts of information collected require resources (people) and time for analysis. It is difficult to determine how networks change, and even more challenging to determine whether network changes are the result of joint force actions and, if so, which actions or combined actions are effective. A helpful indicator for assessment arises when threat networks leverage social networks to coordinate and conduct operations, as this provides an opportunity to gain a greater understanding of the motivation and ideology of these networks. If intelligence analysts can tap into near real-time information from threat network entities, then that information can often be geospatially fused to create a better assessment. This is dependent on having access to accurate network data, the ability to analyze the data quickly, and the ability to detect deception.
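A minimal sketch of change detection, assuming clean snapshot data (which real collection rarely provides), is to diff two observed snapshots of the network's nodes and links. The node names and snapshots below are invented:

```python
# Hypothetical change-detection sketch: diff two network snapshots taken at
# different times to surface added/removed nodes and links. Real change
# detection must also weigh collection gaps and deliberate deception.

def diff_snapshots(before, after):
    """Each snapshot is a dict with 'nodes' and 'links' sets."""
    return {
        "nodes_added": after["nodes"] - before["nodes"],
        "nodes_removed": before["nodes"] - after["nodes"],
        "links_added": after["links"] - before["links"],
        "links_removed": before["links"] - after["links"],
    }

# Invented snapshots: between t0 and t1 the network drops node Y and
# routes around it, while a new node W appears.
before = {"nodes": {"X", "Y", "Z"}, "links": {("X", "Y"), ("Y", "Z")}}
after  = {"nodes": {"X", "Z", "W"}, "links": {("X", "Z"), ("Z", "W")}}

changes = diff_snapshots(before, after)
print(changes["nodes_removed"])  # {'Y'}
print(changes["nodes_added"])    # {'W'}
```

As the surrounding text notes, an apparent removal may reflect a collection gap rather than a real change, so such diffs are a starting point for analysis, not a conclusion.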

  1. CTN assessments require staffs to conduct analysis more intuitively and consider both anecdotal and circumstantial evidence. Since networked threats operate among civilian populations, there is a greater need for HUMINT. Collection of HUMINT is time-consuming and reliability of sources can be problematic, but if employed properly and cross-cued with other disciplines, it is extremely valuable in irregular warfare. Tactical unit reporting such as patrol debriefs and unit after action reports when assimilated across an OE may provide the most valuable information on assessing the impact of operations.

OSINT will often be more valuable in assessing operations against threat networks and be the single greatest source of intelligence.

  1. Operation Assessment
  2. The assessment process is a continuous cycle that seeks to observe and evaluate the ever-changing OE and inform decisions about the future, making operations more effective. Base-lining is critical in phase 0 and the initial JIPOE process for assessments to be effective.

Assessments feed back into the JIPOE process to maintain tempo in the commander’s decision cycle. This is a continuous process, and the baseline resets for each cycle. Change is constant within the complex OE and when operating against multiple, adaptive, interconnected threat networks.

  1. Commanders establish priorities for assessment through their planning guidance, commander’s critical information requirements (CCIRs), and decision points. Priority intelligence requirements, a component of CCIR, detail exactly what data the intelligence collection plan should be seeking to inform the commander regarding threat networks.

CTN activities may require assessing multiple MOEs and measures of performance (MOPs), depending on threat network activity. As an example, JFCs may choose to neutralize or disrupt one type of network while conducting direct operations against another network to destroy it.

  1. Assessment precedes and guides every operation process activity and concludes each operation or phase of an operation. Like any cycle, assessment is continuous. The assessment process is not an end unto itself; it exists to inform the commander and improve the operation’s progress.
  2. Integrated successfully, assessment in CTN activities will:

(1) Depict progress toward achieving the commander’s objectives and attaining the commander’s end state.

(2) Help in understanding how the OE is changing due to the impact of CTN activities on threat network structures and functions.

(3) Inform the commander’s decision making for operational design and planning, prioritization, resource allocation, and execution.

(4) Produce actionable recommendations that inform the commander where to devote resources along the most effective LOOs and LOEs.

  1. Assessment Framework for Countering Threat Networks

The assessment framework broadly outlines three primary activities: organize, analyze, and communicate.

Multi-Service Tactics, Techniques, and Procedures for Operation Assessment

  1. Organize the Data

(1) Based on the OE and the operation plan or campaign plan, the commander and staff develop objectives and assessment criteria to determine progress. The organize activity includes ensuring that the indicators are included in the collection plan, that information collected and analyzed by the intelligence section is organized using an information management plan, and that information is readily available to the staff to conduct the assessment. Multiple threat networks within an OE may require multiple MOPs, MOEs, metrics, and branches to the plan. Threat networks operating collaboratively or against each other complicate the assessment process. If threat networks conduct operations or draw resources from outside the operational area, there will be a greater reliance on other CCDRs or interagency partners for data and information.

Within the context of countering threat networks, example objective, measures of effectiveness (MOEs), and indicators could be:

Objective: Threat network resupply operations in “specific geographic area” are disrupted.

MOE: Suppliers to threat networks cease providing support.

Indicator 1: Fewer trucks leaving supply depots.

Indicator 2: Guerrillas/terrorists change the number of engagements or length of engagement times to conserve resources.

Indicator 3: Increased threat network raids on sites containing resources they require (grocery stores, lumber yards, etc.)

(2) Metrics must be collectable, relevant, measurable, timely, and complementary. The process uses assessment criteria to evaluate task performance at all levels of warfare to determine progress of operations toward achieving objectives. Both qualitative and quantitative analyses are required. With threat networks, direct impacts alone may not be enough, requiring indirect impacts for a holistic assessment. Operations against a network’s financial resources, for example, may be best judged by analyzing the quality of equipment that the network is able to deploy in the OE.
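The example objective, MOE, and indicators above can be rolled up quantitatively with a simple trend classification. The weekly counts and the 10% trend threshold below are invented for illustration; such quantitative trends would be only one input alongside qualitative analysis:

```python
# Hypothetical rollup of indicator time series into trend assessments.
# Data and the 10% threshold are invented for this sketch.

def trend(series):
    """Classify a time series as 'decreasing', 'increasing', or 'flat'
    by comparing the mean of its first half with the mean of its second."""
    half = len(series) // 2
    first = sum(series[:half]) / half
    second = sum(series[half:]) / (len(series) - half)
    if second < first * 0.9:
        return "decreasing"
    if second > first * 1.1:
        return "increasing"
    return "flat"

# Invented weekly counts for two indicators of disrupted resupply:
trucks_leaving_depots = [20, 18, 19, 9, 7, 6]   # Indicator 1
threat_raids_on_sites = [1, 1, 2, 4, 5, 6]      # Indicator 3

# Fewer trucks plus more raids on resource sites both point toward the MOE
# ("suppliers cease providing support") being met.
print(trend(trucks_leaving_depots))  # decreasing
print(trend(threat_raids_on_sites))  # increasing
```

Complementary indicators matter here: either series alone could have a benign explanation, but both trending together strengthens the assessment.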

  1. Analyze the Data

(1) Analyzing data is the heart of the assessment process for CTN activities. Baselining is critical to support analysis. Baselining should not only be rooted in the initial JIPOE, but should go back to GCC theater intelligence collection and shaping operations. Understanding how threat networks formed and adapted prior to joint force operations provides assessors a significantly better baseline and assists in developing indicators.

(2) Data analysis seeks to answer essential questions:

(a) What happened to the threat network(s) as a result of joint force operations? Specific examples may include the following: How have links changed? How have nodes been affected? How have relationships changed? What was the impact on structure and functions? Specifically, what was the impact on operations, logistics, recruiting, financing, and propaganda?

(b) What operations caused this effect directly or indirectly? (Why did it happen?) It is likely that multiple instruments of national power efforts across several LOOs and LOEs impacted the threat network(s), and it is equally unlikely that a direct cause and effect is discernible.

Analysts must be aware of the danger of searching for a trend that may not be evident. Events may sometimes have dramatic effects on threat networks, but not be visible to outside/foreign/US observers.

(c) What are the likely future opportunities to counter the threat network, and what are the risks to neutral and friendly networks? CTN activities should target CVs. Interdiction operations, for example, may create future opportunities to disrupt finances. Cyberspace operations may target Internet propaganda and create opportunities to reduce the appeal of threat networks to neutral populations.

(d) What needs to be done to apply pressure at multiple points across the instruments of national power (diplomatic, informational, military, and economic) to the targeted threat networks to attain the JFC’s desired military end state?

(3) Military units find stability tasks to be the most challenging to analyze since they are conducted among a civilian population. Adding a social dynamic complicates use of mathematical and deterministic formulas when human nature and social interactions play a major part in the OE. Overlaps between threat networks and neutral networks, such as the civilian population, complicate assessments and the second- and third-order effects analysis.

(4) The proximate cause of effects in complex situations can be difficult to determine. Even direct effects in these situations can be more difficult to create, predict, and measure, particularly when they relate to moral and cognitive issues (such as religion and the “mind of the adversary,” respectively). Indirect effects in these situations often are difficult to foresee. Indirect effects often can be unintended and undesired since there will always be gaps in our understanding of the OE. Unpredictable third-party actions, unintended consequences of friendly operations, subordinate initiative and creativity, and the fog and friction of conflict will contribute to an uncertain OE. Simply determining undesired effects on threat networks requires a greater degree of critical thinking and qualitative analysis than traditional operations. Undesired effects on neutral and friendly networks cannot be ignored.

(5) Statistical analysis is necessary and allows large volumes of data to be analyzed, but critical thinking must precede its use and qualitative analysis must accompany any conclusions. SNA is a form of statistical analysis of human networks that has proven particularly valuable for understanding network dynamics and showing network changes over time, but it must be complemented by other types of analysis and traditional intelligence analysis. It can support the JIPOE process as well as the planning, targeting, and assessment processes. SNA requires significant data collection, and since threat networks are difficult to collect against and may adapt unseen, it must be used in conjunction with other tools.
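A minimal sketch of the SNA idea above, using only the Python standard library: compare a simple centrality measure (here, degree, i.e., the number of links per node) across two collection snapshots to surface network change. All node names and link data are hypothetical, chosen only to illustrate the technique.

```python
# Sketch: degree centrality compared across two collection periods.
# Node names and links are invented for illustration.
from collections import defaultdict

def degree_centrality(edges):
    """Count links per node from an undirected edge list."""
    degree = defaultdict(int)
    for a, b in edges:
        degree[a] += 1
        degree[b] += 1
    return dict(degree)

# Snapshot 1: a courier brokers most contacts.
t1 = [("financier", "courier_x"), ("courier_x", "cell_a"),
      ("courier_x", "cell_b")]
# Snapshot 2: after interdiction of the courier, cells link directly.
t2 = [("financier", "cell_a"), ("cell_a", "cell_b")]

d1, d2 = degree_centrality(t1), degree_centrality(t2)
# A change in centrality flags network adaptation over time.
changes = {n: d2.get(n, 0) - d1.get(n, 0) for n in set(d1) | set(d2)}
print(changes)
```

Degree is the simplest centrality measure; operational SNA tooling would add betweenness and eigenvector centrality and would have to account for incomplete and adaptive collection, as the paragraph above cautions.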

  1. Communicate the Assessment

(1) The assessment of CTN activities is only valuable to the commander and other participants if it is effectively communicated in a format that allows for rapid changes to LOOs/LOEs and operational and tactical actions for CTN activities.

(2) Communicating the CTN assessment clearly and concisely, with sufficient information to support the staff's recommendations but without trivial detail, is challenging.

(3) Well-designed CTN assessment products show changes in indicators describing the OE and the performance of organizations as they relate to CTN activities.

 

APPENDIX A

DEPARTMENT OF DEFENSE COUNTER THREAT FINANCE

1. Introduction

  1. JFCs face adaptive networked threats that rapidly adjust their operations to offset friendly force advantages and pose a wide array of challenges across the range of military operations.

CTN activities are a focused approach to understanding and operating against adaptive networked threats such as terrorism, insurgency, and organized crime. CTF refers to the activities and actions taken by the JFC to deny, disrupt, destroy, or defeat the generation, storage, movement, and use of assets that fund activities supporting a threat network's ability to keep the JFC from attaining the desired end state. Disrupting threat network finances degrades the network's ability to achieve its objectives, which can range from sophisticated communications systems supporting international propaganda programs, to structures for obtaining funding from foreign-based sources and supporting foreign-based cells, to more local needs such as paying, training, arming, feeding, and equipping fighters. It also decreases the network's ability to conduct operations that threaten US personnel, interests, and national security.

  1. CTF activities against threat networks should be conducted with an understanding of the OE, in support of the JFC’s objectives, and nested with other counter threat network operations, actions, and activities. CTF activities cause the threat network to adjust its financial operations by disrupting or degrading its methods, routes, movement, and source of revenue. Understanding that financial elements are present at all levels of a threat network, CTF activities should be considered when developing MOEs during planning with the intent of forecasting potential secondary and tertiary effects.
  2. Effective CTF operations depend on developing an understanding of the functional organization of the threat network, the threat network’s financial capabilities, methods of operation, methods of communication, and operational areas, and upon detecting how revenue is raised, moved, stored, and used.
  3. Key Elements of Threat Finance
  4. Threat finance is the manner in which adversarial groups raise, move, store, and use funds to support their activities. Following the money and analyzing threat finance networks is important to:

(1) Identify facilitators and gatekeepers.
(2) Estimate threat networks’ scope of funding.
(3) Identify modus operandi.
(4) Understand the links between financial networks.
(5) Determine geographic movement and location of financial networks.

(6) Capture and prosecute members of threat networks.
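As a hedged illustration of the first of these purposes (identifying facilitators and gatekeepers), the sketch below flags a node whose removal disconnects fund-raisers from operational cells. The graph is deliberately tiny and every name is invented; it only demonstrates the removal-and-reachability idea.

```python
# Sketch: find a financial "gatekeeper" -- a node whose removal
# disconnects donors from cells. All node names are hypothetical.
from collections import deque

def reachable(adj, start, removed):
    """BFS over the network, skipping one removed node."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, []):
            if nxt != removed and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Undirected financial links: donors -> broker -> operational cells.
links = [("donor_1", "broker"), ("donor_2", "broker"),
         ("broker", "cell_a"), ("broker", "cell_b")]
adj = {}
for a, b in links:
    adj.setdefault(a, []).append(b)
    adj.setdefault(b, []).append(a)

# A gatekeeper is any intermediate node whose removal cuts the path
# from donor_1 to cell_a.
gatekeepers = [n for n in set(adj)
               if n not in ("donor_1", "cell_a")
               and "cell_a" not in reachable(adj, "donor_1", removed=n)]
print(gatekeepers)  # the broker is the single point of failure
```

In graph-theoretic terms this is a cut-vertex test; in SNA terms, gatekeepers tend to show high betweenness centrality, which is why they make attractive decisive points for targeting.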

  1. Raising Money. Fund-raising through licit and illicit channels is the first step in being able to carry out or support operations. This includes raising funds to pay for such mundane items as food, lodging, transportation, training, and propaganda. Raising money can involve network activity across local and international levels. It is useful to look at each source of funding as separate nodes that fit into a much larger financial network. That network will have licit and illicit components.

(1) Funds can be raised through illicit means, such as drug and human trafficking, arms trading, smuggling, kidnapping, robbery, and arson.

(2) Alternatively, funds can be raised through ostensibly legal channels. Threat networks can receive funds from legitimate humanitarian and business organizations and individual donations.

(3) Legitimate funds are commingled with illicit funds destined for threat networks, making it extremely difficult for governments to track threat finances in the formal financial system. Such transactions are perfectly legal until they can be linked to a criminal or terrorist act. Therefore, these transactions are extremely hard to detect in the absence of other indicators or through the identification of the persons involved.

  1. Moving Money. Moving money is one of the most vulnerable aspects of the threat finance process. To make the illicit money usable to threat networks it must be laundered. This can be done through the use of front companies, legitimate businesses, cash couriers, or third parties that may be willing to take on the risks in exchange for a cut of the profits. These steps are called “placement” and “layering.”

(1) During the placement stage, the acquired funds or assets are placed into a local, national, or international financial system for future use. This is necessary if the generated funds or assets are not in a form useable by their recipient, e.g., converting cash to wire transfers or checks.

(2) During the layering stage, numerous transactions are conducted with the assets or proceeds to create distance between the origination of the funds or assets and their eventual destination. Distance is created by moving money through several accounts, businesses or people, or by repeatedly converting the money or asset into a different form.
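The layering notion of "distance" described above can be made concrete as the number of transaction hops between the origination of funds and their destination. The accounts and transfers below are invented purely for illustration.

```python
# Sketch: measure layering "distance" as the fewest recorded transfers
# between origin and destination. Accounts are hypothetical.
from collections import deque

# Directed transfers observed during the layering stage.
transfers = {
    "street_cash": ["front_company"],
    "front_company": ["exchange_house", "shell_corp"],
    "shell_corp": ["exchange_house"],
    "exchange_house": ["end_user_account"],
}

def layering_distance(start, goal):
    """Fewest transfers needed to move value from start to goal (BFS)."""
    queue, seen = deque([(start, 0)]), {start}
    while queue:
        account, hops = queue.popleft()
        if account == goal:
            return hops
        for nxt in transfers.get(account, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, hops + 1))
    return None  # no recorded path between the accounts

print(layering_distance("street_cash", "end_user_account"))  # 3
```

More hops generally means more distance between the funds and their criminal origin, which is precisely what layering is designed to create and what analysts try to collapse.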

  1. Storing Money. Money or goods that have successfully been moved to a location that can be accessed by the threat network may need to be stored until it is ready to be spent.
  2. Using Money. Once a threat network has raised, moved, and stored its money, it is able to spend it. This is called "integration." Roughly half of the money initially raised will go to operational expenses and the cost of laundering the money to convert it to useable funds. During integration, the funds or assets are placed at the disposal of the threat network for use or re-investment in other licit and illicit operations.
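A back-of-the-envelope sketch of the integration arithmetic above: the overhead fraction reflects the document's rough "half" estimate, while the dollar figure is invented for illustration.

```python
# Sketch of the integration arithmetic: roughly half of funds raised
# is consumed by operating expenses and laundering costs.
raised = 1_000_000       # hypothetical funds initially raised
overhead_rate = 0.5      # rough estimate per the text above
usable = raised * (1 - overhead_rate)
print(usable)  # 500000.0 available for operations or re-investment
```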
  3. Planning Considerations
  4. CTF requires the integration of the efforts of disparate organizations in a whole-of-government approach in a complex environment. Joint operation/campaign plans and operation orders should be crafted to ensure that the core competencies of various agencies and military activities are coordinated, and resources integrated, when and where appropriate, with those of others to achieve the operational objectives.
  5. The JFC and staff need to understand the impact that changes to the OE will have on CTF activities. The adaptive nature of threat networks will force changes to the network’s business practices and operations based on the actions of friendly networks within the OE. This understanding can lead to the creation of a more comprehensive, feasible, and achievable plan.
  6. CTF planning will identify the organizations and entities that will be required to conduct CTF action and activities.
  7. Intelligence Support Requirements
  8. CTF activities require detailed, timely, and accurate intelligence of threat networks’ financial activities to inform planning and decision making. Analysts can present the JFC with a reasonably accurate scope of the threat network’s financial capabilities and impact probabilities if they have a thorough understanding of the threat network’s financial requirements and what the threat network is doing to meet those requirements.
  9. JFCs should identify intelligence requirements for threat finance-related activities to establish collection priorities prior to the onset of operations.
  10. Intelligence support can focus on following the money by tracking the generation, storage, movement, and use of funds, which may provide additional insight into threat network leadership activities and other critical components of the threat network’s financial business practices. Trusted individuals or facilitators within the network often handle the management of financial resources. These individuals and their activities may lead to the identification of CVs within the network and decisive points for the JFC to target the network.
  11. Operation
  12. DOD may not always be the lead agency for CTF. Frequently the efforts and products of CTF analysis will be used to support criminal investigations or regulatory sanction activities, either by the USG or one of its partners. This can prove advantageous as contributions from other components can expand and enhance an understanding of threat financial networks. Threat finance activities can have global reach and are generally not geographically constrained. At times much of the threat finance network, including potentially key nodes, may extend beyond the JFC’s operational area.
  13. Military support to CTF is not a distinct type of military operation; rather, it represents military activities against a specific network capability: the business and financial processes used by an adversary network.

(1) Major Operations. CTF can reduce or eliminate the adversary’s operational capability by reducing or eliminating their ability to pay troops and procure weapons, supplies, intelligence, recruitment, and propaganda capabilities.

(2) Arms Control and Disarmament. CTF can be used to disrupt the financing of trafficking in small arms, IED or WMD proliferation and procurement, research to develop more lethal or destructive weapons, hiring technical expertise, or providing physical and operational security.

(6) DOD Support to CD Operations. The US military may conduct training of PN/HN security and law enforcement forces, assist in the gathering of intelligence, and participate in the targeting and interception of drug shipments. Disrupting the flow of drug profits via C

(7) Enforcement of Sanctions. CTF encompasses all forms of value transfer to the adversary, not just currency. DOD organizations can provide assistance to organizations that are interdicting the movement of goods and/or any associated value remittance as a means to enforce sanctions.

(8) COIN. CTF can be used to counter, disrupt, or interdict the flow of value to an insurgency. Additionally, CTF can be used against corruption, as well as drug and other criminal revenue-generating activities that fund or fuel insurgencies and undermine the legitimacy of the HN government. In such cases, CTF is aimed at insurgent organizations as well as other malevolent actors in the environment.

(9) Peace Operations. In peace operations, CTF can be used to stem the flow of external sources of support to conflicts to contain and reduce the conflict.

  1. Military support tasks to CTF can fall into four broad categories:

(1) Support civil agency and HN activities (including law enforcement):

(a) Provide Protection. US military forces may provide overwatch for law enforcement or PN/HN military CTF activities.

(b) Provide Logistics. US military forces may provide transportation, especially tactical movement-to-objective support, to law enforcement or PN/HN military CTF activities.

(c) Provide Command, Control, and Communications Support. US military forces may provide information technology and communications support to civilian agencies or PN/HN CTF personnel. This support may include provision of hardware and software, encryption, bandwidth, configuration support, networking, and account administration and cybersecurity.

(2) Direct military actions:

(a) Capture/Kill. US military forces may, with the support of mission partners as necessary, conduct operations to capture or kill key members of the threat finance network.

(b) Interdiction of Value Transfers. US military forces may, with the support of mission partners, conduct operations to interdict value transfers to the threat network as necessary. This may be a raid to seize cash from an adversary safe house, foreign exchange house, hawala, or other informal remittance system; seizure of electronic media, including mobile banking systems commonly known as "red SIMs" and computer systems that contain payment and communication data, including cryptocurrency or exchanges in the virtual environment; interdiction to stop the smuggling of goods used in trade-based money laundering; or command and control flights to provide aerial surveillance of drug-smuggling aircraft in support of law enforcement interdiction.

(c) Training HN/PN Forces. US military forces may provide training to PN/HN CTF personnel under specific authorities.

(3) Intelligence Collection. US military forces may conduct all-source intelligence operations, which will deal primarily with the collection, exploitation, analysis, and reporting of CTF information. These operations may involve deploying intelligence personnel to collect HUMINT and the operations of ships at sea and forces ashore to collect SIGINT, OSINT, and GEOINT.

(4) Operations to Generate Information and Intelligence. Occasionally, US military forces may conduct operations either with SOF or conventional forces designed to provoke a response by the adversary’s threat finance network for the purpose of collecting information or intelligence on that network. These operations are pre-planned and carefully coordinated with the intelligence community to ensure the synchronization and posture of the collection assets as well as the operational forces.

  1. Threat Finance Cells

(1) Threat finance cells can be established at any level based on available personnel resources. Expertise on adversary financial activities can be provided through the creation of threat finance cells at brigade headquarters and higher. The threat finance cell would include a mix of analysts and subject matter experts on law enforcement, regulatory matters, and financial institutions that would be drawn from DOD and civil USG agency resources. The threat finance cell's responsibilities vary by echelon. At division and brigade, the threat finance cell:

(a) Provides threat finance expertise and advice to the commander and staff.

(b) Assists the intelligence staff in the development of intelligence collection priorities focused on adversary financial and support systems that terminate in the unit's operational area.

(c) Consolidates information on persons providing direct or indirect financial, material, and logistics support to adversary organizations in the unit's operational area.

(d) Provides information concerning adversary exploitation of US resources such as transportation, logistical, and construction contractors working in support of US facilities; exploitation of NGO resources; and exploitation of supporting HN personnel.

(e) Identifies adversary organizations coordinating or cooperating with local criminals, organized crime, or drug trafficking organizations.

(f) Provides assessments of the adversary's financial viability (its ability to fund, maintain, and grow operations) and the implications for friendly operations.

(g) Develops targeting package recommendations for adversary financial and logistics support persons for engagement by lethal and nonlethal means.

(h) Notifies commanders when there are changes in the financial or support operations of the adversary organization, which could indicate changes in adversary operating tempo or support capability.

(i) Coordinates and shares information with other threat finance cells to build a comprehensive picture of the adversary's financial activities.

(2) At the operational level, the joint force J-2 develops and maintains an understanding of the OE, which includes economic and financial aspects. If established, the threat finance cell supports the J-2 in developing and maintaining an understanding of the economic and financial environment of the HN and surrounding countries, to assist in detecting and tracking illicit financial activities: where financial support is coming from, how it is moved into the operational area, and how it is used. The threat finance cell:

(a) Works with the J-2 to develop threat finance-related priority intelligence requirements and establish threat finance all-source intelligence collection priorities. The threat finance cell assists the J-2 in the detection, identification, tracking, analysis, and targeting of adversary personnel and networks associated with financial support across the operational area.

(b) The threat finance cell coordinates with tactical and theater threat finance cells and shares information with those entities as well as multinational forces, HN, and as appropriate and in coordination with the joint force J-2, the intelligence community.

(c) The threat finance cell, in coordination with the J-2, establishes a financial network picture for all known adversary organizations in the operational area; establishes individual portfolios or target packages for persons identified as providing financial or material support to the adversary’s organizations in the operational area; identifies adversary financial TTP for fund-raising, transfer mechanisms, distribution, management and control, and disbursements; and identifies and distributes information on fund-raising methods that are being used by specific groups in the area of operations. The threat finance cell can also:

  1. Identify specific financial institutions that are involved with or that are providing financial support to the adversary and how those institutions are being exploited by the adversary.
  2. Provide CTF expertise on smuggling and cross border financial and logistics activities.
  3. Establish and maintain information on adversary operating budgets in the area of operation to include revenue streams, operating costs, and potential additions, or depletions, to strategic or operational reserves.
  4. Targets identified by the operational-level threat finance cell are shared with the tactical threat finance cells. This allows the tactical threat finance cells to support and coordinate tactical units to act as an action arm for targets identified by the operational-level CTF organization, and coordinate tactical intelligence assets and sources against adversary organizations identified by the operational-level CTF organization.
  5. Multi-echelon information sharing is critical to unraveling the complexities of an adversary’s financial infrastructure. Operational-level CTF organizations require the detailed financial intelligence that is typically obtained by resources controlled by the tactical organizations.
  6. The operational-level threat finance cell facilitates the provision of support by USG and multinational organizations at the tactical level. This is especially true for USG departments and agencies that have representation at the American Embassy.

(3) Tactical-level threat finance cells will require support from the operational level to obtain HN political support to deal with negative influencers that can only be influenced or removed by national-level political leaders, including governors, deputy governors, district leads, agency leadership, chiefs of police, shura leaders, elected officials and other persons serving in official positions; HN security forces; civilian institutions; and even NGOs/charities that may be providing the adversary with financial and logistical support.

(4) The threat finance cell should be integrated into the battle rhythm. Battle rhythm events should meet the following criteria:

(a) Name of board or cell: Descriptive and unique.

(b) Lead staff section: Who receives, compiles, and delivers information.

(c) When/where does it meet in battle rhythm: Allocation of resources (time and facilities), and any collaborative tool requirements.

(d) Purpose: Brief description of the requirement.

(e) Inputs required from: Staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams required to provide products (once approved, these become specified tasks).

(f) When: Suspense for inputs.

(g) Output/process/product: Products and links to other staff sections, centers, groups, cells, offices, elements, boards, working groups, and planning teams.

(h) Time of delivery: When outputs will be available.

(i) Membership: Who has to attend (task to staff to provide participants and representatives).
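The battle rhythm criteria above map naturally onto a simple record. The dataclass below is a hypothetical sketch of such a record, not an actual DOD schema; the field names and the sample event are invented to mirror items (a) through (i).

```python
# Sketch: battle rhythm event record mirroring criteria (a)-(i) above.
# Field names and sample values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class BattleRhythmEvent:
    name: str                 # (a) descriptive, unique board/cell name
    lead_staff_section: str   # (b) who receives, compiles, delivers
    meeting_slot: str         # (c) when/where it meets in battle rhythm
    purpose: str              # (d) brief description of the requirement
    inputs_from: list = field(default_factory=list)  # (e) input providers
    input_suspense: str = ""  # (f) suspense for inputs
    outputs: list = field(default_factory=list)      # (g) products/links
    delivery_time: str = ""   # (h) when outputs will be available
    membership: list = field(default_factory=list)   # (i) required attendees

tfc_sync = BattleRhythmEvent(
    name="Threat Finance Cell Sync",
    lead_staff_section="J-2",
    meeting_slot="Daily 0900, JOC",
    purpose="Synchronize CTF collection and targeting inputs",
    inputs_from=["J-2", "J-3", "interagency reps"],
    input_suspense="0700 daily",
    outputs=["updated financial network picture"],
    delivery_time="1200 daily",
    membership=["J-2 rep", "J-3 rep", "threat finance analysts"],
)
print(tfc_sync.name)
```

Capturing each event this way makes the specified tasks in item (e) explicit and auditable once the event list is approved.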

 

  1. Assessment
  2. JFCs should know the importance and use of CTF capabilities within the context of measurable results for countering adversaries and should embed this knowledge within their staff. By assessing common elements found in adversaries’ financial operations, such as composition, disposition, strength, personnel, tactics, and logistics, JFCs can gain an understanding of what they might encounter while executing an operation and identify vulnerabilities of the adversary. Preparing a consolidated, whole-of-government set of metrics for threat finance will be extremely challenging.
  3. Metrics on threat finance may appear to be of little value because it is very difficult to obtain fast results or intelligence that is immediately actionable. Actions against financial networks may take months to prepare, organize, and implement, due to the difficulty of collecting relevant detailed information and the time lags associated with processing, analyzing, and reporting findings on threat financial networks.
  4. The JFC’s staff should assess the adversary’s behaviors based on the JFC’s desired end state and determine whether the adversary’s behavior is moving closer to that end state.
  5. The JFC and staff should consult with participating agencies and nations to establish a set of metrics which are appropriate to the mission or LOOs assigned to the CTF organization.

APPENDIX B

THE CONVERGENCE OF ILLICIT NETWORKS

  1. The convergence of illicit networks (e.g., criminals, terrorists, and insurgents) incorporates the state or degree to which two or more organizations, elements, or individuals approach or interrelate. Conflicts in Iraq and Afghanistan have seen a substantial increase in cooperative arrangements among illicit networks to further their respective interests. From the Taliban renting out their forces to provide security for drug operations to al-Qaida using criminal organizations to smuggle resources, temporary cooperative arrangements are now a routine aspect of CTN operations.
  2. The US intelligence community has concluded that transnational organized crime has grown significantly in size, scope, and influence in recent years. A public summary of the assessment identified a convergence of terrorist, criminal, and insurgent networks as one of five key threats to US national security. Terrorists and insurgents increasingly have and will continue to turn to crime to generate funding and will acquire logistical support from criminals, in part because of successes by USG departments and agencies and PNs in attacking other sources of their funding, such as from state sponsors. In some instances, terrorists and insurgents prefer to conduct criminal activities themselves; when they cannot do so, they turn to outside individuals and facilitators. Some criminal organizations have adopted terrorist organizations’ practice of extreme and widespread violence in an overt effort to intimidate governments and populations at various levels.
  3. To counter threat networks, it is imperative to understand the converging nature of the relationship among terrorist groups, insurgencies, and transnational criminal organizations. The proliferation of these illicit networks and their activities globally threaten US national security interests. Together, these groups not only destabilize environments through violence, but also become dominant actors in shadow economies, distorting market forces. Indications are that although the operations and objectives of criminal groups, insurgents, and terrorists differ, these groups interact on a regular basis for mutually beneficial reasons. They each pose threats to state sovereignty. They share the common goals of ensuring that poorly governed and post-conflict countries have ineffective laws and law enforcement, porous borders, a culture of corruption, and lucrative criminal opportunities.

Organized crime has traditionally been treated as a law enforcement concern rather than a national security concern. The convergence of organized criminal networks with other non-state actors requires a more sophisticated, interactive, and comprehensive response that takes into account the dynamics of these relationships and adapts to the shifting tactics employed by the various threat networks.

  1. Mounting evidence suggests that the modus operandi of these entities often converges and that interactions among them are on the rise. This spectrum of convergence (Figure B-1) has received increasing attention in law enforcement and national security policy-making circles. Until recently, the prevalent view was that terrorists and insurgents were clearly distinguishable from organized criminal groups by their motivations and the methods used to achieve their objectives. Terrorist and insurgent groups use or threaten to use extreme violence to attain political ends, while organized criminal groups are primarily motivated by profit. Today, these distinctions are no longer useful for developing effective diplomatic, law enforcement, and military strategies, simply because the lines between them have become blurred and the security issues have become intertwined.

The convergence of organized criminal networks and other illicit non-state actors, whether for short-term tactical partnerships or broader strategic imperatives, requires a much more sophisticated response or unified approach, one that takes into account the evolving nature of the relationships as well as the environmental conditions that draw them together.

  1. The convergence of illicit networks has provided law enforcement agencies with a broader mandate to combat terrorism. Labeling terrorists as criminals undermines the reputation of terrorists as freedom fighters with principles and a clear political ideology, thereby hindering their ability to recruit members or raise funds.

Just as redefining terrorists as criminals damages their reputation, it might ironically prove useful at other times to redefine criminals as terrorists, as in the case of the Haqqani network in Afghanistan. For instance, this change in terminology might make additional resources available to law enforcement agencies, such as those of the military or the intelligence services, thereby making law enforcement more effective.

  1. However, there are some limitations associated with the latter approach. The adage that one person's terrorist is another's freedom fighter holds true, and this difference of opinion makes it difficult for states to cooperate in joint CT operations.
  2. The paradigm of fighting terrorism, insurgency, and transnational crime separately, utilizing distinct sets of authorities, tools, and methods, is not adequate to meet the challenges posed by the convergence of these networks into a criminal-terrorist-insurgency conglomeration. While the US has maintained substantial long-standing efforts to combat terrorism and transnational crime separately, the government has been challenged to evaluate whether the existing array of authorities, responsibilities, programs, and resources sufficiently responds to the combined criminal-terrorism threat. Common foreign policy options have centered on diplomacy, foreign assistance, financial actions, intelligence, military action, and investigations. At issue is how to conceptualize this complex illicit networks phenomenon and oversee the implementation of cross-cutting activities that span geographic regions, functional disciplines, and a multitude of policy tools that are largely dependent on effective interagency coordination and international cooperation.
  3. Terrorist Organizations
  4. Terrorism is the unlawful use of violence or threat of violence, often motivated by religious, political, or other ideological beliefs, to instill fear and coerce governments or societies in pursuit of goals that are usually political.
  5. In addition to increasing law enforcement capabilities for CT, the US, like many nations, has developed specialized, but limited, military CT capabilities. CT actions are activities and operations taken to neutralize terrorists and their organizations and networks to render them incapable of using violence to instill fear and coerce governments or societies to achieve their goals.
  6. Insurgencies
  7. Insurgency is the organized use of subversion and violence to seize, nullify, or challenge political control of a region. Insurgency uses a mixture of subversion, sabotage, political, economic, psychological actions, and armed conflict to achieve its political aims. It is a protracted politico-military struggle designed to weaken the control and legitimacy of an established government, a military occupation government, an interim civil administration, or a peace process while increasing insurgent control and legitimacy.
  8. COIN is a comprehensive civilian and military effort designed to simultaneously defeat and contain insurgency and address its root causes. COIN is primarily a political struggle and incorporates a wide range of activities by the HN government, of which security is only one element, albeit an important one. Unified action is required to successfully conduct COIN operations and should include all HN, US, and multinational partners.
  9. Of the groups designated as FTOs by DOS, the vast majority possess the characteristics of an insurgency: an element of the larger group is conducting insurgent-type operations, or the group is providing assistance in the form of funding, training, or fighters to another insurgency. Colombia's government and the Revolutionary Armed Forces of Colombia reached an agreement to enter into peace negotiations in 2012, taking another big step toward ending the 50-year-old insurgency.
  10. The convergence of illicit networks contributes to the undermining of the fabric of society. Since the proper response to this kind of challenge is effective civil institutions, including uncorrupted and effective police, the US must be capable of deliberately applying unified action across all instruments of national power in assisting allies and PNs when asked.
  11. Transnational Criminal Organizations
  12. According to the National Security Strategy, combating transnational criminal and trafficking networks requires a multidimensional strategy that safeguards citizens, breaks the financial strength of criminal and terrorist networks, disrupts illicit trafficking networks, defeats transnational criminal organizations, fights government corruption, strengthens the rule of law, bolsters judicial systems, and improves transparency.
  13. Transnational criminal organizations are self-perpetuating associations of individuals that operate to obtain power, influence, monetary and/or commercial gains, wholly or in part by illegal means. These organizations protect their activities through a pattern of corruption and/or violence or protect their illegal activities through a transnational organizational structure and the exploitation of transnational commerce or communication mechanisms.

Transnational criminal networks are not only expanding operations, but they are also diversifying activities, creating a convergence of threats that has become more complex, volatile, and destabilizing. These networks also threaten US interests by forging alliances with corrupt elements of national governments and using the power and influence of those elements to further their criminal activities. In some cases, national governments exploit these relationships to further their interests to the detriment of the US.

  1. The convergence of illicit networks continues to grow as global sanctions affect the ability of terrorist organizations and insurgencies to raise funds to conduct their operations.
  2. Although drug trafficking still represents the most lucrative illicit activity in the world, other criminal activities, particularly human and arms trafficking, have also expanded. As a consequence, international criminal organizations have gone global; drug trafficking organizations linked to the Revolutionary Armed Forces of Colombia, for example, have agents in West Africa.
  3. As the power and influence of these organizations has grown, their ability to undermine, corrode, and destabilize governments has increased. The links forged between these criminal groups, terrorist movements, and insurgencies have resulted in a new type of threat: ever-evolving networks that exploit permissive OEs and the seams and gaps in policy and application of unified action to conduct their criminal, violent, and politically motivated activities. Threat networks adapt their structures and activities faster than countries can combat their illicit activities. In some instances, illicit networks are now running criminalized states.

 

Drawing the necessary distinctions and differentiations [between coexistence, cooperation, and convergence] allows the necessary planning to begin in order to deal with the matter, not only in the Sahel, but across the globe:

By knowing your enemies, you can find out what it is they want. Once you know what they want, you can decide whether to deny it to them and thereby demonstrate the futility of their tactics, give it to them, or negotiate and give them a part of it in order to cause them to end their campaign. By knowing your enemies, you can make an assessment not just of their motives but also their capabilities and of the caliber of their leaders and their organizations.

It is often said that knowledge is power. However, in isolation knowledge does not enable us to understand the problem or situation. Situational awareness and analysis are required for comprehension, while comprehension and judgment are required for understanding. It is this understanding that equips decision makers with the insight and foresight required to make effective decisions.

Extract from Alda, E., and Sala, J. L., Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region. Stability: International Journal of Security and Development, 10 September 2014.

 

APPENDIX C

COUNTERING THREAT NETWORKS IN THE MARITIME DOMAIN

  1. Overview

The maritime domain connects a myriad of geographically dispersed nodes of friendly, neutral, and threat networks, and serves as the primary conduit for nearly all global commerce. The immense size, dynamic environments, and legal complexities of this domain create significant challenges to establishing effective maritime governance in many regions of the world.

APPENDIX D

IDENTITY ACTIVITIES SUPPORT TO COUNTERING THREAT NETWORK OPERATIONS

  1. Identity activities are a collection of functions and actions that recognize and differentiate one person from another to support decision making. Identity activities include the collection of identity attributes and physical materials and their processing and exploitation.
  2. Identity attributes are the biometric, biographical, behavioral, and reputational data collected during encounters with an individual and across all intelligence disciplines that can be used alone or with other data to identify an individual. The processing and analysis of these identity attributes results in the identification of individuals, groups, networks, or populations of interest, and facilitates the development of I2 products that allow an operational commander to:

(1) Identify previously unknown threat identities.

(2) Positively link identity information, with a high degree of certainty, to a specific human actor.

(3) Reveal the actor’s pattern of life and connect the actor to other persons, places, materials, or events.

(4) Characterize the actor’s associates’ potential level of threat to US interests.
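The fusion of identity attributes into a link between encounters and a specific actor can be illustrated with a toy record-linkage sketch. The attribute names, weights, and threshold below are invented for illustration and do not reflect any fielded I2 system; the behavior sketched is only the general principle that a biometric match carries far more weight than a biographical one.

```python
# Hypothetical identity-attribute fusion: score whether two encounter
# records describe the same individual. Weights and threshold are
# illustrative assumptions, not an official standard.

WEIGHTS = {"fingerprint": 0.9, "name": 0.3, "phone": 0.5, "location": 0.2}
THRESHOLD = 0.8

def match_score(rec_a, rec_b):
    """Sum the weights of attributes that agree between two encounter records."""
    score = 0.0
    for attr, weight in WEIGHTS.items():
        if attr in rec_a and attr in rec_b and rec_a[attr] == rec_b[attr]:
            score += weight
    return score

def same_actor(rec_a, rec_b, threshold=THRESHOLD):
    """Declare a link only when the combined evidence clears the threshold."""
    return match_score(rec_a, rec_b) >= threshold

# Two hypothetical encounters: a checkpoint stop and a raid-site collection.
checkpoint = {"fingerprint": "FP-1042", "name": "A. Doe", "location": "Route 7"}
raid_site = {"fingerprint": "FP-1042", "phone": "555-0199"}
print(same_actor(checkpoint, raid_site))  # True: a shared biometric alone clears the bar
```

A name match alone (weight 0.3) would not clear the threshold, which mirrors the doctrine's emphasis on linking identity information to a specific human actor "with a high degree of certainty."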

  1. I2 fuses identity attributes and other information and intelligence associated with those attributes collected across all disciplines. I2 and DOD law enforcement criminal intelligence products are crucial to commanders’, staffs’, and components’ ability to identify and select specific threat individuals as targets, associate them with the means to create desired effects, and support the JFC’s operational objectives.
  2. Identity Activities Considerations
  3. Identity activities leverage enabling intelligence activities to help identify threat actors by connecting individuals to other persons, places, events, or materials, analyzing patterns of life, and characterizing capability and intent to harm US interests.
  4. The joint force J-2 is normally responsible for production of I2 within the CCMD.

(1) I2 products are normally developed through the JIPOE process and provide detailed information about threat activity identities in the OE. All-source analysis, coupled with identity information, significantly enhances understanding of the location of threat actors and provides detailed information about threat activity and potential high-threat areas within the OE. I2 products enable improved force protection, targeted operations, enhanced intelligence collection, and coordinated planning.

  1. Development of I2 requires coordination throughout the USG and PNs, and may necessitate an intelligence federation agreement. During crises, joint forces may also garner support from the intelligence community through intelligence federation.
  2. Identity Activities at the Strategic, Operational, and Tactical Levels
  3. At the strategic level, identity activities are dependent on interagency and PN information and intelligence sharing, collaboration, and decentralized approaches to gain identity information and intelligence, provide analyses, and support vetting the status (friendly, adversary, neutral, or unknown) of individuals outside the JFC’s area of operations who could have an impact on the JFC’s missions and objectives.
  4. At the operational level, identity activities employ collaborative and decentralized approaches blending technical capabilities and analytic abilities to provide identification and vetting of individuals within the AOR.
  5. At the tactical level, identity information obtained via identity activities continues to support the unveiling of anonymities. Collection and analysis of identity-related data helps tactical commanders further understand the OE and decide on appropriate COAs with regard to the individuals operating within it; as an example, identity information often forms the basis for targeting packages. In major combat operations, I2 products help provide the identities of individuals moving about the operational area who are conducting direct attacks on combat forces, providing intelligence for the enemy, and/or disrupting logistic operations.
  6. US Special Operations Command and partners currently deploy land-based exploitation analysis centers to rapidly process and exploit biometric data, documents, electronic media, and other material to support I2 operations and gain greater situational awareness of threats.
  7. Policy and Legal Considerations for Identity Activities Support to Countering Threat Networks
  8. The authorities to collect, store, share, and use identity data will vary depending upon the AOR and the PNs involved in the CTN activities. Different countries have strict legal restrictions on the collection and use of personally identifiable information, and the JFC may need separate bilateral and/or multinational agreements to alleviate partners’ privacy concerns.
  9. Socio-cultural considerations also may vary depending upon the AOR. In some cultures, for example, a female subject’s biometric data may need to be collected by a female. In other cultures, facial photography may be the preferred biometric collection methodology so as not to cross sociocultural boundaries.
  10. Evidence-based operations and support to rule of law for providing identity data to HN law enforcement and judicial systems should be considered.

The prosecution of individuals, networks, and criminals relies on identity data. However, prior to providing identity data to HN law enforcement and judicial systems, commanders should consult their staff judge advocate or legal advisor.

APPENDIX E

EXPLOITATION IN SUPPORT OF COUNTERING THREAT NETWORKS

  1. Exploitation and the Joint Force

  1. One of the major challenges confronting the joint force is the accurate identification of the threat network’s key personnel, critical functions, and sources of supply. Threat networks often go to extraordinary lengths to protect critical information about the identity of their members and the physical signatures of their operations. Nevertheless, these networks leave behind a large amount of potentially useful information in the form of equipment, documents, and even materials recovered from captured personnel. This information can lead to a deeper understanding of the threat network’s nodes, links, and functions and assists in continuous analysis and mapping of the network. If the friendly force can collect and analyze the materials found in the OE, it can gain the insights needed to cause significant damage to the threat network’s operations. Exploitation provides a means to match individuals to events, places, devices, weapons, related paraphernalia, or contraband as part of a network attack.
  2. Conflicts in Iraq and Afghanistan have witnessed a paradigm shift in how the US military’s intelligence community supports the immediate intelligence needs of the deployed force and the type of information that can be derived from analysis of equipment, materials, documents, and personnel encountered on the battlefield. To meet the challenges posed by threat networks in an irregular warfare environment, the US military formed a deployable, multidisciplinary exploitation capability designed to provide immediate feedback on the tactical and operational relevance of threat equipment, materials, documents, and personnel encountered by the force. This expeditionary capability is modular, scalable, and includes collection, technical, and forensic exploitation and analytical capabilities linked to the national labs and the intelligence enterprise.
  3. Exploitation is accomplished through a combination of forward deployed and reachback resources to support the commander’s operational requirements.
  4. Exploitation employs a wide array of enabling capabilities and interagency resources, from forward deployed experts, small cells, or teams providing scientific or technical support, to interagency or partner laboratories and centers of excellence providing real-time support via reachback. Exploitation activities require detailed planning, flexible execution, and continuous assessment. Exploitation is designed to provide:

(1) Support to targeting, which occurs as a result of technical and forensic exploitation of recovered materials used to identify participants in the activity and provide organizational insights that are targetable.

(2) Support to component and material sourcing, tracking, and supply chain interdiction, which uses exploitation techniques to determine the origin, design, construction methods, components, and precursors of threat weapons in order to identify where the materials originated, the activities of the threat’s logistic networks, and the local supply sources.

(3) Support to prosecution is accomplished when the results of the exploitation link individuals to illicit activities. When supporting law enforcement activities, recovered materials are handled with a chain of custody that tracks materials through the progressive stages of exploitation. The materials can be used to support detainment and prosecution of captured insurgents or to associate suspected perpetrators who are connected later with a hostile act.

(4) Support to force protection, including identifying threat TTP and weapons capabilities that defeat friendly countermeasures, such as jamming devices and armor.

(5) Identification of signature characteristics derived from threat weapon fabrication and employment methods that can aid in cuing collection assets.
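The chain-of-custody requirement behind support to prosecution can be sketched as a simple append-only log. The stage names and record fields below are hypothetical, not an official DOD evidence format; the point illustrated is that every transfer of recovered material is recorded and the record can later be checked for continuity.

```python
# Illustrative chain-of-custody log for recovered material moving through
# the progressive stages of exploitation. Field names and stage labels are
# invented for this sketch.
from dataclasses import dataclass, field

@dataclass
class CustodyEvent:
    stage: str       # e.g., "point of collection", "expeditionary lab"
    custodian: str   # who took custody of the material at this stage
    timestamp: str   # ISO 8601 string, so lexicographic order is time order

@dataclass
class RecoveredMaterial:
    item_id: str
    description: str
    custody_log: list = field(default_factory=list)

    def transfer(self, stage, custodian, timestamp):
        """Append a custody event; the log is only extended, never rewritten."""
        self.custody_log.append(CustodyEvent(stage, custodian, timestamp))

    def custody_is_continuous(self):
        """An empty log or out-of-order timestamps indicates a broken chain."""
        times = [e.timestamp for e in self.custody_log]
        return bool(times) and times == sorted(times)

item = RecoveredMaterial("IED-0031", "recovered detonator components")
item.transfer("point of collection", "WIT 4", "2024-03-01T09:30")
item.transfer("expeditionary lab", "forensic examiner 12", "2024-03-02T14:00")
print(item.custody_is_continuous())  # True: every transfer is logged in order
```

A gap or out-of-order entry in the log flags a break in custody, which is what would undermine the material's use in HN detainment or prosecution proceedings.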

  1. Tactical exploitation delivers preliminary assessments and information about the weapons employed and the people who employed them.

Operational-level exploitation can be conducted by deployed labs and provides detailed forensic and technical analysis of captured materials. When combined with all-source intelligence reporting, it supports detailed analysis of threat networks to inform subsequent targeting activities. In an irregular warfare environment, where the mission and time permit, commanders should routinely employ forensics-trained collection capabilities (explosive ordnance disposal [EOD] unit, weapons intelligence team [WIT], etc.) in their overall ground operations to take advantage of battlefield opportunities.

(1) Tactical exploitation begins at the point of collection. The point of collection includes turnover of material from HN government or civilian personnel, material and information discovered during a maritime interception operation, cache discovery, raid, IED incident, post-blast site, etc.

(2) Operational-level exploitation employs technical and forensic examination techniques of collected data and material and is conducted by highly trained examiners in expeditionary or reachback exploitation facilities.

  1. Strategic exploitation is designed to inform theater- and national-level decision makers. A commander’s strategic exploitation assets may include forward deployed or reachback joint captured materiel exploitation centers and labs capable of conducting formally accredited and/or highly sophisticated exploitation techniques. These assets can respond to theater strategic intelligence requirements and, when very specialized capabilities are leveraged, provide support to national requirements.

Strategic exploitation is designed to support national strategy and policy development. Strategic requirements usually involve targeting of high-value or high-priority actors, force protection design improvement programs, and source interdiction programs designed to deny the adversary externally furnished resources.

  1. Exploitation activities are designed to provide a progressively detailed multidisciplinary analysis of materials recovered from the OE. From the initial tactical evaluation at the point of collection, to the operational forward deployed technical/forensic field laboratory and subsequent evaluation, the enterprise is designed to provide a timely, multidisciplinary analysis to support decision making at all echelons. Exploitation capabilities vary in scope and complexity, span peacetime to wartime activities, and can be applied during all military operations.
  2. Collection and Exploitation
  3. An integrated and synchronized effort to detect, collect, process, and analyze information, materials, or people and disseminate the resulting facts provides the JFC with information or actionable intelligence.

Collection also includes the documentation of contextual information and material observed at the incident site or objective. All the activities vital to collection and exploitation are relevant to identity activities as many of the operations and efforts are capable of providing identity attributes used for developing I2 products.

(1) Site Exploitation. The JFC may employ hasty or deliberate site exploitation during operations to recognize, collect, process, preserve, and analyze information, personnel, and/or material found during the conduct of operations. Based on the type of operation, commanders and staffs assess the probability that forces will encounter a site capable of yielding information or intelligence and plan for the integration of various capabilities to conduct site exploitation.

(2) Expeditionary Exploitation Capabilities. Operational-level expeditionary labs are the focal point for the theater’s exploitation and analysis activities that provide the commander with the time-sensitive information needed to shape the OE.

(a) Technical Exploitation. Technical exploitation includes electronic and mechanical examination and analysis of collected material. This process provides information regarding weapon design, material, and suitability of mechanical and electronic components of explosive devices, improvised weapons, and associated components.

  1. Electronic Exploitation. Electronic exploitation at the operational level is limited and may require strategic-level exploitation available at reachback labs or forward deployed labs.
  2. Mechanical Exploitation. Mechanical exploitation of material (mechanical components of conventional and improvised weapons and their associated platforms) focuses on devices incorporating manual mechanisms: combinations of physical parts that transmit forces, motion, or energy.

(b) Forensic Exploitation. Forensic exploitation applies scientific techniques to link people with locations, events, and material that aid the development of targeting, interrogation, and HN/PN prosecution support.

(c) DOMEX. DOMEX consists of three exploitation techniques: document exploitation, cellular exploitation, and media exploitation. Documents, cell phones, and media recovered during collection activities, when properly processed and exploited, provide valuable information, such as adversary plans and intentions, force locations, equipment capabilities, and logistical status. Exploitable materials include paper documents such as maps, sketches, letters, cell phones, smart phones, and digitally recorded media such as hard drives and thumb drives.

  1. Supporting the Intelligence Process
  2. Within their operational areas, commanders are concerned with identifying the members of and systematically targeting the threat network, addressing threats to force protection, denying the threat network access to resources, and supporting the rule of law. Information derived from exploitation can provide specific information and actionable intelligence to address these concerns. Exploitation reporting provides specific information to help answer the CCIRs. Exploitation analysis is also used to inform the intelligence process by identifying specific individuals, locations, and activities that are of interest to the commander.
  3. Exploitation products may inform follow-on intelligence collection and analysis activities. Exploitation products can facilitate a more refined analysis of the threat network’s likely activities and, when conducted during shape and deter phases, typically enabled by HN, interagency and/or international partners, can help identify threats and likely countermeasures in advance of any combat operations.
  4. Exploitation Organization and Planning
  5. A wide variety of Service and national exploitation resources and capabilities are available to support forward deployed forces. These deployable resources are generally scalable and can make extensive use of reachback to provide analytical support. The JIPOE product will serve as a basis for determining the size and mix of capabilities that will be required to support initial operations.
  6. J-2E. During the planning process, the JFC should consider the need for exploitation support to help fulfill the requirements for information about the OE, identify potential threats to US forces, and understand the capabilities and capacity of the adversary network.

The J-2E (when organized) establishes policies and procedures for the coordination and synchronization of the exploitation of captured threat materials. The J-2E will:

(1) Evaluate and establish the commander’s collection and exploitation requirements for deployed laboratory systems or material evacuation procedures based on the mission, its objective and duration, the threat faced, military geographic factors, and the authorities granted to collect and process captured material.

(2) Ensure broad discoverability, accessibility, and usability of exploitation information at all levels to support force protection, targeting, material sourcing, signature characterization of enemy activities, and the provision of materials collected, transported, and accounted for with the fidelity necessary to support prosecution of captured insurgents or terrorists.

(3) Prepare collection plans for a subordinate exploitation task force responsible for finding and recovering battlefield materials.

(4) Provide direction to forces to ensure that the initial site collection and exploitation activities are conducted to meet the commanders’ requirements and address critical information and intelligence gaps.

(5) Ensure that exploitation enablers are integrated and synchronized at all levels and their activities support collection on behalf of the commander’s priority intelligence requirements. Planning includes actions to:

(a) Identify units and responsibilities.

(b) Ensure exploitation requirements are included in the collection plan.

(c) Define priorities and standard operating procedures for materiel recovery and exploitation.

(d) Coordinate transportation for materiel.

(e) Establish technical intelligence points of contact at all levels to expedite dissemination.

(f) Identify required augmentation skill sets and additional enablers.

  1. Exploitation Task Force

(1) As an alternative to using the JFC’s staff to manage exploitation activities, the JFC can establish an exploitation task force, integrating tactical-level and operational-level organizations and streamlining communications under a single headquarters whose total focus is on the exploitation effort. The task force construct is useful when a large number of exploitation assets have been deployed to support large-scale, long-duration operations. The organization and employment of the task force will depend on the mission, the threat, and the available enabling forces.

The combination of collection assets with specialized exploitation enablers allows the task force to conduct focused threat network analysis and targeting, provide direct support packages of exploitation enablers to higher headquarters, and organize and conduct unit-level training programs.

(a) Site Exploitation Teams. These units are task-organized teams specifically detailed and trained at the tactical level. The mission of site exploitation teams is to conduct systematic discovery activities and search operations, and properly identify, document, and preserve the point of collection and its material.

(b) EOD Teams. EOD personnel have special training and equipment to render safe explosive ordnance and IEDs, make intelligence reports on such items or components, and supervise the safe removal thereof.

(c) WITs. WITs are task-organized teams, often with organic EOD support, that exploit a site of intelligence value by collecting IED-related material; performing tactical questioning; collecting forensic materials, including latent fingerprints; preserving and documenting DOMEX, including cell phones and other electronic media; providing in-depth documentation of the site, including sketches and photographs; evaluating the effects of threat weapons systems; and preparing material for evacuation.

(d) CBRN Response Teams. When WMD or hazardous CBRN precursors may be present, CBRN response teams can be detailed to supervise the site exploitation. CBRN response team personnel are trained to properly recognize, preserve, neutralize, and collect hazardous CBRN or explosive materials.

(f) DOMEX. DOMEX support is scalable and ranges from a single liaison officer, utilizing reachback for full analysis, to a fully staffed joint document exploitation center for primary document exploitation.

APPENDIX F

THE CLANDESTINE CHARACTERISTICS OF THREAT NETWORKS

  1. Introduction

  1. Maintaining regional stability continues to pose a major challenge for the US and its PNs. The threat takes many forms, from locally based groups to mutually supporting, regionally focused transnational criminal organizations, terrorist groups, and insurgencies that leverage global transportation and information networks to communicate and to obtain and transfer resources (money, material, and personnel). In the long term, for the threat to win it must survive, and to survive it must be organized and operate so that no one strike will cripple the organization. Today’s threat networks are characterized by flexible organizational structures, adaptable and dynamic operational capabilities, a highly nuanced understanding of the OE, and a clear vision of their long-term goals.
  2. While much has been made of the revolution brought about by technology and its impact on a threat network’s organization and operational methods, the impacts have been evolutionary rather than revolutionary. Threat networks are well aware that information technology, while increasing the rate and volume of information exchange, has also increased the risk to clandestine operations: the growth in electromagnetic and cyberspace signatures puts these communications at risk of detection by governments, like the USG, that can apply a technological advantage to identify, monitor, track, and exploit those signatures.
  3. When it comes to designing a resilient and adaptable organizational structure, every successful threat network has, over time, adopted the traditional clandestine cellular network architecture. This type of network architecture provides a means of survival in form, through a cellular or compartmentalized structure, and in function, through the use of clandestine arts or tradecraft to minimize the signature of the organization, all based on the logic that the movement’s primary concern is surviving to attain its political goals.
  4. When faced with a major threat or the loss of a key leader, clandestine cellular networks contain the damage and simply morph and adapt to new leaders, just as they morph and adapt to new terrain and OEs. In some cases the networks are degraded, in others they are strengthened, but in both cases, they continue to fight on, winning by not losing. It is this “logic” of clandestine cellular networks—winning by not losing—that ensures their survival.
  5. CTN activities that focus on high-value or highly connected individuals (organizational facilitators) may achieve short-term gains but the cellular nature of most threat networks allows them to quickly replace individual losses and contain the damage. Operations should isolate the threat network from the friendly or neutral populations, regularly deny them the resources required to operate, and eliminate leadership at all levels so friendly forces can deny them the freedom of movement and freedom of action the threat needs to survive.
  6. Principles of Clandestine Cellular Networks

The survival of clandestine portions of a threat network organization rests on six principles: compartmentalization, resilience, low signature, purposeful growth, operational risk, and organizational learning. These six principles can help friendly forces to analyze current network theories, doctrine, and clandestine adversaries to identify strengths and weaknesses.

  1. Compartmentalization comes both from form and function and protects the organization by reducing the number of individuals with direct knowledge of other members, plans, and operations. Compartmentalization provides the proverbial wall to counter friendly exploitation and intelligence-driven operations.
  2. Resilience comes from organizational form and functional compartmentalization and not only minimizes damage due to counter network strikes on the network, but also provides a functional method for reconnecting the network around individuals (nodes) that have been killed or captured.
  3. Low signature is a functional component based on the application of clandestine art or tradecraft that minimizes the signature of communications, movement, inter-network interaction, and operations of the network.
  4. Purposeful growth highlights the fact that these types of networks do not grow in accordance with modern information network theories, but grow with purpose or aim: to gain access to a target, sanctuary, population, intelligence, or resources. Purposeful growth primarily relies on clandestine means of recruiting new members based on the overall purpose of the network, branch, or cell.
  5. Operational risk balances the acceptable risk for conducting operations to gain or maintain influence, relevance, or reach to attain the political goals and long-term survival of the movement. Operations increase the observable signature of the organization, threatening its survival. Clandestine cellular networks of the underground develop overt fighting forces (rural and urban) to interact with the population, the government, the international community, and third-party countries conducting FID in support of the government forces. This interaction invariably leads to increased observable signature and counter-network operations against the network’s overt elements. However, as long as the clandestine core is protected, these overt elements are considered expendable and quickly replaced.
  6. Organizational learning is the fundamental need to learn and adapt the clandestine cellular network to the current situation, the threat environment, overall organizational goals, relationships with external support mechanisms, the changing TTP of the counter network forces, new technologies, and the physical dimension, human factors, and cyberspace.
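The compartmentalization and resilience principles above can be illustrated with a toy graph model. The topology below, two cells joined only through a cut-out, is invented for illustration; it shows why capturing a single cell leader exposes only that cell's members rather than the wider network.

```python
# Minimal sketch of compartmentalization in a clandestine cellular network.
# The node names and two-cell layout are hypothetical.
from collections import defaultdict

def reachable(edges, start, removed):
    """Return the set of nodes reachable from `start` once `removed` nodes
    (e.g., captured members) are taken off the graph."""
    graph = defaultdict(set)
    for a, b in edges:
        if a not in removed and b not in removed:
            graph[a].add(b)
            graph[b].add(a)
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return seen

# Two cells connected only through a cut-out; members do not know
# anyone across the cell boundary.
edges = [
    ("leader_A", "member_A1"), ("leader_A", "member_A2"),  # cell A
    ("leader_B", "member_B1"), ("leader_B", "member_B2"),  # cell B
    ("leader_A", "cutout"), ("cutout", "leader_B"),        # only cross-cell link
]

# Capturing cell A's leader leaves cell B unreachable from cell A's members.
exposed = reachable(edges, "member_A1", removed={"leader_A"})
print("member_B1" in exposed)  # False: the damage is contained to cell A
```

With no one removed, every node is reachable from any other; removing one leader severs only that leader's cell, which is the "winning by not losing" resilience the doctrine describes.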
  7. Organization of Clandestine Cellular Networks
  8. Clandestine elements of an insurgency use form—organization and structure—for compartmentalization, relying on the basic network building block, the compartmented cell, from which the term cellular is derived. The cell size can differ significantly, from one to any number of members, as can the type of interaction within the cell, depending on the cell’s function. There are generally three basic functions—operations, intelligence, and support. The cell members may not know each other, such as in an intelligence cell, with the cell leader being the only connection between the other members. In more active operational cells, such as a direct-action cell, all the members are connected, know each other, perhaps are friends or are related, and conduct military-style operations that require large amounts of communications. Two or more cells linked to a common leader are referred to as branches of a larger network. For example, operational cells may be supported by an intelligence cell or logistics cell. Building upon the branch is the network, which is made up of multiple compartmentalized branches, generally following a pattern of intelligence (and counterintelligence) branches, operational branches (direct action or urban guerrilla cells), support branches (logistics and other operational enablers like propaganda support), and overt political branches or shadow governments.
  9. The key concept for organizational form is compartmentalization of the clandestine cellular network (i.e., each element is isolated or separated from the others). Structural compartmentalization takes two forms: the cut-out, a method that ensures opponents are unable to directly link two individuals; and lack of knowledge, in which no personal information is known about other cell members, so the capture of one does not put the others at risk. In any cell where the members must interact directly, such as in an operational or support cell, the entire cell may be detained; but if the structural compartmentalization is sound, then the counter-network forces will not be able to exploit the cell to target other cells, the leaders of the branch, or the overall network.
  10. The basic model for a clandestine cellular network consists of the underground, the auxiliary, and the fighters. The underground and auxiliary are the primary components that utilize clandestine cellular networks; the fighters are the more visible, overt action arm of the insurgency (Figure F-2). The underground and auxiliary cannot be easily replaced, while the fighters can suffer devastating defeats (e.g., Fallujah in 2004) without threatening the existence of the organization.
  11. The underground is responsible for the overall command, control, communications, information, subversion, intelligence, and covert direct action operations, such as terrorism, sabotage, and intimidation. The original members and core of the threat network generally operate as members of the underground. The underground cadres develop the organization, ideally building it from the start as a clandestine cellular network to ensure its secrecy, low signature, and survivability. The underground members operate as the overarching leaders, leaders of the organization's cells, training cadres, and/or subject matter experts for specialized skills, such as propaganda, bomb making, or communications.
  12. The auxiliary is the clandestine support element, directed by the underground, that provides logistics, operational support, and intelligence collection for the underground and the fighters. Auxiliary members use their normal daily routines as cover for their activities in support of the threat, including freedom of movement to transport materials and personnel, specialized skills (electricians, doctors, engineers, etc.), or specialized capabilities for operations. These individuals may hold jobs, such as in local security forces, as doctors and nurses, shipping and transportation specialists, or businesspeople, that give security forces a reason to allow them freedom of movement even in a crisis.
  13. The fighters are the most visible and the most easily replaced members of the threat network. While their size and armament will vary, they use a more traditional hierarchical organizational structure. The fighters are normally used for the high-risk missions where casualties are expected and can be recovered from in short order.
  14. The Elements of a Clandestine Cellular Network
  15. A growing insurgency/terrorist/criminal movement is a complex undertaking that must be carefully managed if its critical functions are to be performed successfully. Using the clandestine cellular model, the organization's leader and staff will manage a number of subordinate functional networks.
  16. These functional networks will be organized into small cells, usually arranged so that only the cell leader knows the next connection in the organization. As the organization grows, the number of required interactions will increase, but the number of actively participating members in those multicellular interactions will remain limited. Unfortunately, the individual’s increased activity also increases the risk of detection.
  17. Clandestine cellular networks are largely decentralized for execution at the tactical level, but maintain a more traditional hierarchical form above the tactical level. The core leadership may be an individual with numerous deputies, which can limit the success of decapitation strikes. Alternatively, the core leadership could be a centralized group of core individuals acting as a central committee. The core could also be a coordinating committee of like-minded threat leaders who coordinate their efforts, actions, and effects toward an overall goal, while still maintaining their own agendas.
  18. To maintain the low signature necessary for survival, network leaders give maximum latitude for tactical decision making to cell leaders. This allows them to maintain tactical agility and freedom of action based on local conditions. The key consideration of the underground leader, with regard to risk versus maintaining influence, is to expose only the peripheral tactical elements to direct contact with the counter-network forces.

LASTING SUCCESS

For the counter-network operator, the goal is to conduct activities designed to break the compartmentalization and force direct communication between members of other cells in the same branch or members of other networks. By maintaining pressure and leveraging the effects of a multi-nodal attack, friendly forces could potentially cause a catastrophic “cascading failure” and the disruption, neutralization, or destruction of multiple cells, branches, or even the entire network. Defeat of a network's overt force is only a setback. Lasting success can come only from securing the relevant population, isolating the network from external support, and identifying and neutralizing the hard-core members of the network.

Various Sources

  1. Even with rigorous compartmentalization and internal discipline, there are structural weaknesses that can be detected and exploited. These structural points of weakness include the interactions between the underground and the auxiliary, between the auxiliary and the fighters, and with external networks (transnational criminal, terrorist, or other insurgent groups) that may not have the same level of compartmentalization.
  2. Network Descriptors
  3. Networks and cells can be described as open or closed. Understanding whether a network or cell is open or closed helps intelligence analysts and planners determine the scale, vulnerability, and purpose behind it. An open network is one that is growing purposefully, recruiting members to gain strength, to gain access to targeted areas or support populations, or to replace losses. Given proper compartmentalization, open networks provide an extra security buffer for the core movement leaders by adding layers between the core and the periphery cells. Since the periphery cells on the outer edge of the network have higher signatures than the core, they draw friendly forces' attention and are more readily identified, protecting the core.
  4. Closed cells or networks have limited or no growth; their members are hand-selected, or growth is deliberately restricted, to minimize signature and chances of compromise and to focus on a specific mission. While open networks pursue purposeful growth, closed networks are deliberately held to a certain size based on their operational purpose. This is especially pertinent for terrorist cells, which are generally closed, non-growing networks of specially selected or close-knit individuals. Closed networks have an advantage in operational security, since the membership is fixed and consists of trusted individuals. Compartmentalizing a closed network protects it from infiltration, but once penetrated, it can be defeated in detail.

APPENDIX G

SOCIAL NETWORK ANALYSIS

  1. In military operations, maps have always been an invaluable tool for understanding the OE. Yet understanding the physical terrain is often secondary to understanding the people; identifying and understanding the human factors is critical. The ability to map, visualize, and measure threat, friendly, and neutral networks to identify key nodes enables commanders at the strategic, operational, and tactical levels to better optimize solutions and develop the plan.
  2. Planners should understand the environment made up of human relationships and connections established by cultural, tribal, religious, and familial demographics and affiliations.
  3. By using advanced analytical methodologies such as SNA, analysts can map out, visualize, and understand the human factors.
  4. Social Network Analysis
  5. Overview

(1) SNA is a method that identifies key nodes in a network based on four types of centrality (i.e., degree, closeness, betweenness, and eigenvector) using network diagrams. SNA focuses on the relationships (links or ties) between people, groups, or organizations (called nodes or actors). It does this by providing tools and quantitative measures that help map out, visualize, and understand networks, the relationships between people (the human factors), and how those networks and relationships may be influenced.

Network diagrams used within SNA, graphical depictions of network analysis, are referred to as sociograms; they depict the social community structure as a network with ties between nodes (see Figure G-1). Like physical terrain maps of the earth, sociograms can have differing levels of detail.

(2) SNA deepens the visualization and understanding of people within social networks and assists in ranking their potential to influence or be influenced by those networks. SNA provides an understanding of the organizational dynamics of a social network, which can be used for detailed analysis to determine how best to influence, coerce, support, attack, or exploit it. In particular, it allows planners to identify and portray the details of a network structure, illuminate key players, highlight cohesive cells or subgroups within the network, and identify individuals or groups that can or cannot be influenced, supported, manipulated, or coerced.

(3) SNA helps organize the informality of elusive and evolving networks. SNA techniques highlight the structure of a previously unobserved association by focusing on the preexisting relationships and ties that bind groups together. By focusing on roles, organizational positions, and prominent or influential actors, planners can analyze the structure of an organization, how the group functions, how members are influenced, how power is exerted, and how resources are exchanged. These factors allow the joint force to plan and execute operations that will result in desired effects on the targeted network.

(4) The physical, cultural, and social aspects of human factors involve complicated dynamics among people and organizations. These dynamics cannot be fully understood using traditional link analysis alone. SNA is distinguished from traditional, variable-based analysis that typically focuses on a person’s attributes such as gender, race, age, height, income, and religious affiliation.

While personal attributes remain fairly constant, social groups, affiliations or relationships constantly evolve. For example, a person can be a storeowner (business social network), a father (kinship social network), a member of the local government (political social network), a member of a church (religious social network), and be part of the insurgent underground (resistance social network). A person’s position in each social network matters more than their unchanging personal attributes. Their behavior in each respective network changes according to their role, influence, and authority in the network.

(1) Metrics. Analysts draw on a number of metrics and methods to better understand human networks. Common SNA metrics are broadly categorized into three metric families: network topology, actor centrality, and brokers and bridges.

(a) Network Topology. Network topology is used to measure the overall network structure, such as its size, shape, density, cohesion, and levels of centralization and hierarchy (see Figure G-2). These types of measures can provide an understanding of a network's ability to remain resilient and perform tasks efficiently. Network topology provides the planner with an understanding of how the network is organized and structured.

(b) Centrality. Indicators of centrality identify the key nodes within a network diagram, which may include influential persons in a social network. Identifying centrality helps locate key nodes in the network, illuminate potential leaders, and lead analysts to potential brokers within the network (see Figure G-3). Centrality also measures and ranks people and organizations within a network based on how central they are to that network.

  1. Degree Centrality. The degree centrality of a node is based on the number of nodes it is linked to and the strength of those links. It is measured by a simple count of the number of direct links one node has to other nodes within the network. While this number is meaningless on its own, higher degree centrality relative to other nodes may indicate an individual with greater power or influence within the network.

Nodes with a low degree of centrality (few direct links) are sometimes described as peripheral nodes (e.g., nodes I and J in Figure G-3). Although they have relatively low centrality scores, peripheral nodes can nevertheless play significant roles as resource gatherers or sources of fresh information from outside the main network.
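
As a quick illustration of the count described above, the following sketch computes degree centrality over a small hypothetical sociogram (the node names and ties are invented for the example, not taken from Figure G-3):

```python
# Hypothetical sociogram as an adjacency list of undirected ties.
ties = {
    "A": {"B", "C", "D", "E"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"A", "C"},
    "E": {"A", "F"},
    "F": {"E"},  # peripheral node: a single direct link
}

# Degree centrality: a simple count of each node's direct links.
degree = {node: len(nbrs) for node, nbrs in ties.items()}
ranked = sorted(degree, key=degree.get, reverse=True)
print(degree)     # A has the most direct links; F is peripheral
print(ranked[0])  # "A"
```

Node A's high count suggests influence, while F, with one link, is the kind of peripheral node that may still matter as a resource gatherer or a source of fresh information from outside the main network.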

  1. Closeness Centrality. Closeness centrality is the length of a node's shortest path to any other node in the network. It is measured by a simple count of the number of links or steps from a node to the farthest node from it in the network, with the lowest numbers indicating nodes with the highest levels of closeness centrality. Nodes with a high level of closeness centrality have the closest association with every other node in the network. A high level of closeness centrality affords a node the best ability to directly or indirectly access the largest number of nodes with the shortest path.

Closeness is calculated by adding the number of hops between a node and all others in the network.
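
The hop-counting calculation above can be sketched with a breadth-first search over a small hypothetical network (node names and ties invented for the example):

```python
from collections import deque

# Hypothetical network of undirected ties.
ties = {
    "A": {"B", "C"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"B", "C", "E"},
    "E": {"D"},
}

def hops_from(start, graph):
    """Breadth-first search giving the hop count from start to every node."""
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Closeness score: total hops to all other nodes (lower = more central).
closeness = {n: sum(hops_from(n, ties).values()) for n in ties}
print(closeness)  # B, C, and D reach everyone in the fewest total hops
```

Here the interior nodes have the lowest totals, so they can reach (or pass word to) every other node along the shortest paths; E, on the edge, has the highest total and the least closeness.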

  1. Betweenness Centrality. Betweenness centrality is present when a node serves as the only connection between small clusters (e.g., cliques, cells) or individual nodes and the larger network. It is not measured by counting like degree and closeness centrality are; it is either present or not present (i.e., yes or no). Having betweenness centrality allows a node to monitor and control the exchanges between the smaller and larger networks that they connect, essentially acting as a broker for information between sections of the network.
  2. Eigenvector Centrality. Eigenvector centrality measures the degree to which a node is linked to well-connected nodes and is often a measure of a node's influence in a network. It assumes that more or stronger ties to more central or influential nodes increase the importance of a node. It essentially determines the “prestige” of a node based on how many other important nodes it is linked to. A node with high eigenvector centrality is closely linked to critical hubs.
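
Both notions above can be approximated in a short sketch over a hypothetical network: betweenness in the present/not-present sense is checked by asking whether removing a node disconnects the rest, and eigenvector centrality is computed by power iteration (a standard numerical method, not one prescribed by this publication):

```python
from collections import deque

# Hypothetical network: C is the only link between the A-B cluster
# and the D-E chain.
ties = {
    "A": {"B", "C"},
    "B": {"A", "C"},
    "C": {"A", "B", "D"},
    "D": {"C", "E"},
    "E": {"D"},
}

def is_sole_connector(node, graph):
    """Yes/no betweenness: does removing the node disconnect the rest?"""
    others = [n for n in graph if n != node]
    seen, queue = {others[0]}, deque([others[0]])
    while queue:
        cur = queue.popleft()
        for nbr in graph[cur]:
            if nbr != node and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return len(seen) < len(others)

def eigenvector_centrality(graph, rounds=100):
    """Power iteration: each node's score is the sum of its neighbors'
    scores, renormalized, so ties to central nodes count for more."""
    score = {n: 1.0 for n in graph}
    for _ in range(rounds):
        new = {n: sum(score[m] for m in graph[n]) for n in graph}
        norm = max(new.values())
        score = {n: v / norm for n, v in new.items()}
    return score

brokers = [n for n in ties if is_sole_connector(n, ties)]
score = eigenvector_centrality(ties)
print(brokers)                    # C and D each bridge part of the network
print(max(score, key=score.get))  # C: tied to every well-connected node
```

C both brokers the exchange between the two clusters and carries the highest eigenvector score, illustrating why a single node can matter far more than its raw link count suggests.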

(c) Brokers and Bridges. Brokerage metrics use a combination of methods to identify either nodes (brokers) that occupy strategic positions within the network or the relationships (bridges) connecting disparate parts of the network (see Figure G-4). Brokers have the potential to function as intermediaries or liaisons in a network and can control the flow of information or resources. Nodes that lie on the periphery of a network (displaying low centrality scores) are often connected to other networks that have not been mapped. This helps planners identify gaps in their analysis and areas that still need mapping to gain a full understanding of the OE. These outer nodes provide an opportunity to gather fresh information not currently available.

  1. Density

Network density examines how well connected a network is by comparing the number of links present to the total number of links possible, which shows how sparse or connected the network is. A dense network may have more influence than a sparse one. Members of a highly interconnected network face fewer individual constraints: they may be less likely to rely on others as information brokers, be in a better position to participate in activities, or be closer to leadership, and therefore able to exert more influence upon it.
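
The present-versus-possible comparison above is a one-line ratio; a minimal sketch over a hypothetical four-node network:

```python
# Hypothetical network: 4 nodes with 4 undirected ties present.
ties = {
    "A": {"B", "C", "D"},
    "B": {"A", "C"},
    "C": {"A", "B"},
    "D": {"A"},
}

n = len(ties)
# Each undirected tie appears twice in the adjacency list, so halve the sum.
links_present = sum(len(nbrs) for nbrs in ties.values()) // 2
links_possible = n * (n - 1) // 2
density = links_present / links_possible
print(links_present, links_possible, round(density, 2))  # 4 of 6 possible ties
```

A density near 1.0 would indicate a tightly interconnected group; a value near 0.0 a sparse one that likely depends on a few brokers to move information.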

  1. Centralization. Centralization helps provide insight on whether the network is centralized around a few key personnel/organizations or decentralized among many cells or subgroups. A network centralized around one key person may allow planners to focus on that person to influence the entire network.
  2. Density and centralization can inform whether an adversary force has a centralized hierarchy or command structure, is operating under a core C2 network with multiple, relatively autonomous hubs, or is a group of ad hoc decentralized resistance elements with very little interconnectedness or cohesive C2. Centralization metrics can also identify the most central people or organizations within the resistance.
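
One common way to quantify this (an assumption here, since the publication names no specific formula) is Freeman's degree centralization, which scores 1.0 for a perfect hub-and-spoke network and 0.0 when every node has equal degree. A minimal sketch over a hypothetical star network:

```python
# Hypothetical star network: everything runs through the hub H.
ties = {
    "H": {"A", "B", "C", "D"},
    "A": {"H"}, "B": {"H"}, "C": {"H"}, "D": {"H"},
}

degree = {node: len(nbrs) for node, nbrs in ties.items()}
n = len(ties)
c_max = max(degree.values())
# Freeman degree centralization: sum of each node's shortfall from the
# most central node, normalized by the maximum possible shortfall
# (that of a perfect star), (n - 1) * (n - 2).
centralization = sum(c_max - d for d in degree.values()) / ((n - 1) * (n - 2))
print(centralization)  # 1.0: fully centralized around H
```

A score this high would suggest a network vulnerable to decapitation; a score near 0.0 suggests the ad hoc, decentralized structure described above, where no single node is a decisive target.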

Although hierarchical charts are helpful, they do not convey the underlying powerbrokers and key players that are influential within a social network, and they can often miss the brokers that control the flow of information or resources throughout the network.

  1. Interrelationship of Networks

The JFC should identify the key stakeholders, key players, and power brokers in a potential operational area.

  1. People generally identify themselves as members of one or more cohesive networks. Networks may form due to common associations between individuals that may include tribes, sub-tribes, clans, family, religious affiliations, clubs, political organizations, and professional or hobby associations. SNA helps examine the individual networks that exist within the population that are critical to understanding the human dynamics in the OE based upon known relationships.
  2. Various networks within the OE are interrelated due to an individual's association with multiple networks. SNA provides the staff with an understanding of nodes within a single network, but can be expanded to analyze interrelated networks. This may give the joint staff an indication of the potential association, level of connectivity, and potential influence of a single node on one or more interrelated networks. This aspect is essential for CTN, since a threat network's relationships with other networks must be considered by the joint staff during planning and targeting.
  3. Other Considerations
  4. Collection. Two types of data need to be collected to conduct SNA: relational data (such as family/kinship ties, business ties, trust ties, financial ties, communication ties, grievance ties, political ties, etc.) and attribute data that captures important individual characteristics (tribe affiliations, job title, address, leadership positions, etc.). Collecting, updating, and verifying this information should be coordinated across the whole of USG.

(1) Ties (or links) are the relationship between actors (nodes) (see Figure G-5). By focusing on the preexisting relationships and ties that bind a group together, SNA will help provide an understanding of the structure of the network and help identify the unobserved associations of the actors within that network. To draw an accurate picture of a network, planners need to identify ties among its members. Strong bonds formed over time by connections like family, friendship, or organizational associations characterize these ties.

(2) Capturing the relational data of social ties between people and organizations requires collection, recording, and visualization. The joint force must collect specific types of data in a structured format with standardized data definitions across the force in order to visualize the human factors in systematic sociograms.
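
A minimal sketch of what structured, standardized relational and attribute records might look like (the record types and field names here are illustrative assumptions, not a doctrinal standard):

```python
from dataclasses import dataclass

# Attribute data describes an actor (node); relational data describes a
# tie between actors. Consistent field definitions let products from
# different organizations merge cleanly.
@dataclass(frozen=True)
class Actor:
    actor_id: str
    name: str
    tribe: str
    role: str

@dataclass(frozen=True)
class Tie:
    source: str          # actor_id of one endpoint
    target: str          # actor_id of the other endpoint
    tie_type: str        # standardized term, e.g., "kinship", "financial"
    source_report: str   # where the tie was observed, for verification

actors = [
    Actor("N-001", "Example Actor", "Tribe X", "facilitator"),
    Actor("N-002", "Example Actor 2", "Tribe Y", "courier"),
]
ties = [Tie("N-001", "N-002", "financial", "open-source report")]

print(len(actors), len(ties))
```

The point of the fixed schema is interoperability: two analysts coding the same observed relationship should produce records that compare equal, which is what the standardized reference discussed below the data-collection paragraphs is meant to guarantee.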

  1. Analysis

(1) Sociograms identify influential people and organizations as well as information gaps in order to prioritize collection efforts. The social structure depicted in a sociogram implies an inherent flow of information and resources through a network. Roles and positions identify prominent or influential individuals, structures of organizations, and how the networks function. Sociograms can model the human dynamics between participants in a network, highlight how to influence the network, identify who exhibits power within the network, and illustrate resource exchanges within the network. Sociograms can also provide a description and picture of the regime networks, or neutral entities, and uncover how the population is segmented.

(2) Sociograms are representations of the actual network and may not provide a complete or true depiction of it. This could be the result of incomplete information or of including or excluding appropriate ties or actors. In addition, networks are constantly changing, and a sociogram is only as good as the last time it was updated.

  1. Challenges. Collecting human factors data to support SNA requires a concerted effort over an extended period. Data can derive from traditional intelligence gathering capabilities, historical data, open-source information, exploiting social media, known relationships, and direct observation. This human factor data should be codified into a standardized data coding process defined by a standardized reference. Entering this human factor data is a process of identifying, extracting, and categorizing raw data to facilitate analysis. For analysts to ensure they are analyzing the sociocultural relational data collected in a standardized way, the JFC can produce a reference that provides standardized definitions of relational terms. Standardization will ensure that when analysts or planners exchange analytical products or data their analysis has the same meaning to all parties involved. This is needed to avoid confusion or misrepresentation in the data analysis. Standardized data definitions ensure consistency at all levels; facilitate data and analysis product transfer among differing organizations; and allow multiple organizations to produce interoperable products concurrently.

APPENDIX H

REFERENCES

The development of JP 3-25 is based on the following primary references:

  1. General

    a. Title 10, United States Code.
    b. Strategy to Combat Transnational Organized Crime.
    c. Executive Order 12333, United States Intelligence Activities.

  2. Department of Defense Publications

    a. Department of Defense Counternarcotics and Global Threats Strategy.
    b. Department of Defense Directive (DODD) 2000.19E, Joint Improvised Explosive Device Defeat Organization.
    c. DODD 3300.03, DOD Document and Media Exploitation (DOMEX).
    d. DODD 5205.14, DOD Counter Threat Finance (CTF) Policy.
    e. DODD 5205.15E, DOD Forensic Enterprise (DFE).
    f. DODD 5240.01, DOD Intelligence Activities.
    g. DODD 8521.01E, Department of Defense Biometrics.
    h. Department of Defense Instruction (DODI) O-3300.04, Defense Biometric Enabled Intelligence (BEI) and Forensic Enabled Intelligence (FEI).
    i. DODI 5200.08, Security of DOD Installations and Resources and the DOD Physical Security Review Board (PSRB).

  3. Chairman of the Joint Chiefs of Staff Publications

    a. JP 2-01.3, Joint Intelligence Preparation of the Operational Environment.
    b. JP 3-05, Special Operations.
    c. JP 3-07.2, Antiterrorism.
    d. JP 3-07.3, Peace Operations.
    e. JP 3-07.4, Counterdrug Operations.
    f. JP 3-08, Interorganizational Cooperation.
    g. JP 3-13, Information Operations.
    h. JP 3-13.2, Military Information Support Operations.
    i. JP 3-15.1, Counter-Improvised Explosive Device Operations.
    j. JP 3-16, Multinational Operations.
    k. JP 3-20, Security Cooperation.
    l. JP 3-22, Foreign Internal Defense.
    m. JP 3-24, Counterinsurgency.
    n. JP 3-26, Counterterrorism.
    o. JP 3-40, Countering Weapons of Mass Destruction.
    p. JP 3-57, Civil-Military Operations.
    q. JP 3-60, Joint Targeting.
    r. JP 5-0, Joint Planning.
    s. Joint Doctrine Note 1-16, Identity Activities.

  4. Multi-Service Publication

ATP 5-0.3/MCRP 5-1C/NTTP 5-01.3/AFTTP 3-2.87, Multi-Service Tactics, Techniques, and Procedures for Operation Assessment.

  5. Other Publications

    a. The Haqqani Network: Pursuing Feuds Under the Guise of Jihad?, Major Lars W. Lilleby, Norwegian Army, CTX Journal, Vol. 3, No. 4, November 2013.
    b. Foreign Disaster Response, Military Review, November-December 2011.
    c. US Military Response to the 2010 Haiti Earthquake, RAND Arroyo Center, 2013.
    d. DOD Support to Foreign Disaster Relief, July 13, 2011.
    e. United Nations Stabilization Mission in Haiti website.
    f. Kirk Meyer, Former Director of the Afghan Threat Finance Cell, CTX Journal, Vol. 4, No. 3, August 2014.
    g. Networks and Netwars: The Future of Terror, Crime, and Militancy, edited by John Arquilla and David Ronfeldt.
    h. General Martin Dempsey, Chairman of the Joint Chiefs of Staff, "The Bend of Power," Foreign Policy, 25 July 2014.
    i. Alda, E., and Sala, J. L., "Links Between Terrorism, Organized Crime and Crime: The Case of the Sahel Region," Stability: International Journal of Security and Development, Vol. 3, No. 1, Article 27, pp. 1-9.
    j. International Maritime Bureau Piracy Reporting Center.

Notes on Methods and Motives: Exploring Links between Transnational Organized Crime & International Terrorism

Methods and Motives: Exploring Links between Transnational Organized Crime & International Terrorism

Authors: Dr. Louise I. Shelley, John T. Picarelli, Allison Irby, Douglas M. Hart, Patricia A. Craig-Hart, Dr. Phil Williams, Steven Simon, Nabi Abdullaev, Bartosz Stanislawski, Laura Covill

In preparation for the work on this report, we reviewed a significant body of academic research on the structure and behavior of organized crime and terrorist groups. By examining how other scholars have approached the issues of organized crime or terrorism, we were able to refine our methodology. This novel approach combines a framework drawn from intelligence analysis with the tenets of a methodological approach devised by the criminologist Donald Cressey, who uses the metaphor of an archeological dig to systematize a search for information on organized crime.7 All the data and examples used to populate the model have been verified, and our findings have been validated through the rigorous application of case study methods.

 

While experts broadly accept no single definition of organized crime, a review of the numerous definitions offered identifies several central themes.8 There is consensus that at least two perpetrators are involved, but there are differing views on whether organized crime is typically organized as a hierarchy or as a network.9

 

Organized crime is a continuing enterprise, so it does not include conspiracies that perpetrate single crimes and then go their separate ways. Furthermore, the overarching goals of organized crime groups are profit and power. Groups seek a balance between maximizing profits and minimizing their own risk, while striving for control by menacing certain businesses. Violence, or the threat of violence, is used to enforce obligations and maintain hegemony over rackets and enterprises such as extortion and narcotics smuggling. Corruption is a means of reducing the criminals' own risk, maintaining control, and making profits.

A few definitions challenge the common view of organized crime as a ‘parallel government’ that seeks power at the expense of the state but retains patriotic or nationalistic ties to it. This report takes up that challenge by illustrating the rise of a new class of criminal groups with little or no national allegiance. These criminals are ready to provide services for terrorists, as has been observed in European prisons.10

We prefer the definition offered by the UN Convention Against Transnational Organized Crime, which defines an organized crime group as “a structured group [that is not randomly formed for the immediate commission of an offense] of three or more persons, existing for a period of time and acting in concert with the aim of committing one or more serious crimes or offences [punishable by a deprivation of liberty of at least four years] established in accordance with this Convention, in order to obtain, directly or indirectly, a financial or other material benefit.”

We prefer the notion of a number of shadow economies as our basic unit of reference, in the same way that macroeconomists use the global economy, comprising markets, sectors, and national economies.

Terrorism scholar Bruce Hoffman has offered a comprehensive and useful definition of terrorism as the deliberate creation and exploitation of fear through violence or the threat of violence in the pursuit of political change.15 Hoffman's definition offers precise terms of reference while remaining comprehensive; he further notes that terrorism is ‘political in aims and motives,’ ‘violent,’ ‘designed to have far-reaching psychological repercussions beyond the immediate victim or target,’ and ‘conducted by an organization with an identifiable chain of command or conspiratorial cell structure.’ These elements admit acts of terrorism by many different types of groups, yet they clearly circumscribe what counts as a terrorist act. The Hoffman definition can therefore be applied to both groups and activities, a crucial distinction for the methodology we propose in this report.

Early identification of terror-crime cooperation occurred in the 1980s and focused naturally on narcoterrorism, a phrase coined by Peru’s President Belaunde Terry to describe the terrorist attacks against anti-narcotics police in Peru.

The links between narcotics trafficking and terror groups exist in many regions of the world, but it is difficult to make generalizations about the terror-crime nexus.

 

International relations theorists have also produced a group of scholarly works that examine organized crime and terrorism (i.e., agents or processes) as objects of investigation for their paradigms. While in some cases the frames of reference employed by international relations scholars proved too general for the purposes of this report, the team found that these works illuminated the environmental and behavioral aspects of the interaction.

2.3 Data collection

Much of the information in the report that follows was taken from open sources, including government reports, private and academic journal articles, court documents and media accounts.

To ensure accuracy in the collection of data, we adopted standards and methods to form criteria for accepting data from open sources. First, to improve accuracy and reduce bias, we attempted to corroborate every piece of data collected from one secondary source with data from a further source that was independent of the original source — that is, the second source did not quote the first. Second, particularly when using media sources, we checked subsequent reporting by the same publication to find out whether the subject was described in the same way as before. Third, we sought a more heterogeneous data set by examining foreign-language documents from non-U.S. sources. We also obtained primary-source materials, such as declassified intelligence reports from the Republic of Georgia, that helped to clarify and confirm the data found in secondary sources.

Since all these meetings were confidential, it was agreed in all cases that the information given was not for attribution by name.

For each of these studies, researchers traveled to the regions a number of times to collect information. Their work was combined with relevant secondary sources to produce detailed case studies presented later in the report. The format of the case studies followed the tenets outlined by Robert Yin, who proposes that case studies offer an advantage to researchers who present data illustrating complex relationships – such as the link between organized crime and terror.

 

2.4. Research goals

This project aimed to discover whether terrorist and organized crime groups would borrow one another’s methods, or cooperate, by what means, and how investigators and analysts could locate and assess crime-terror interactions. This led to an examination of why this overlap or interaction takes place. Are the benefits merely logistical or do both sides derive some long-term gains such as undermining the capacity of the state to detect and curtail their activities?

The project team developed a new analytical method, preparation of the investigative environment (PIE), by adapting a long-held military practice called intelligence preparation of the battlespace (IPB). The IPB method anticipates enemy locations and movements in order to obtain the best position for a commander's limited battlefield resources and troops. The goal of PIE is similar to that of IPB: to provide investigators and analysts a strategic and discursive analytical method to identify areas ripe for locating terror and crime interactions, confirm their existence, and then assess the ramifications of these collaborations. The PIE approach provides twelve watch points within which investigators and analysts can identify those areas most likely to contain crime-terror interactions.

The PIE methodology was designed with the investigator and analyst in mind, and thus it demonstrates how to establish investigations in a way that expends resources most fruitfully. It also shows how insights gained by analysts can help practitioners identify problems and organize their investigations more effectively.

2.5. Research challenges

Our first challenge in investigating the links between organized crime and terrorism was to obtain enough data to provide an accurate portrayal of that relationship. Given the secrecy of all criminal organizations, many traditional methods of quantitative and qualitative research were not viable. Nonetheless we conducted numerous interviews and obtained statements from investigators and policy officials. Records of legal proceedings, criminal records, and terrorist incident reports were also important data sources.

The strategy underlying the collection of data was to focus on the sources of interaction wherever they were located (e.g., in developing countries and urban areas), rather than on instances of interaction uncovered in developed countries, such as the September 11th or Madrid bombing investigations. In so doing, the project team hoped to avoid characterizing the problem “from out there.”

 

All three case studies highlight patterns of association that are particularly visible, frequent, and of lengthy duration. Because the conflict regions in the case studies also contribute to crime in the United States, our view was that these models were needed to perceive patterns of association that are less visible in other environments. A further element in the selection of these regions was practical: in each one, researchers affiliated with the project had access to reliable sources with first-hand knowledge of the subject matter. Our hypothesis was that relationships would be easiest to detect in societies so corrupt, and with such limited law enforcement, that the phenomena would be more open to analysis and disclosure than in environments where such activity remains covert.

  3. A new analytical approach: PIE

Investigators seeking to detect terrorist activity before an incident takes place are overwhelmed by data.

A counterterrorist analyst at the Central Intelligence Agency took this further, noting that the discovery of crime-terror interactions was often the accidental result of analysis on a specific terror group, and thus rarely was connected to the criminal patterns of other terror groups.

IPB is an attractive basis for analyzing the behavior of criminal and terrorist groups because it focuses on evidence about their operational behavior as well as the environment in which they operate. This evidence is plentiful: communications, financial transactions, organizational forms and behavioral patterns can all be analyzed using a form of IPB.

The project team has devised a methodology based on IPB, which we have termed preparation of the investigative environment, or PIE. We define PIE as a concept in which investigators and analysts organize existing data to identify areas of high potential for collaboration between terrorists and organized criminals, in order then to develop specific cases of crime-terror interaction, thereby generating further intelligence for early warning of planned terrorist activity.

While IPB is chiefly a method of eliminating data that is not likely to be relevant, our PIE method also provides positive indicators about where relevant evidence should be sought.

3.1 The theoretical basis for the PIE Method

Donald Cressey’s famous study of organized crime in the U.S., with its analogy of an archeological dig, was the starting point for our model of crime-terror cooperation.35 As Cressey describes it, archeologists first examine documentary sources to collect what is known and develop a map from it. That map allows the investigator to focus on those areas that are not known; that is, the archeologist uses the map to decide where to dig. The map also serves as a context within which artifacts discovered during the dig can be evaluated for their significance. For example, discovery of a bowl at a certain depth and location can tell the investigator when an encampment was established and by whom.

The U.S. Department of Defense defines IPB as an analytical methodology employed to reduce uncertainties concerning the enemy, environment, and terrain for all types of operations. Intelligence preparation of the battlespace builds an extensive database for each potential area in which a unit may be required to operate. The database is then analyzed in detail to determine the impact of the enemy, environment, and terrain on operations, and the results are presented in graphic form.36 Alongside Cressey’s approach, IPB was selected as a second basis of our methodological approach.

Territory outside the control of the central state is a favored locale for crime-terror interactions: failed or failing states, poorly regulated or border regions (especially those surrounding the intersection of multiple borders), and parts of otherwise viable states where law and order is absent or compromised, including urban quarters populated by diaspora communities and penal institutions.

3.2 Implementing PIE as an investigative tool

Organized crime and terrorist groups have significant differences in their organizational form, culture, and goals. Bruce Hoffman notes that terrorist organizations can be further categorized based on their organizational ideology.

In converting IPB to PIE, we defined a series of watch points based on organizational form, goals, culture and other aspects to ensure PIE is flexible enough to compare a transnational criminal syndicate or a traditional crime hierarchy with an ethno-nationalist terrorist faction or an apocalyptic terror group.

The standard operating procedures and means by which military units are expected to achieve their battle plan are called doctrine, which is normally spelled out in great detail in manuals and training regimens. The doctrine of an opposing force thus is an important part of an IPB analysis. Such information is equally important to PIE, but it is rarely found in manuals, nor is it as highly developed as military doctrine.

Once the organizational forms, terrain and behavior of criminal and terrorist groups were defined at this level of detail, we settled on 12 watch points to cover the three components of PIE. For example, the watch point entitled organizational goals examines what the goals of organized crime and terror groups can tell investigators about potential collaboration or overlap between the two.

Investigators using PIE will collect evidence systematically through the investigation of watch points and analyze the data through its application to one or more indicators. That in turn will enable them to build a case for making timely predictions about crime-terror cooperation or overlap. Conversely, PIE also provides a mechanism for ruling out such links.

The indicators are designed to reduce the fundamental uncertainty associated with seemingly disparate or unrelated pieces of information. They also serve as a way of constructing probable cause, with evidence triggering indicators.

Although some watch points may generate ambiguous indicators of interaction between terror and crime, providing investigators and analysts with negative evidence of collusion between criminals and terrorists also has the practical benefit of steering scarce resources toward higher pay-off areas for detecting cooperation between the groups.

3.3. PIE composition: Watch points and indicators

The first step of PIE is to identify those areas where terror-crime collaborations are most likely to occur. To prepare this environment, PIE asks investigators and analysts to engage in three preliminary analyses: first, to map where particular criminal and terrorist groups are likely to be operating, both in physical geographic terms and through traditional and electronic media; second, to develop typologies of the behavior patterns of the groups and, when possible, their broader networks (often represented chronologically as a timeline); third, to detail the organizations of specific crime and terror groups and, as feasible, their networks.

The geographical areas where terrorists and criminals are highly likely to be cooperating are known in IPB parlance as named areas of interest, or localities that are highly likely to support military operations. In PIE they are referred to as watch points.

A critical function of PIE is to set sensible priorities for analysts.

The second step of a PIE analysis concentrates on the watch points to identify named areas of interaction where overlaps between crime and terror groups are most likely. The PIE method expresses areas of interest geographically but remains focused on the overlap between terrorism and organized crime.

The three preliminary analyses mentioned above are deconstructed into watch points, which are broad categories of potential crime-terror interactions.

The use of PIE leads to the early detection of named areas of interest through the analysis of watch points, providing investigators the means to concentrate their focus on terror-crime interactions and thereby enhancing their ability to detect possible terrorist planning.

The third and final step is the collection and analysis of information that indicates organizational, operational, or other nodes where criminals and terrorists appear to interact. While watch points are broad categories, they are composed of specific indicators of how organized criminals and terrorists might cooperate. These specific patterns of behavior help to confirm or deny that a watch point is applicable.

If several indicators are present, or if the indicators are particularly clear, this bolsters the evidence that a particular type of terror-crime interaction is present. No single indicator is likely to provide ‘smoking gun’ evidence of a link, although examples of this have occasionally arisen. Instead, PIE is a holistic approach that collects evidence systematically in order to make timely predictions of an affiliation, or not, between specific criminal and terrorist groups.

For policy analysts and planners, indicators reduce the sampling risk that is unavoidable for anyone collecting seemingly disparate and unrelated pieces of evidence. For investigators, indicators serve as a means of constructing probable cause. Indeed, even negative evidence of interaction has the practical benefit of helping investigators and analysts manage their scarce resources more efficiently.
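As an illustration of how watch points and indicators might be operationalized, the following sketch scores a watch point by summing the weights of observed indicators. This is a toy model only: the watch point names, indicators, weights, and trigger threshold are all hypothetical, not values taken from the report.

```python
# Illustrative sketch of the PIE watch-point/indicator logic.
# Watch point names, indicators, weights, and the threshold are
# hypothetical examples, not values drawn from the report.

WATCH_POINTS = {
    "shared_illicit_nodes": {
        "shared_document_forger": 0.4,
        "shared_arms_supplier": 0.4,
        "shared_safe_houses": 0.2,
    },
    "financial_transactions": {
        "common_bank_accounts": 0.5,
        "overlapping_money_couriers": 0.3,
        "mirrored_transfer_patterns": 0.2,
    },
}

def evaluate_watch_point(name, observed_indicators, threshold=0.5):
    """Sum the weights of observed indicators; a watch point is
    'triggered' when the total crosses the threshold, suggesting
    further investigation of a crime-terror link is warranted."""
    weights = WATCH_POINTS[name]
    score = sum(w for ind, w in weights.items() if ind in observed_indicators)
    return score, score >= threshold

score, triggered = evaluate_watch_point(
    "shared_illicit_nodes",
    {"shared_document_forger", "shared_arms_supplier"},
)
print(score, triggered)
```

Note that, as in the text, several weak indicators together can trigger a watch point even when no single "smoking gun" indicator is present.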

3.4 The PIE approach in practice: Two Cases

The process began with the collection of relevant information (scanning) that was then placed into the larger context of watch points and indicators (codification) in order to produce the aforementioned analytical insights (abstraction).

 

Each case will describe how the TraCCC team shared (diffusion) its findings in order to obtain validation and to have an impact on practitioners fighting terrorism and/or organized crime.

3.4.1 The Georgia Case

In 2003-4, TraCCC used the PIE approach to identify one of the largest money laundering cases ever successfully prosecuted. The PIE method helped close down a major international vehicle for money laundering. Organizing the financial records of a major money launderer allowed the construction of a network map that revealed linkages among major criminal groups whose relationship had not previously been acknowledged.

Some of the information most pertinent to Georgia included, but was not limited to, the following:

  1. Corrupt Georgian officials held high law enforcement positions prior to the Rose Revolution and maintained ties to crime and terror groups that allowed them to operate with impunity;
  2. Similar patterns of violence were found among organized crime and terrorist groups operating in Georgia;
  3. Numerous banks, corrupt officials and other providers of illicit goods and services assisted both organized crime and terrorists;
  4. Regions of the country, including Abkhazia, Ajaria and Ossetia, supported criminal infrastructures useful to organized crime and terrorists alike.

Combined with numerous other pieces of information and placed into the PIE watch point structure, the resulting analysis triggered a sufficient number of indicators to suggest that further analysis was warranted to try to locate a crime-terror interaction.

 

The second step of the PIE analysis was to examine information within the watch points for connections that would suggest patterns of interaction between specific crime and terror groups. These points of interaction are identified in the Black Sea case study, but the most successful identification came from analysis of the watch point that specifically examined the financial environment facilitating the link between crime and terrorism.

The TraCCC team began its investigation within this watch point by identifying the sectors of the Georgian economy that were most conducive to economic crime and money laundering. This included such sectors as energy, railroads and banking. All of these sectors were found to be highly criminalized.

Only researchers with knowledge of the economic climate, the nature of the business community, and the banking sector could determine that investigative resources needed to be concentrated on the “G” bank. Knowing the terrain, the newly established financial investigative unit of the Central Bank focused its attention on the G bank. A six-month analysis of the G bank and its transactions enabled the development of a massive network analysis that facilitated prosecution in Georgia and may lead to prosecutions in major financial centers that were previously unable to address some crime groups, at least one of which was linked to a terrorist group.

Using PIE allowed a major intelligence breakthrough.

First, it located a large facilitator of dirty money. Second, the approach was able to map fundamental connections between crime and terror groups. Third, the analysis highlighted the enormous service that purely “dirty” banks housed in countries with small economies can provide to transnational crime and even terrorism.

While specific details must remain sealed in deference to ongoing legal proceedings, to date the PIE analysis has grown into investigations in Switzerland, the US, and Georgia.

The PIE approach favors the construction and prosecution of viable cases.

The PIE approach is a platform for starting and later focusing investigations. When coupled with investigative techniques like network analysis, it supports the construction and eventual prosecution of cases against organized crime and terrorist suspects.

3.4.2 Russian Closed Cities

In early 2005, a US government agency asked TraCCC to identify how terrorists might try to take advantage of organized crime groups and corruption to obtain fissile material in a specific region of Russia, one that is home to a number of sensitive weapons facilities located in so-called “closed cities.” The project team assembled a wealth of information concerning the presence and activities of both criminal and terror groups in the region in question, but was left with the question of how best to organize the data and develop significant conclusions.

The project’s information supported connections in 11 watch points, including:

  • A vast increase in the prevalence of violence in the region, especially in economic sectors with close ties to organized crime;
  • Commercial ties in the drug trade between crime groups in the region and Islamic terror groups formerly located in Afghanistan;
  • Rampant corruption in all levels of the regional government and law enforcement mechanisms, rendering portions of the region nearly ungovernable;
  • The presence of numerous regional and transnational crime groups as well as recruiters for Islamic groups on terrorist watch lists.

Employment of the watch points generated leads to important connections that were not readily apparent until placed into the larger context of the PIE analytical framework. Specifically, the analysis might not have included evidence of trust links and cultural ties between crime and terror groups had the PIE approach not explained their utility.

When the TraCCC team applied PIE to the closed cities case, it found that these technologies reduced the time spent analyzing data while improving the analytical rigor of the task. For example, structured queries of databases and online search engines provided information quickly. Likewise, network mapping improved analytical rigor by codifying the links between numerous actors (e.g., crime groups, terror groups, workers at weapons facilities and corrupt officials) in local, regional and transnational contexts.
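A minimal sketch of the kind of network mapping mentioned above: actors become nodes, documented ties become edges, and a search surfaces chains of intermediaries linking crime and terror actors. All actor names and ties below are invented for illustration; a real analysis would draw on case records.

```python
# Toy network mapping: actors are nodes, documented ties are edges.
# All names and relationships are invented for illustration only.
from collections import defaultdict, deque

edges = [
    ("crime_group_A", "corrupt_official_1"),
    ("corrupt_official_1", "weapons_facility_worker"),
    ("crime_group_A", "regional_bank"),
    ("regional_bank", "terror_recruiter"),
]

# Build an undirected adjacency structure.
graph = defaultdict(set)
for a, b in edges:
    graph[a].add(b)
    graph[b].add(a)

def path_between(graph, start, goal):
    """Breadth-first search: return a shortest chain of intermediaries
    linking two actors, or None if they are unconnected."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(path_between(graph, "terror_recruiter", "weapons_facility_worker"))
```

Even this toy example shows the analytical payoff described in the text: the link between a terror recruiter and a weapons-facility worker is invisible in any single record and emerges only from the assembled network.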

3.5 Emergent behavior and automation

The dynamic nature of crime and terror groups complicates the IPB to PIE transition. The spectrum of cooperation demonstrates that crime-terror intersections are emergent phenomena.

PIE must have feedback loops to cope with the emergent behavior of crime and terror groups.

When the project team spoke with analysts and investigators, the one deficiency they noted was their limited ability to conduct strategic intelligence analysis given their operational tempo.

  4. The terror-crime interaction spectrum

In formulating PIE, we recognized that crime and terrorist groups are more diverse in nature than military units. They may be networks or hierarchies, they have a variety of cultures rather than a disciplined code of behavior, and their goals are far less clear. Hoffman notes that terrorist groups can be further categorized based on their organizational ideology.

Other researchers have found significant evidence of interaction between terrorism and organized crime, often in support of the general observation that while their methods might converge, the basic motives of crime and terror groups would serve to keep them at arm’s length—thus the term “methods, not motives.”41 Indeed, the differences between the two are plentiful: terrorists pursue political or religious objectives through overt violence against civilians and military targets. They turn to crime for the money they need to survive and operate.

Criminal groups, on the other hand, are focused on making money. Any use of violence tends to be concealed, and is generally focused on tactical goals such as intimidating witnesses, eliminating competitors or obstructing investigators.

In a corrupt environment, the two groups find common cause.

Terrorists often find it expedient, even necessary, to deal with outsiders to get funding and logistical support for their operations. As such interactions are repeated over time, concerns arise that criminal and terrorist organizations will integrate and might even form new types of organizations.

Support for this point can be found in the seminal work of Sutherland, who has argued that the “intensity and duration” of an association with criminals makes an individual more likely to adopt criminal behavior. In conflict regions, where there is intensive interaction between criminals and terrorists, there is more shared behavior and a process of mutual learning that goes on.

The dynamic relationship between international terror and transnational crime has important strategic implications for the United States.

The result is a model known as the terror-crime interaction spectrum that depicts the relationship between terror and criminal groups and the different forms it takes.

Each form of interaction represents different, yet specific, threats, as well as opportunities for detection by law enforcement and intelligence agencies.

An interview with a retired member of the Chicago organized crime investigative unit revealed that it had investigated taxi companies and taxicab owners as cash-based money launderers. Logic suggests that terrorists may also be benefiting from the scheme. But this line of investigation was not pursued in the 9/11 investigations although two of the hijackers had worked as taxi drivers.

Within the spectrum, processes we refer to as activity appropriation, nexus, symbiotic relationship, hybrid, and transformation illustrate the different forms of interaction between a terrorist group and an organized crime group, as well as the behavior of a single group engaged in both terrorism and organized crime.

While activity appropriation does not represent organizational linkages between crime and terror groups, it does capture the merger of methods that were well-documented in section 2. Activity appropriation is one way that terrorists are exposed to organized crime activities and, as Chris Dishman has noted, can lead to a transformation of terror cells into organized crime groups.

Applying Sutherland’s principle of differential association, these activities are likely to bring a terror group into regular contact with organized crime. As terrorists attempt to acquire forged documents, launder money, or pay bribes, it is a natural step to draw on the support and expertise of a criminal group, which is likely to have more experience in these activities. This relationship is referred to here as a nexus.

Terrorists first engage in “do it yourself” organized crime and then turn to organized crime groups for specialized services like document forgery or money laundering.

In most cases a nexus involves the criminals providing goods and services to terrorists for payment, although it can work in both directions. A typically short-term relationship, a nexus does not imply that the criminals share the ideological views of the terrorists, merely that the transaction offers benefits to both sides.

After all, they have many needs in common: safe havens, false documentation, evasive tactics, and other strategies to lower the risk of being detected. In Latin America, transnational criminal gangs have employed terrorist groups to guard their drug processing plants. In Northern Ireland, terrorists have provided protection for human smuggling operations by the Chinese Triads.

If the nexus continues to benefit both sides over a period of time, the relationship will deepen. More members of both groups will cooperate, and the groups will create structures and procedures for their business transactions, transfer skills and/or share best practices. We refer to this closer, more sustained cooperation as a symbiotic relationship, and define it as a relationship of mutual benefit or dependence.

In the next stage, the two groups continue to cooperate over a long period and members of the organized crime group begin to share the ideological goals of the terrorists. They grow increasingly alike and finally they merge. That process results in a hybrid or dark network49 that has been memorably described as “terrorist by day and criminal by night.”50 Such an organization engages in criminal acts but also has a political agenda. Both the criminal and political ends are advanced by the use of violence and corruption.

These developments are not inevitable; rather, they result from a series of opportunities that can lead to the next stage of cooperation. It is important to recognize, however, that even once two groups have reached the hybrid stage, there is no reason per se to suspect that transformation will follow. Likewise, a group may persist with borrowed methods indefinitely without ever progressing to cooperation. In Italy and elsewhere, crime groups that also engaged in terrorism never found a terrorist partner and thus remained at the activity appropriation stage. Eventually they ended their terrorist activities and returned to the exclusive pursuit of organized crime.
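At heart, the interaction spectrum described above is an ordered set of stages with non-inevitable progression. The stage names below come from the text; encoding them as an ordered enumeration is purely our illustrative choice.

```python
# Stages of the terror-crime interaction spectrum, as named in the
# text. Representing them as an ordered enum is illustrative only.
from enum import IntEnum

class InteractionStage(IntEnum):
    """Ordered by depth of cooperation. Progression is possible but
    not inevitable: a group may remain at any stage indefinitely,
    and reaching one stage does not imply the next will follow."""
    ACTIVITY_APPROPRIATION = 1  # borrowing methods, no organizational link
    NEXUS = 2                   # short-term, transactional ties
    SYMBIOTIC = 3               # sustained mutual benefit or dependence
    HYBRID = 4                  # merged "dark network" with dual agenda
    TRANSFORMATION = 5          # one organizational type becomes the other

def deeper(a: InteractionStage, b: InteractionStage) -> bool:
    """True if stage a represents deeper integration than stage b."""
    return a > b

print(deeper(InteractionStage.HYBRID, InteractionStage.NEXUS))
```

The ordering captures the text's claim that each stage deepens cooperation, while the comments preserve its caveat that the sequence is not an inevitable evolutionary path.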

Interestingly, the TraCCC team found no example where a terrorist group engaging in organized crime, either through activity appropriation or through an organizational linkage, came into conflict with a criminal group.51 Neither archival sources nor our interviews revealed such a conflict over “turf,” though logic would suggest that organized crime groups would react to such forms of competition.

The spectrum does not create exact models of the evolution of criminal-terrorist cooperation. Indeed, the evidence presented both here and in prior studies suggests that a single evolutionary path for crime-terror interactions does not exist. Environmental factors outside the control of either organization and the varied requirements of specific organized crime or terrorist groups are but two of the reasons that interactions appear more idiosyncratic than generalizable.

Using the PIE method, investigators and analysts can gain an understanding of the terror-crime intersection by analyzing evidence sourced from communications, financial transactions, organizational charts, and behavior. They can also apply the methodology to analyze watch points where the two entities may interact. Finally, using physical, electronic, and data surveillance, they can develop indicators showing where watch points translate into practice.

  5. The significance of terror-crime interactions in geographic terms

Some shared characteristics arose from examining this case. First, both neighborhoods shared similar diaspora compositions and a lack of effective or interested policing. Second, both terror cells had strong connections to the shadow economy.

The case demonstrated that each cell shared three factors: poor governance, a sense of ethnic separation within the cell (supported by the nature of the larger diaspora neighborhoods), and a tradition of organized crime.

U.S. intelligence and law enforcement are naturally inclined to focus on manifestations of organized crime and terrorism in their own country, but they would benefit from studying and assessing patterns and behavior of crime in other countries as well as areas of potential relevance to terrorism.

When turning to the situation overseas, one can differentiate between longstanding crime groups and their more recently formed counterparts according to their relationship to the state. With the exception of Colombia, rarely do large, established (i.e., “traditional”) crime organizations link with terrorists. These groups possess long-held financial interests that would suffer should the structures of the state and the international financial community come to be undermined. Through corruption and movement into the lawful economy, these groups minimize the risk of prosecution and therefore do not fear the power of state institutions.

Developing countries with weak economies, a lack of social structures, many desperate, hungry people, and a history of unstable government are both relatively likely to provide ideological and economic foundations for organized crime and terrorism within their borders and relatively unlikely to have much capacity to combat either of them. Conflict zones have traditionally provided tremendous opportunities for smuggling and corruption and reduced oversight capacities, as regulatory and enforcement efforts become almost solely directed at military targets. They are therefore especially vulnerable to both serious organized crime and violent activity directed at civilian populations for political goals, as well as to cooperation between those engaging in purely criminal activities and those engaging in politically motivated violence.

Post-conflict zones are also likely to spawn such cooperation, as such areas often retain weak enforcement capacity for some time following an end to formal hostilities.

These patterns of criminal behavior and organization can arise in areas as diverse as conflict zones overseas (and can then replicate once the groups arrive in the U.S.) and neighborhoods in U.S. cities. The problematic combination of poor governance, ethnic separation from the larger society, and a tradition of (frequently international) criminal activity is the primary concern behind this broad taxonomy of geographic locales for crime-terror interaction.

  6. Watch points and indicators

Taking the evidence of cooperation between organized crime and terrorism, we have generated 12 specific areas of interaction, which we refer to as watch points. In turn these watch points are subdivided into a number of indicators that point out where interaction between terror and crime may be taking place.

These watch points cover a variety of habits and operating modes of organized crime and terrorist groups.

We have organized our watch points into three categories: environmental, organizational, and behavioral. Each of the following sections details one of the twelve watch points.

 

Watch Point 1: Open activities in the legitimate economy

Watch Point 2: Shared illicit nodes

Watch Point 3: Communications

Watch Point 4: Use of information technology (IT)

Watch Point 5: Violence

Watch Point 6: Use of corruption

Watch Point 7: Financial transactions & money laundering

Watch Point 8: Organizational structures

Watch Point 9: Organizational goals

Watch Point 10: Culture

Watch Point 11: Popular support

Watch Point 12: Trust

 

6.1. Watch Point 1: Open activities in the legitimate economy

The many indicators of possible links include habits of travel, the use of mail and courier services, and the operation of fronts.

Organized crime and terror may be associated with subterfuge and secrecy, but both criminal types engage legitimate society quite openly for particular political purposes. In doing so, criminal groups are likely to leave greater “traces” than terrorist groups, especially when they operate in societies with functioning governments.

Terrorist groups usually seek to make common cause with segments of society that will support their goals, particularly the very poor and the disadvantaged. Terrorists usually champion repressed or disenfranchised ethnic and religious minorities, describing their terrorist activities as mechanisms to pressure the government for greater autonomy and freedom, even independence, for these minorities… They openly take responsibility for their attacks, but their operational mechanisms are generally kept secret, and any ongoing contacts they may have with legitimate organizations are carefully hidden.

Criminal groups, like terrorists, may have political goals. For example, such groups may seek to strengthen their legitimacy through donating some of their profits to charity. Colombian drug traffickers are generous in their support of schools and local sports teams.5

Criminals of all types could scarcely carry out criminal activities, maintain their cover, and manage their money flows without doing legal transactions with legitimate businesses.

Travel: Frequent use of passenger carriers and shipping companies is a potential indicator of illicit activity. Clues can be gleaned from almost any pattern of travel that can be identified as such.

Mail and courier services: Indicators of interaction are present in the tracking information on international shipments of goods, which also generate customs records. Large shipments require bills of lading and other documentation. Analysis of such transactions, cross-referenced with information in crime databases, can identify links between organized crime and terrorist groups.
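The cross-referencing step described above can be sketched in a few lines. This is an illustrative sketch only, not part of the report's methodology; all company names, routes, and database contents are hypothetical.

```python
# Minimal shipment records: (shipper, consignee, route). All hypothetical.
shipments = [
    ("Alfa Trading", "Beta Imports", "Asuncion->Sao Paulo"),
    ("Alfa Trading", "Delta Goods", "Sao Paulo->Beirut"),
    ("Gamma Export", "Beta Imports", "Hong Kong->Ciudad del Este"),
]

# Entities flagged in a (hypothetical) organized-crime database.
crime_linked = {"Alfa Trading"}
# Entities flagged in a (hypothetical) terrorism-finance database.
terror_linked = {"Delta Goods"}

def find_links(shipments, crime_linked, terror_linked):
    """Flag shipments that connect a crime-linked party to a terror-linked one."""
    hits = []
    for shipper, consignee, route in shipments:
        parties = {shipper, consignee}
        if parties & crime_linked and parties & terror_linked:
            hits.append((shipper, consignee, route))
    return hits

print(find_links(shipments, crime_linked, terror_linked))
# → [('Alfa Trading', 'Delta Goods', 'Sao Paulo->Beirut')]
```

In practice the "databases" would be customs records and law enforcement watchlists; the set intersection simply formalizes the idea of a shared node between the two worlds.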

Fronts: A shared front company or mutual connections to legitimate businesses are clearly also indicators of interaction.

Watch Point 2: Shared illicit nodes

 

The significance of overt operations by criminal groups should not be overstated. Transnational crime and terror groups alike carry out their operations for the most part with illegal and undercover methods. There are many similarities in these tactics. Both organized criminals and terrorists need forged passports, driver’s licenses, and other fraudulent documents. Dishonest accountants and bankers help criminals launder money and commit fraud. Arms and explosives, training camps and safe houses are other goods and services that terrorists obtain illicitly.

Fraudulent Documents. Groups of both types may use the same sources of false documents,

or the same techniques, indicating cooperation or overlap. A criminal group often develops an expertise in false document production as a business, expanding production and building a customer base.

 

Some of the 9/11 hijackers fraudulently obtained legitimate driver’s licenses through a fraud ring based at a DMV office in the Virginia suburbs of Washington, DC. According to an INS investigator, this ring was under investigation well before the 9/11 attacks, but there was insufficient political will inside the INS to take the case further.

Arms Suppliers. Both terror and organized crime might use the same supplier, or the same distinctive method of doing business, such as bartering weapons or drugs. In 2001 the Basque terror group ETA contracted with factions of the Italian Camorra to obtain missile launchers and ammunition in return for narcotics.

Financial experts. Bankers and financial professionals who assist organized crime might also have terrorist affiliations. The methods of money laundering long used by narcotics traffickers and other organized crime have now been adopted by some terrorist groups.

 

Drug Traffickers. Drug trafficking is the single largest source of revenues for international organized crime. Substantial criminal groups often maintain well-established smuggling routes to distribute drugs. Such an infrastructure would be valuable to terrorists who purchased weapons of mass destruction and needed to transport them.

 

Other Criminal Enterprises. An increasing number of criminal enterprises outside of narcotics smuggling are serving the financial or logistical ends of terror groups and thus serve as nodes of interaction. For example, piracy on the high seas, a growing threat to maritime commerce, often depends on the collusion of port authorities, which are controlled in many cases by organized crime.

These patterns are most pronounced in developed countries with effective law enforcement, where criminals need to be more cautious and often restrict their operations to covert activity. In conflict zones, however, criminals of all types feel far less restraint about flaunting their illegal activities, since there is little chance of being detected or apprehended.

Watch Point 3: Communications

 

The Internet, mobile phones and satellite communications enable criminals and terrorists to communicate globally in a relatively secure fashion. FARC, in concert with Colombian drug cartels, offered training on how to set up narcotics trafficking businesses, and used secure websites and email to handle registration.

Such scenarios are neither hypothetical nor anecdotal. Interviews with an analyst at the US Drug Enforcement Administration revealed that narcotics cartels were increasingly encrypting their digital communications, and that the same groups were frequently turning to information technology experts to help secure those communications.

Nodes of interaction therefore include:

  • Technical overlap: Examples exist of organized crime groups opening their illegal communications systems to any paying customer, thus providing a service to other criminals and terrorists alike. For example, a recent investigation found clandestine telephone exchanges in the Tri-Border region of South America that were connected to Jihadist networks. Most were located in Brazil, since calls between Middle Eastern countries and Brazil would elicit less suspicion and thus less chance of electronic eavesdropping.
  • Personnel overlap: Crime and terror groups may recruit the same high-tech specialists to their cause. Even with the ability to encrypt messages, criminals of all kinds may still rely on outsiders to carry them. Smuggling networks all have operatives who can act as couriers, and terrorists have networks of sympathizers in ethnic diasporas who can also help.

Watch Point 4: Use of information technology (IT)

 

Organized crime has devised IT-based fraud schemes such as online gambling, securities fraud, and pirating of intellectual property. Such schemes appeal to terror groups, too, particularly given the relative anonymity that digital transactions offer. Investigators of the Bali nightclub bombing of 2002 found that the laptop computer of the ringleader, Imam Samudra, contained a primer he had authored on using online fraud to finance operations. Evidence of terror groups’ involvement in such schemes is a significant indicator of cooperation or overlap.

Indicators of possible cooperation or nodes of interaction include:

  • Fundraising: Online fraud schemes and other uses of IT for obtaining ill-gotten gains are already well established among organized crime groups, and terrorists are following suit. Such IT-assisted criminal activities serve as another node of overlap for crime and terror groups, and thus expand the area of observation beyond the brick-and-mortar realm into cyberspace (i.e., investigators now expect to find evidence of collaboration on the Internet or in email as much as through telephone calls or postal services).

  • Use of technical experts: While no evidence exists that criminals and terrorists have directly cooperated to conduct cybercrime or cyberterrorism, they are often served by the same technical experts.

Watch Point 5: Violence

 

Violence is not so much a tactic of terrorists as their defining characteristic. These acts of violence are designed to obtain publicity for the cause, to create a climate of fear, or to provoke political repression, which they hope will undermine the legitimacy of the authorities. Terrorist attacks are deliberately highly visible in order to enhance their impact on the public consciousness. Indiscriminate violence against innocent civilians is therefore more readily ascribed to terrorism.

No examples exist, so far, where terrorists have engaged criminal groups to carry out violent acts.

A more significant challenge lies in trying to discern generalities about organized crime’s patterns of violence. Categorizing patterns of violence according to their scope or their promulgation is suspect. In the past, crime groups have used violence selectively and quietly to achieve some goals, but have also used violence broadly and loudly to achieve others. Neither can one categorize organized crime’s violence according to goals, as social, political, and economic considerations often overlap in any given attack or campaign.

Violence is therefore an important watch point that may not yield specific indicators of crime-terror interaction per se but can serve to frame the likelihood that an area might support terror-crime interaction.

Watch Point 6: Use of corruption

 

Both terrorists and organized criminals bribe government officials to undermine the work of law enforcement and regulation. Corrupt officials assist criminals by exerting pressure on businesses that refuse to cooperate with organized crime groups, or by providing passports for terrorists. The methods of corruption are diverse on both sides and include payments, the provision of illegal goods, the use of compromising information to extort cooperation, and outright infiltration of a government agency or other target.

Many studies have demonstrated that organized crime groups often evolve in places where the state cannot guarantee law and order or provide basic health care, education, and social services. The absence of effective law enforcement combines with rampant corruption to make well-organized criminals nearly invulnerable.

Colombia may be the only example of a conflict zone where a major transnational crime group with very large profits is directly and openly connected to terrorists. The interaction between the FARC and ELN terror groups and the drug syndicates provides crucial financial resources for the guerillas to operate against the Colombian state – and against one another. This is facilitated by universal corruption, from top government officials to local police. Corruption has served as the foundation for the growth of the narcotics cartels and insurgent/terrorist groups.

In the search for indicators, it would be simplistic to look for a high level of corruption, particularly in conflict zones. Instead, we should pose a series of questions:

  • Cooperation Are terrorist and criminal groups working together to minimize cost and maximize leverage from corrupt individuals and institutions?

  • Division of labor Are terrorist and criminal groups purposefully corrupting the areas they have most contact with? In the case of crime groups, that would be law enforcement and the judiciary; in the case of terrorists, the intelligence and security services.

  • Autonomy Are corruption campaigns carried out by one or both groups completely independent of the other?

These indicators can be applied to analyze a number of potential targets of corruption. Personnel that can provide protection or services are often mentioned as targets of corruption. Examples include law enforcement, the judiciary, border guards, politicians and elites, internal security agents, and consular officials. Economic aid and foreign direct investment are also targeted as sources of funds by criminals and terrorists, who can access them by means of corruption.

 

Watch Point 7: Financial transactions & money laundering

 

Despite the different purposes behind their respective uses of financial institutions (organized crime seeks to turn illicit funds into licit funds; terrorists seek to move licit funds to use them for illicit ends), the two types of groups tend to share a common infrastructure for carrying out their financial activities. Both need reliable means of moving and laundering money in many different jurisdictions, and as a result both use similar methods to move money internationally. Both use charities and front groups as cover for money flows.

Possible indicators include:

  • Shared methods of money laundering
  • Mutual use of known front companies and banks, as well as financial experts.
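The two indicators above amount to computing overlaps between what is known about each group's financial infrastructure. A minimal sketch, in which the group profiles, method names, and facilitator names are all invented for illustration:

```python
# Hypothetical profiles of one criminal and one terrorist group, each listing
# observed laundering methods and known financial facilitators.
crime_group = {
    "methods": {"trade-based laundering", "bulk cash smuggling", "shell companies"},
    "facilitators": {"Front Co A", "Bank X", "Accountant Y"},
}
terror_group = {
    "methods": {"trade-based laundering", "charity skimming", "hawala"},
    "facilitators": {"Charity Z", "Bank X"},
}

def overlap_indicators(a, b):
    """Return the shared-method and shared-facilitator indicators."""
    return {
        "shared_methods": a["methods"] & b["methods"],
        "shared_facilitators": a["facilitators"] & b["facilitators"],
    }

print(overlap_indicators(crime_group, terror_group))
# A shared method plus a shared bank marks a node of interaction worth pursuing.
```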

Watch Point 8: Organizational structures

 

The traditional model of organized crime used by U.S. law enforcement is that of the Sicilian Mafia – a hierarchical, conservative organization embedded in the traditional social structures of southern Italy… among today’s organized crime groups the Sicilian mafia is more of an exception than the rule.

Most organized crime now operates not as a hierarchy but as a decentralized, loose-knit network – which is a crucial similarity to terror groups. Networks offer better security, make intelligence-gathering more efficient, cover geographic distances and span diverse memberships more effectively.
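Modeling both organizations as networks makes the comparison concrete: a member who appears in both networks is a candidate bridge between crime and terror. A toy sketch, with hypothetical node labels standing in for individuals:

```python
# Each network is an adjacency map: member -> set of direct associates.
# Labels are hypothetical placeholders, not real persons.
crime_net = {"A": {"B", "C"}, "B": {"A"}, "C": {"A", "D"}, "D": {"C"}}
terror_net = {"D": {"E"}, "E": {"D", "F"}, "F": {"E"}}

def bridge_members(net1, net2):
    """Members present in both networks: candidate crime-terror bridges."""
    return set(net1) & set(net2)

print(bridge_members(crime_net, terror_net))  # member "D" spans both networks
```

Real link analysis would of course weight ties and tolerate aliases, but the underlying operation, looking for shared or adjacent nodes across two loose networks, is the same.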

Membership dynamics Both terror and organized crime groups – with the exception of the Sicilian Mafia and other traditional crime groups (e.g., the Yakuza) – are made up of members with loose, relatively short-term affiliations to each other and even to the group itself. They can readily be recruited by other groups. By this route, criminals have become terrorists.

Scope of organization Terror groups need to make constant efforts to attract and recruit new members. Obvious attempts to attract individuals from crime groups are a clear indication of cooperation. An intercepted May 2004 phone conversation involving a suspected terrorist named Rabei Osman Sayed Ahmed revealed his recruitment tactics: “You should also know that I have met other brothers, that slowly I have created with a few things. First, they were drug pushers, criminals, I introduced them to the faith and now they are the first ones who ask when the moment of the jihad will be…”

Need to buy, wish to sell Often the business transactions between the two sides operate in both directions. Terrorist groups are not just customers for the services of organized crime, but often act as suppliers, too. Arms supply by terrorists is particularly marked in certain conflict zones. Thus, any criminal group found to be supplying outsiders with goods or services should be investigated for its client base too.

The Islamic radical cell that planned the Madrid train bombings of 2004 was required to support itself financially through a business venture despite its initial funding by Al Qaeda. Investigators who discovered the cell’s money laundering were thereby able to learn more about its terrorist activities as well.

Watch Point 9: Organizational goals

 

In theory, their different goals are what set terrorists apart from the perpetrators of organized crime. Terrorist groups are most often associated with political ends, such as change in leadership regimes or the establishment of an autonomous territory for a subnational group. Even millenarian and apocalyptic terrorist groups, such as the science-fiction mystics of Aum Shinrikyo, often include some political objectives. Organized crime, on the other hand, is almost always focused on personal enrichment.

By cataloging the different – and shifting – goals of terror and organized crime groups, we can develop indicators of convergence or divergence. This will help identify shared aspirations or areas where these aims might bring the two sides into conflict. On this basis, investigators can ask what conditions might prompt either side to adopt new goals or to fall back to basic goals, such as self-preservation.

Long view or short-termism

Affiliations of protagonists

 

Watch Point 10: Culture

 

Both terror and criminal groups use ideologies to maintain their internal identity and provide external justifications for their activities. Religious terror groups adopt and may alter the teachings of religious scholars to suggest divine support for their cause, while Italian, Chinese, Japanese, and other organized crime groups use religious and cultural themes to win public acceptance. Both types use ritual and tradition to construct and maintain their identity. Tattoos, songs, language, and codes of conduct are symbolic to both.

Religious affiliations, strong nationalist sentiments and strong roots in the local community are often characteristics that cause organized criminals to shun any affiliation with terrorists. Conversely, the absence of such affiliations means that criminals have fewer constraints keeping them from a link with terrorists.

In any organization, culture connects and strengthens ties between members. For networks, cultural features can also serve as a bridge to other networks.

  • Religion Many criminal and terrorist groups feature religion prominently.
  • Nationalism Ethno-nationalist insurgencies and criminal groups with deep historical roots are particularly likely to play the nationalist card.
  • Society Many criminal and terrorist networks adapt cultural aspects of the local and regional societies in which they operate to include local tacit knowledge, as contained in narrative traditions. Manuel Castells notes the attachment of drug traffickers to their country, and to their regions of origin. “They were/are deeply rooted in their cultures, traditions, and regional societies. …they have also revived local cultures, rebuilt rural life, strongly affirmed their religious feeling, and their beliefs in local saints and miracles, supported musical folklore (and were rewarded with laudatory songs from Colombian bards)…”

Watch Point 11: Popular support

 

Both organized crime and terrorist groups engage legitimate society in furtherance of their own agendas. In conflict zones, this may be done quite openly, while under the rule of law they are obliged to do so covertly. One way of doing so is to pay lip service to the interests of certain ethnic groups or social classes. Organized crime is particularly likely to appeal to disadvantaged people or people in certain professions through paternalistic actions that make it a surrogate for the state. For instance, the Japanese Yakuza crime groups provided much-needed assistance to the citizens of Kobe after the serious earthquake there. Russian organized crime habitually supports cultural groups and sports troupes.

 

Both crime and terror derive crucial power and prestige through the support of their members and of some segment of the public at large. This may reflect enlightened self-interest, when people see that the criminals are acting on their behalf and improving their well-being and personal security. But it is equally likely that people are simply afraid to resist a violent criminal group in their neighborhood.

This quest for popular support and common cause suggests various indicators:

  • Sources Terror groups seek and sometimes obtain the assistance of organized crime based on the perceived worthiness of the terrorist cause, or because of their common cause against state authorities or other sources of opposition. In testimony before the U.S. House Committee on International Relations, Interpol Secretary General Ronald Noble made this point. One of his examples was that Lebanese syndicates in South America send funds to Hezbollah.
  • Means Groups that cooperate may have shared activities for gaining popular support such as political parties, labor movements, and the provision of social services.
  • Places In conflict zones where the government has lost authority to criminal groups, social welfare and public order might be maintained by the criminal groups that hold power.

 

Watch Point 12: Trust

Like business corporations, terrorist and organized crime groups must attract and retain talented, dedicated, and loyal personnel. Such personnel are at an even greater premium than in the legitimate economy because criminals cannot recruit openly. A further challenge is that law enforcement and intelligence services are constantly trying to infiltrate and dismantle criminal networks. Members’ allegiance to any such group is constantly tested and demonstrated through rituals such as initiation rites…

We propose three forms of trust in this context, using as a basis Newell and Swan’s model for interpersonal trust within commercial and academic groups.94

Companion trust based on goodwill or personal friendships… In this context, indicators of terror-crime interaction would be when members of the two groups use personal bonds based on family, tribe, and religion to cement their working relationship. Efforts to recruit known associates of the other group, or in common recruiting pools such as diasporas, would be another indicator.

Competence trust, which Newell and Swan define as the degree to which one person depends upon another to perform the expected task.

Commitment or contract trust, where all actors understand the practical importance of their role in completing the task at hand.

  7. Case studies

7.1. The Tri-Border Area of Paraguay, Brazil, and Argentina

Chinese Triads such as the Fuk Ching, Big Circle Boys, and Flying Dragons are well established and believed to be the main force behind organized crime in Ciudad del Este (CDE).

CDE is also a center of operations for several terrorist groups, including Al Qaeda, Hezbollah, Islamic Jihad, Gamaa Islamiya, and FARC.

Watch points

Crime and terrorism in the Tri-Border Area interact seamlessly, making it difficult to draw a clean line between the types of persons and groups involved in each of these two activities. There is no doubt, however, that the social and economic conditions allow groups that are originally criminal in nature and groups whose primary purpose is terrorism to function and interact freely.

Organizational structure

Evidence from CDE suggests that some of the local structures used by both groups are highly likely to overlap. There is no indication, however, of any significant organizational overlap between the criminal and terrorist groups. Their cooperation, when it exists, is ad hoc and without any formal or lasting agreements, i.e., activity appropriation and nexus forms only.

Organizational goals

In this region, the short-term goals of criminals and terrorists converge. Both benefit from easy border crossings and the networks necessary to raise funds.

Culture Cultural affinities between criminal and terrorist groups in the Tri-Border Area include shared ethnicities, languages and religions.

It emerged that 400 to 1,000 kilograms of cocaine may have been shipped on a monthly basis through the Tri-Border Area on its way to Sao Paulo and thence to the Middle East and Europe.

Numerous arrests revealed the strong ties between entrepreneurs in CDE and criminal and potentially terrorist groups. From the evidence in CDE it seems that the two phenomena operate in rather separate cultural realities, focusing their operations within their own ethnic groups. Culture, however, does not serve as a major hindrance to cooperation between organized crime and terrorists.

Illicit activities and subterfuge

The evidence in CDE suggests that terrorists see it as logical and cost-effective to use the skills, contacts, communications and smuggling routes of established criminal networks rather than trying to gain the requisite experience and knowledge themselves. Likewise, terrorists appear to recognize that to strike out on their own risks potential turf conflicts with criminal groups.

There is a clear link between Hong Kong-based criminal groups that specialize in large-scale trafficking of counterfeit products such as music albums and software, and the Hezbollah cells active in the Tri-Border Area. Within their supplier-customer relationship, the Hong Kong crime groups smuggle contraband goods into the region and deliver them to Hezbollah operatives, who in turn profit from their sale. The proceeds are then used to fund the terrorist groups.

Open activities in the legitimate economy

The knowledge and skills potential of CDE is tremendous. While no specific examples exist to connect terrorist and criminal groups through the purchase of legal goods and services, it is obvious that the likelihood of this is high, given how the CDE economy is saturated with organized crime.

Support or sustaining activities

The Tri-Border Area has an unusually large and efficient transport infrastructure, which naturally assists organized crime. In turn, the many criminals and terrorists operating under cover require a sophisticated and reliable document forgery industry. The ease with which these documents can be obtained in CDE is an indicator of cooperation between terrorists and criminals.

Brazilian intelligence services have evidence that Osama bin Laden visited CDE in 1995 and met with members of the Arab community in the city’s mosque to talk about his experience as a mujahideen fighter in the Afghan war against the Soviet Union.

Use of violence

Contract murder in CDE costs as little as one thousand dollars, and the frequent violence in CDE is directed at business people who refuse to bend to extortion by terror groups. Ussein Mohamed Taiyen, president of the CDE Chamber of Commerce, was one such victim—murdered because he refused to pay the tax.

Financial transactions and money laundering

In 2000, money laundering in the Tri-Border Area was estimated at 12 billion U.S. dollars annually.

As many as 261 million U.S. dollars annually has been raised in Tri-Border Area and sent overseas to fund the terrorist activities of Hezbollah, Hamas, and Islamic Jihad.

Use of corruption

Most of the illegal activities in the Tri-Border Area bear the hallmark of corruption. In combination with the generally low effectiveness of state institutions, especially in Paraguay, and high level of corruption in that country, CDE appears to be a perfect environment for the logistical operations of both terrorists and organized criminals.

Even the few bona fide anti-corruption attempts made by the Paraguayan government have been undermined by pervasive corruption; one example is the attempted crackdown on the Chinese criminal groups in CDE. The Consul General of Taiwan in CDE, Jorge Ho, stated that the Chinese groups were successful in bribing Paraguayan judges, effectively neutralizing law enforcement moves against the criminals.122

The other watch points described earlier – including fundraising and use of information technology – can also be illustrated with similar indicators of possible cooperation between terror and organized crime.

In sum, for the investigator or analyst seeking examples of perfect conditions for such cooperation, the Tri-Border Area is an obvious choice.

7.2. Crime and terrorism in the Black Sea region

Illicit or veiled operations Cigarette, drug, and arms smuggling have been major sources of financing for all the terrorist groups in the region.

Cigarette and alcohol smuggling has fueled the Kurdish-Turkish conflict as well as the terrorist violence in both the Abkhaz and Ossetian conflicts.

From the very beginning, the Chechen separatist movement had close ties with Chechen crime rings in Russia, operating mainly in Moscow. These crime groups provided, and some still provide, financial support for the insurgents.

  8. Conclusion and recommendations

The many examples in this report of cooperation between terrorism and organized crime make clear that the links between these two potent threats to national and global security are widespread, dynamic, and dangerous. It is only rational to consider the possibility that an effective organized crime group may have a connection with terrorists that has gone unnoticed so far.

Our key conclusion is that crime is not a peripheral issue when it comes to investigating possible terrorist activity. Efforts to analyze the phenomenon of terrorism without considering the crime component undermine all counter-terrorist activities, including those aimed at protecting sites containing weapons of mass destruction.

Yet the staffs of intelligence and law enforcement agencies in the United States are already overwhelmed. Their common complaint is that they do not have the time to analyze the evidence they possess, or to eliminate unnecessary avenues of investigation. The problem is not so much a dearth of data, but the lack of suitable tools to evaluate that data and make optimal decisions about when, and how, to investigate further.

Scrutiny and analysis of the interaction between terrorism and organized crime will become a matter of routine best practice. Awareness of the different forms this interaction takes, and the dynamic relationship between them, will become the basis for crime investigations, particularly for terrorism cases.

In conclusion, our overarching recommendation is that crime analysis must be central to understanding the patterns of terrorist behavior and cannot be viewed as a peripheral issue.

For policy analysts:

  1. More detailed analysis of the operation of illicit economies where criminals and terrorists interact would improve understanding of how organized crime operates, and how it cooperates with terrorists. Domestically, more detailed analysis of the businesses where illicit transactions are most common would help investigation of organized crime – and its affiliations. More focus on the illicit activities within closed ethnic communities in urban centers and in prisons in developed countries would prove useful in addressing potential threats.
  2. Corruption overseas, which is so often linked to facilitating organized crime and terrorism, should be elevated to a U.S. national security concern with an operational focus. After all, many jihadists are recruited because they are disgusted with the corrupt governments in their home countries. Corruption has facilitated the commission of criminal acts such as the Chechen suicide bombers who bribed airport personnel to board aircraft in Moscow.
  3. Analysts must study patterns of organized crime-terrorism interaction as guidance for what may be observed subsequently in the United States.
  4. Intelligence and law enforcement agencies need more analysts with the expertise to understand the motivations and methods of criminal and terrorist groups around the globe, and with the linguistic and other skills to collect and analyze sufficient data.

For investigators:

  1. The separation between criminals and terrorists is not always as clear-cut as many investigators believe. Criminal and terrorist groups are often indistinguishable in conflict zones and in prisons.
  2. The hierarchical structure and conservative habits of the Sicilian Mafia no longer serve as an appropriate model for organized crime investigations. Most organized crime groups now operate as loose networked affiliations. In this respect they have more in common with terrorist groups.
  3. The PIE method provides a series of indicators that can result in superior profiles and higher-quality risk analysis for law enforcement agencies both in the United States and abroad. The approach can be refined with sensitive or classified information.
  4. Greater cooperation between the military and the FBI would allow useful sharing of intelligence, such as the substantial knowledge on crime and illicit transactions gleaned by the counterintelligence branch of the U.S. military that is involved in conflict regions where terror-crime interaction is most profound.
  5. Law enforcement personnel must develop stronger working relationships with the business sector. In the past, there has been too little cognizance of possible terrorist-organized crime interaction among the clients of private-sector business corporations and banks. Law enforcement must pursue evidence of criminal affiliations with high status individuals and business professionals who are often facilitators of terrorist financing and money laundering. In the spirit of public-private partnerships, corporations and banks should be placed under an obligation to watch for indications of organized crime or terrorist activity by their clients and business associates. Furthermore, they should attempt to analyze what they discover and to pass on their assessment to law enforcement.
  6. Law enforcement personnel posted overseas by federal agencies such as the DEA, the Department of Justice, the Department of Homeland Security, and the State Department’s Bureau of International Narcotics and Law Enforcement should be tasked with helping to develop a better picture of the geography of organized crime and its most salient features (i.e., the watch points of the PIE approach). This should be used to assist analysts in studying patterns of crime behavior that put American interests at risk overseas and alert law enforcement to crime patterns that may shortly appear in the U.S.
  7. Training for law enforcement officers at federal, state, and local level in identifying authentic and forged passports, visas, and other documents required for residency in the U.S. would eliminate a major shortcoming in investigations of criminal networks.

A.1 Defining the PIE Analytical Process

In order to begin identifying the tools to support the analytical process, the process of analysis itself first had to be captured. The TraCCC team adopted Max Boisot’s (2003) I-Space as a representation for describing the analytical process. As Figure A-1 illustrates, I-Space provides a three-dimensional representation of the cognitive steps that constitute analysis in general and the utilization of the PIE methodology in particular. The analytical process is reduced to a series of logical steps, with one step feeding the next until the process starts anew. The steps are:

  1. Scanning
  2. Codification
  3. Abstraction
  4. Diffusion
  5. Validation
  6. Impacting

Over time, repeated iterations of these steps result in more and more PIE indicators being identified, more information being gathered, more analytical product being generated, and more recommendations being made. Boisot’s I-Space is described below in terms of law enforcement and intelligence analytical processes.

A.1.1. Scanning

The analytical process begins with scanning, which Boisot defines as the process of identifying threats and opportunities in generally available but often fuzzy data. For example, investigators often scan available news sources, organizational data sources (e.g., intelligence reports) and other information feeds to identify patterns or pieces of information that are of interest. Sometimes this scanning is performed with a clear objective in mind (e.g., set up through profiles to identify key players). From a tools perspective, scanning with a focus on a specific entity like a person or a thing is called a subject-based query. At other times, an investigator is simply reviewing incoming sources for pieces of a puzzle that is not well understood at that moment. From a tools perspective, scanning with a focus on activities like money laundering or drug trafficking is called a pattern-based query. For this type of query, a specific subject is not the target, but a sequence of actors/activities that form a pattern of interest.

Many of the tools described herein focus on either:

o Helping an investigator build models for these patterns and then comparing those models against the data to find ‘matches’, or

o Supporting automated knowledge discovery where general rules about interesting patterns are hypothesized and then an automated algorithm is employed to search through large amounts of data based on those rules.

The choice between subject-based and pattern-based queries is dependent on several factors including the availability of expertise, the size of the data source to be scanned, the amount of time available and, of course, how well the subject is understood and anticipated. For example, subject-based queries are by nature more tightly focused and thus are often best conducted through keyword or Boolean searches, such as a Google search containing the string “Bin Laden” or “Abu Mussab al-Zarqawi.” Pattern-based queries, on the other hand, support a relationship/discovery process, such as an iterative series of Google searches starting at ‘with all of the words’ terrorist, financing, charity, and hawala, proceeding through ‘without the words’ Hezbollah and Iran and culminating in ‘with the exact phrase’ Al Qaeda Wahabi charities. Regardless of which is employed, the results provide new insights into the problem space. The construction, employment, evaluation, and validation of results from these various types of scanning techniques will provide a focus for our tool exploration.
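The contrast between the two query styles can be sketched in a few lines of code. The corpus, search terms, and function names below are hypothetical illustrations, not part of any tool described in this report:

```python
# Minimal sketch contrasting subject-based and pattern-based queries.
# The corpus and all match terms are hypothetical examples.

corpus = [
    "charity wire transfer routed through a hawala broker",
    "press release on terrorist financing via charity fronts",
    "Hezbollah statement on regional politics",
    "report: terrorist financing, charity donations and hawala remittances",
]

def subject_query(docs, phrase):
    """Subject-based query: a tightly focused exact-phrase match."""
    return [d for d in docs if phrase in d]

def pattern_query(docs, all_of=(), none_of=()):
    """Pattern-based query: iterative 'with all of the words' /
    'without the words' filtering, as in an advanced web search."""
    hits = []
    for d in docs:
        if all(w in d for w in all_of) and not any(w in d for w in none_of):
            hits.append(d)
    return hits

print(subject_query(corpus, "hawala"))
print(pattern_query(corpus,
                    all_of=("terrorist", "financing", "charity"),
                    none_of=("Hezbollah",)))
```

The subject-based call returns only documents mentioning the exact term, while the pattern-based call narrows a broader sweep step by step, mirroring the iterative search process described above.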

A.1.2. Codification

In order for the insights that result from scanning to be of use to the investigator, they must be placed into the context of the questions that the investigator is attempting to answer. This context provides structure through a codification process that turns disconnected patterns into coherent thoughts that can be more easily communicated to the community. The development of indicators is an example of this codification. Building up network maps from entities and their relationships is another example that could support indicator development. Some important tools will be described that support this codification step.

A.1.3. Abstraction

During the abstraction phase, investigators generalize the application of newly codified insights to a wider range of situations, moving from the specific examples identified during scanning and codification towards a more abstract model of the discovery (e.g., one that explains a large pattern of behavior or predicts future activities). Indicators are placed into the larger context of the behaviors that are being monitored. Tools that support the generation and maintenance of such models will be key to making the analysis of an overwhelming number of possibilities and unlimited information manageable.

A.1.4. Diffusion

Many of the intelligence failures cited in the 9/11 Report were due to the fact that information and ideas were not shared. This was due to a variety of reasons, not the least of which were political. Technology also built barriers to cooperation, however. Information can only be shared if one of two conditions is met. Either the sender and receiver must share a context (a common language, background, understanding of the problem) or the information must be coded and abstracted (see steps 2 and 3 above) to extract it from the personal context of the sender to one that is generally understood by the larger community. Once this is done, the newly created insights of one investigator can be shared with investigators in sister groups.

The technology for the diffusion itself is available through any number of sources ranging from repositories where investigators can share information to real-time on-line cooperation. Tools that take advantage of this technology include distributed databases, peer-to-peer cooperation environments and real-time meeting software (e.g., shared whiteboards).

A.1.5. Validation

In this step of the process, the hypotheses that have been formed and shared are now validated over time, either by a direct match of the data against the hypotheses (i.e., through automation) or by working towards a consensus within the analytical community. Some hypotheses will be rejected, while others will be retained and ranked according to probability of occurrence. In either case, tools are needed to help make this match and form this consensus.

A.1.6. Impacting

Simply validating a set of hypotheses is not enough. If the intelligence gathering community stops at that point, the result is a classified CNN feed to the policy makers and practitioners. The results of steps 1 through 5 must be mapped against the opposing landscape of terrorism and transnational crime in order to understand how the information impacts the decisions that must be taken. In this final step, investigators work to articulate how the information/hypotheses they are building impact the overall environment and make recommendations on actions (e.g., probes) that might be taken to clarify that environment. The consequences of the actions taken as a result of the impacting phase are then identified during the scanning phase and the cycle begins again.

A.1.7. An Example of the PIE Analytical Approach

While section 4 provided some real-life examples of the PIE approach in action, a retrodictive analysis of terror-crime cooperation in the extraction, smuggling, and sale of conflict diamonds provides a grounding example of Boisot’s six-step analytical process. Diamonds from West Africa had been a source of funding for various factions in the Lebanese civil war since the 1980s. Beginning in the late 1990s, intelligence, law enforcement, regulatory, non-governmental, and press reports suggested that individuals linked to transnational criminal smuggling and Middle Eastern terrorist groups were involved in Liberia’s illegal diamond trade. We would expect to see the following from an investigator assigned to track terrorist financing:

  1. Scanning: During this step investigators could have assembled fragmentary reports to reveal crude patterns that indicated terror-crime interaction in a specific region (West Africa), involving two countries (Liberia and Sierra Leone) and trade in illegal diamonds.
  2. Codification: Based on patterns derived from scanning, investigators could have codified the terror-crime interaction by developing explicit network maps that showed linkages between Russian arms dealers, Russian and South American organized crime groups, Sierra Leone insurgents, the government of Liberia, Al Qaeda, Hezbollah, Lebanese and Belgian diamond merchants, and banks in Cyprus, Switzerland, and the U.S.
  3. Abstraction: The network map developed via codification is essentially static at this point. Utilizing social network analysis techniques, investigators could have abstracted this basic knowledge to gain a dynamic understanding of the conflict diamond network. A calculation of degree, betweenness, and closeness centrality of the conflict diamond network would have revealed those individuals with the most connections within the network, those who were the links between various subgroups within the network, and those with the shortest paths to reach all of the network participants. These calculations would have revealed that all the terrorist links in the conflict diamond network flowed through Ibrahim Bah, a Libyan-trained Senegalese who had fought with the mujahideen in Afghanistan and whom Charles Taylor, then President of Liberia, had entrusted to handle the majority of his diamond deals. Bah arranged for terrorist operatives to buy all diamonds possible from the RUF, the Charles Taylor-supported rebel army that controlled much of neighboring civil-war-torn Sierra Leone. The same calculations would have delineated Taylor and his entourage as the key link to transnational criminals in the network, and the link between Bah and Taylor as the essential mode of terror-crime interaction for purchase and sale of conflict diamonds.
  4. Diffusion: Disseminating the results of the first three analytical steps in this process could have alerted investigators in other domestic and foreign law enforcement and intelligence agencies to the emergent terror-crime nexus involving conflict diamonds in West Africa. Collaboration between various security services at this juncture could have revealed Al Qaeda’s move into commodities such as diamonds, gold, tanzanite, emeralds, and sapphires in the wake of the Clinton Administration’s freezing of 240 million dollars belonging to Al Qaeda and the Taliban in Western banks in the aftermath of the August 1998 attacks on the U.S. embassies in Kenya and Tanzania. In particular, diffusion of the parameters of the conflict diamond network could have allowed investigators to tie Al Qaeda fund-raising activities to a Belgian bank account that contained approximately 20 million dollars of profits from conflict diamonds.
  5. Validation: Having linked Al Qaeda, Hezbollah, and multiple organized crime groups to the trade in conflict diamonds smuggled into Europe from Sierra Leone via Liberia, investigators would have been able to draw operational implications from the evidence amassed in the previous steps of the analytical process. For example, Al Qaeda diamond purchasing behavior changed markedly. Prior to July 2001 Al Qaeda operatives sought to buy low in Africa and sell high in Europe so as to maximize profit. Around July they shifted to a strategy of buying all the diamonds they could and offering the highest prices required to secure the stones. Investigators could have contrasted these buying patterns and hypothesized that Al Qaeda was anticipating events which would disrupt other stores of value, such as financial instruments, as well as bring more scrutiny of Al Qaeda financing in general.
  6. Impacting: In the wake of the 9/11 attacks, the hypothesis that Al Qaeda engaged in asset shifting prior to those strikes similar to that undertaken in 1999 has gained significant validity. During this final step in the analytical process, investigators could have created a watch point involving a terror-crime nexus associated with conflict diamonds in West Africa, and generated the following indicators for use in future investigations:
  • Financial movements and expenditures as attack precursors;
  • Money as a link between known and unknown nodes;
  • Changes in the predominant patterns of financial activity;
  • Criminal activities of a terrorist cell for direct or indirect operational support;
  • Surge in suspicious activity reports.
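The centrality calculations described in the abstraction step can be illustrated with a small sketch. The edge list below is a hypothetical toy version of the conflict diamond network, not sourced data, and only degree and closeness centrality are shown (betweenness, which requires a Brandes-style algorithm, is omitted for brevity):

```python
from collections import deque

# Toy, illustrative network loosely modeled on the conflict-diamond case;
# the edge list is hypothetical, not sourced data.
edges = [
    ("Bah", "Taylor"), ("Bah", "AQ buyer"), ("Bah", "Hezbollah buyer"),
    ("Bah", "RUF"), ("Taylor", "Arms dealer"), ("Taylor", "Diamond merchant"),
]

graph = {}
for a, b in edges:
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def degree_centrality(g):
    """Number of direct connections per node."""
    return {n: len(nbrs) for n, nbrs in g.items()}

def closeness_centrality(g, node):
    """Inverse of the average shortest-path distance from `node` to
    every other node, computed by breadth-first search."""
    dist = {node: 0}
    q = deque([node])
    while q:
        cur = q.popleft()
        for nbr in g[cur]:
            if nbr not in dist:
                dist[nbr] = dist[cur] + 1
                q.append(nbr)
    total = sum(dist.values())
    return (len(g) - 1) / total if total else 0.0

hub = max(graph, key=lambda n: len(graph[n]))
print(hub, degree_centrality(graph)[hub], round(closeness_centrality(graph, hub), 2))
```

Even on this toy data, the hub with the highest degree and closeness scores is the broker node, which is exactly the kind of structural finding the abstraction step is meant to surface.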

A.2. The tool space

The key to successful tool application is understanding what type of tool is needed for the task at hand. In order to better characterize the tools for this study, we have divided the tool space into three dimensions:

  • An abstraction dimension: This continuum focuses on tools that support the movement of concepts from the concrete to the abstract. Building models is an excellent example of moving concrete, narrow concepts to a level of abstraction that can be used by investigators to make sense of the past and predict the future.
  • A codification dimension: This continuum attaches labels to concepts that are recognized and accepted by the analytical community to provide a common context for grounding models. One end of the spectrum is the local labels that individual investigators assign and perhaps only that they understand. The other end of the spectrum is the community-accepted labels (e.g., commonly accepted definitions that will be understood by the broader analytical community). As we saw earlier, concepts must be defined in community-recognizable labels before the community can begin to cooperate on those concepts.
  • The number of actors: This last continuum is framed in terms of the number of actors who are involved with a given concept within a certain time frame. Actors could include individual people, groups, and even automated software agents. Understanding the number of actors involved with the analysis will play a key role in determining what type of tool needs to be employed.

Although they may appear to be performing the same function, abstraction and codification are not the same. An investigator could build a set of models (moving from concrete to abstract concepts) but not take the step of changing his or her local labels. The result would be an abstracted model of use to the single investigator, but not to a community working from a different context. For example, one investigator could model a credit card theft ring as a petty crime network under the loose control of a traditional organized crime family, while another investigator could model the same group as a terrorist logistic support cell.

The analytical process described above can now be mapped into the three-dimensional tool space, represented graphically in Figure A-1. So, for example, scanning (step 1) is placed in the portion of the tool space that represents an individual working in concrete terms without those terms being highly codified (e.g., queries). Validation (step 5), on the other hand, requires the cooperation of a larger group working with abstract, highly codified concepts.

A.2.1. Scanning tools

Investigators responsible for constructing and monitoring a set of indicators could begin by scanning available data sources – including classified databases, unclassified archives, news archives, and internet sites – for information related to the indicators of interest. As can be seen from exhibit 6, all scanning tools will need to support requirements dictated by where these tools fall within the tool space. Scanning tools should focus on:

  • How to support an individual investigator as opposed to the collective analytical community. Investigators, for the most part, will not be performing these scanning functions as a collaborative effort;
  • Uncoded concepts: since the investigator is scanning for information that is directly related to a specific context (e.g., money laundering), he or she will need to be intimately familiar with the terms that are local (uncoded) to that context;
  • Concrete concepts or, in this case, specific examples of people, groups, and circumstances within the investigator’s local context. In other words, if the investigator attempts to generalize at this stage, much could be missed.

Using these criteria as a background, and leveraging state-of-the-art definitions for data mining, scanning tools fall into two basic categories:

  • Tools that support subject-based queries are used by investigators when they are searching for specific information about people, groups, places, events, etc.; and
  • Tools that support pattern-based queries are used by investigators who are interested not so much in individuals as in identifying patterns of activities.

This section briefly describes the functionality in general, as well as providing specific tool examples, to support both of these critical types of scanning.

A.2.1.1. Subject-based queries

Subject-based queries are the easiest to perform and the most popular. Examples of tools that are used to support subject-based queries are Boolean search tools for databases and internet search engines.

Several functionalities should be weighed when selecting subject-based query tools. First, they should be easy to use and intuitive for the investigator. Rather than facing a bewildering array of ‘ifs’, ‘ands’, and ‘ors’, the investigator should be presented with a query interface that matches his or her cognitive view of searching the data; the ideal is a natural language interface for constructing the queries. Second, they should provide a graphical interface whenever possible. One example might be a graphical interface that allows the investigator to define subjects of interest, then uses overlapping circles to indicate the interdependencies among the search terms. Furthermore, query interfaces should support synonyms, have an ability to ‘learn’ from the investigator based on specific interests, and create an archive of queries so that the investigator can return and repeat them. Finally, they should provide a profiling capability that alerts the investigator when new information is found on the subject.

Subject-based query tools fall into three categories: queries against databases, internet searches, and customized search tools. Examples of tools for each of these categories include:

  • Queries from news archives: All major news groups provide web-based interfaces that support queries against their on-line data sources. Most allow you to select the subject, enter keywords, specify date ranges, and so on. Examples include the New York Times (at http://www.nytimes.com/ref/membercenter/nytarchive.html) and the Washington Post (at http://pqasb.pqarchiver.com/washingtonpost/search.html). Most of these sources allow you to read through the current issue, but charge a subscription for retrieving articles from past issues.
  • Queries from on-line references: There are a host of on-line references now available for query that range from the Encyclopedia Britannica (at http://www.eb.com/) to the CIA’s World Factbook (at http://www.cia.gov/cia/publications/factbook/). A complete list of such references is impossible to include, but the search capabilities provided by each are clear examples of subject-based queries.
  • Search engines: Just as with queries against databases, there are a host of commercial search engines available for free-format internet searching. The most popular is Google, which combines a technique called citation indexing with web crawlers that constantly search out and index new web pages. Google broke the mold of free-format text searching by not focusing on exact matches between the search terms and the retrieved information. Rather, Google assumes that the most popular pages (the ones that are referenced the most often) that include your search terms will be the pages of greatest interest to you. The commercial version of Google is available free of charge on the internet, and organizations can also purchase a version of Google for indexing pages on an intranet. Google also works in many languages. More information about Google as a business solution can be found at http://www.google.com/services/. Although the current version of Google supports many of the requirements for subject-based queries, its focus is quick search and it does not support sophisticated query interfaces, natural language queries, synonyms, or a managed query environment where queries can be saved. There are now numerous software packages available that provide this level of support, many of them as add-on packages to existing applications.

o Name Search®: This software enables applications to find, identify and match information. Specifically, Name Search finds and matches records based on personal and corporate names, social security numbers, street addresses and phone numbers even when those records have variations due to phonetics, missing words, noise words, nicknames, prefixes, keyboard errors or sequence variations. Name Search claims that searches using their rule-based matching algorithms are faster and more accurate than those based only on Soundex or similar techniques. Soundex, developed by Odell and Russell, uses codes based on the sound of each letter to translate a string into a canonical form of at most four characters, preserving the first letter.

Name Search also supports foreign languages, technical data, medical information, and other specialized information. Other problem-specific packages take advantage of the Name Search functionality through an Application Programming Interface (API) (i.e., Name Search is bundled). An example is ISTwatch. See http://www.search-software.com/.
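The Soundex technique mentioned above is simple enough to sketch directly. The following is an illustrative implementation of the standard American Soundex rules (first letter preserved, H and W transparent, vowels separating duplicate codes, four-character output); it is not the matching code used by Name Search:

```python
# Illustrative American Soundex implementation: each surname is reduced
# to a first letter plus up to three digits coding consonant sounds.
CODES = {}
for letters, digit in (("BFPV", "1"), ("CGJKQSXZ", "2"), ("DT", "3"),
                       ("L", "4"), ("MN", "5"), ("R", "6")):
    for ch in letters:
        CODES[ch] = digit

def soundex(name):
    name = name.upper()
    result = name[0]                  # first letter is always preserved
    prev = CODES.get(name[0], "")
    for ch in name[1:]:
        if ch in "HW":
            continue                  # H and W do not separate duplicate codes
        code = CODES.get(ch, "")      # vowels map to "" and reset `prev`
        if code and code != prev:
            result += code
        prev = code
        if len(result) == 4:
            break
    return (result + "000")[:4]       # pad with zeros to four characters

print(soundex("Robert"), soundex("Rupert"), soundex("Ashcraft"))
```

Because “Robert” and “Rupert” reduce to the same code, a query for one retrieves records filed under the other, which is the phonetic-variation matching the paragraph above describes.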

o ISTwatch©: ISTwatch is a software component suite that was designed specifically to search and match individuals against the Office of Foreign Assets Control’s (OFAC’s) Specially Designated Nationals list and other denied parties lists. These include the FBI’s Most Wanted, Canada’s OSFI terrorist lists, the Bank of England’s consolidated lists and the Financial Action Task Force data on money-laundering countries. See http://www.intelligentsearch.com/ofac_software/index.html.

All these tools are packages designed to be included in an application. A final set of subject-based query tools focuses on customized search environments. These are tools that have been customized to perform a particular task or operate within a particular context. One example is WebFountain.

o WebFountain: IBM’s WebFountain began as a research project focused on extending subject-based query techniques beyond free format text to target money-laundering activities identified through web sources. The WebFountain project, a product of IBM’s Almaden research facility in California, used advanced natural language processing technologies to analyze the entire internet – the search covered 256 terabytes of data in the process of matching a structured list of people who were indicted for money laundering activities in the past with unstructured information on the internet. If a suspicious transaction is identified and the internet analysis finds a relationship between the person attempting the transaction and someone on the list, then an alert is issued. WebFountain has now been turned into a commercially available IBM product. Robert Carlson, IBM WebFountain vice president, describes the current content set as over 1 petabyte in storage with over three billion pages indexed, two billion stored, and the ability to mine 20 million pages a day. The commercial system also works across multiple languages. Carlson stated in 2003 that it would cover 21 languages by the end of 2004 [Quint, 2003]. See: http://www.almaden.ibm.com/webfountain

o Memex: Memex is a suite of tools that was created specifically for law enforcement and national security groups. The focus of these tools is to provide integrated search capabilities against both structured (i.e., databases) and unstructured (i.e., documents) data sources. Memex also provides a graphical representation of the process the investigator is following, structuring the subject-based queries. Memex’s marketing literature states that over 30 percent of the intelligence user population of the UK uses Memex. Customers include the Metropolitan Police Service (MPS), whose Memex network includes over 90 dedicated intelligence servers providing access to over 30,000 officers; the U.S. Department of Defense; numerous U.S. intelligence agencies; drug intelligence groups; and law enforcement agencies. See http://www.memex.com/index.shtml.

A.2.1.2. Pattern queries

Pattern-based queries focus on supporting automated knowledge discovery (1) where the exact subject of interest is not known in advance and (2) where what is of interest is a pattern of activity emerging over time. In order for pattern queries to be formed, the investigator must hypothesize about the patterns in advance and then use tools to confirm or deny these hypotheses. This approach is useful when there is expertise available to make reasonable guesses with respect to the potential patterns. Conversely, when that expertise is not available or the potential patterns are unknown due to extenuating circumstances (e.g., new patterns are emerging too quickly for investigators to formulate hypotheses), then investigators can automate the construction of candidate patterns by formulating a set of rules that describe how potentially interesting, emerging patterns might appear. In either case, tools can help support the production and execution of the pattern queries. The degree of automation is dependent upon the expertise available and the dynamics of the situation being investigated.

As indicated earlier, pattern-based query tools fall into two general categories: those that support investigators in the construction of patterns based on their expertise, then run those patterns against large data sets, and those that allow the investigator to build rules about patterns of interest and, again, run those rules against large data sets.

Examples of tools for each of these categories include:

  1. Megaputer (PolyAnalyst 4.6): This tool falls into the first category of pattern-based query tools, helping the investigator hypothesize patterns and explore the data based on those hypotheses. PolyAnalyst is a tool that supports a particular type of pattern-based query called Online Analytical Processing (OLAP), a popular analytical approach for large amounts of quantitative data. Using PolyAnalyst, the investigator defines dimensions of interest to be considered in text exploration and then displays the results of the analysis across various combinations of these dimensions. For example, an investigator could search for mujahideen who had trained at the same Al Qaeda camp in the 1990s and who had links to Pakistani Intelligence as well as opium growers and smuggling networks into Europe. See http://www.megaputer.com/.
  2. Autonomy Suite: Autonomy’s search capabilities fall into the second category of pattern-based query tools. Autonomy has combined technologies that employ adaptive pattern-matching techniques with Bayesian inference and Claude Shannon’s principles of information theory. Autonomy identifies the patterns that naturally occur in text, based on the usage and frequency of words or terms that correspond to specific ideas or concepts as defined by the investigator. Based on the preponderance of one pattern over another in a piece of unstructured information, Autonomy calculates the probability that a document in question is about a subject of interest [Autonomy, 2002]. See http://www.autonomy.com/content/home/
  3. Fraud Investigator Enterprise: The Fraud Investigator Enterprise Similarity Search Engine (SSE) from InfoGlide Software is another example of the second category of pattern search tools. SSE uses analytic techniques that dissect data values looking for and quantifying partial matches in addition to exact matches. SSE scores and orders search results based upon a user-defined data model. See http://www.infoglide.com/composite/ProductsF_2_1.htm

Although an evaluation of data sources available for scanning is beyond the scope of this paper, one will serve as an example of the information available. It is hypothesized in this report that tools could be developed to support the search and analysis of Short Message Service (SMS) traffic for confirmation of PIE indicators. Often referred to as ‘text messaging’ in the U.S., the SMS is an integrated message service that lets GSM cellular subscribers send and receive data using their handset. A single short message can be up to 160 characters of text in length – words, numbers, or punctuation symbols. SMS is a store-and-forward service; this means that messages are not sent directly to the recipient but via a network SMS Center. This enables messages to be delivered to the recipient if their phone is not switched on or if they are out of a coverage area at the time the message was sent. This process, called asynchronous messaging, operates in much the same way as email. Confirmation of message delivery is another feature and means the sender can receive a return message notifying them whether the short message has been delivered or not. SMS messages can be sent to and received from any GSM phone, provided the recipient’s network supports text messaging. Text messaging is available to all mobile users and provides both consumers and business people with a discreet way of sending and receiving information.
Over 15 billion SMS text messages were sent around the globe in January 2001. Tools taking advantage of the stored messages in an SMS Center could:

  • Perform searches of the text messages for keywords or phrases,
  • Analyze SMS traffic patterns, and
  • Search for people of interest in the Home Location Register (HLR) database that maintains information about the subscription profile of the mobile phone and also about the routing information for the subscriber.
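The first two capabilities listed above can be sketched as follows. The message store, phone numbers, and keyword list are hypothetical illustrations, not a real SMS Center interface:

```python
# Sketch of keyword scanning and traffic analysis over stored SMS
# messages; the message store and keyword list are hypothetical.
messages = [
    {"sender": "+23288811111", "text": "shipment of stones arrives friday"},
    {"sender": "+23288822222", "text": "match starts at eight tonight"},
    {"sender": "+23288811111", "text": "payment ready, stones as agreed"},
]

KEYWORDS = {"stones", "payment"}

def flag_messages(msgs, keywords):
    """Return messages containing any watch-point keyword."""
    return [m for m in msgs
            if keywords & set(m["text"].lower().split())]

def traffic_by_sender(msgs):
    """Simple traffic-pattern summary: message count per sender."""
    counts = {}
    for m in msgs:
        counts[m["sender"]] = counts.get(m["sender"], 0) + 1
    return counts

flagged = flag_messages(messages, KEYWORDS)
print(len(flagged), traffic_by_sender(messages))
```

A real implementation would also query the Home Location Register for subscriber and routing details, but that lookup depends on operator infrastructure and is not sketched here.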

A.2.2. Codification tools

As can be seen from exhibit 6, all codification tools will need to support requirements dictated by where these tools fall within the tool space. Codification tools should focus on:

  • Supporting individual investigators (or at best a small group of investigators) in making sense of the information discovered during the scanning process.
  • Moving the terms with which the information is referenced from a localized organizational context (uncoded, e.g., hawala banking) to a more global context (codified, e.g., informal value storage and transfer operations).
  • Moving that information from specific, concrete examples towards more abstract terms that could support identification of concepts and patterns across multiple situations, thus providing a larger context for the concepts being explored.

Using these criteria as a background, the codification tools reviewed fall into two major categories:

  1. Tools that help investigators label concepts and cluster different concepts into terms that are recognizable and used by the larger analytical community; and
  2. Tools that use this information to build up network maps identifying entities, relationships, missions, etc.

This section briefly describes codification functionality in general, as well as providing specific tool examples, to support both of these types of codification.

A.2.2.1. Labeling and clustering

The first step to codification is to map the context-specific terms used by individual investigators to a taxonomy of terms that are commonly accepted in a wider analytical context. This process is performed through labeling individual terms, clustering other terms and renaming them according to a community- accepted taxonomy.

In general, labeling and clustering tools should:

  • Support the capture of taxonomies that are being developed by the broader analytical community;
  • Allow the easy mapping of local terms to these broader terms;
  • Support the clustering process either by providing algorithms for calculating the similarity between concepts, or tools that enable collaborative consensus construction of clustered concepts.

Labeling and clustering functionality is typically embedded in applications supporting analytical processes, not provided separately in stand-alone tools.

Two examples of such products include:

COPLINK® – COPLINK began as a research project at the University of Arizona and has now grown into a commercially available application from Knowledge Computing Corporation (KCC). It is focused on providing tools for organizing vast quantities of structured and seemingly unrelated information in the law enforcement arena. See COPLINK’s commercial website at http://www.knowledgecc.com/index.htm and its academic website at the University of Arizona at http://ai.bpa.arizona.edu/COPLINK/.

Megaputer (PolyAnalyst 4.6) – In addition to supporting pattern queries, PolyAnalyst also provides a means for creating, importing and managing taxonomies, which could be useful in the codification step, and carries out automated categorization of text records against existing taxonomies.

A.2.2.2. Network mapping

Terrorists have a vested interest in concealing their relationships; they often emit confusing or intentionally misleading information, and they operate for much of the time in self-contained cells that are difficult to penetrate. Criminal networks are also notoriously difficult to map, and the mapping more often happens after a crime has been committed than before. What is needed are tools and approaches that support the mapping of networks to represent agents (e.g., people, groups), environments, behaviors, and the relationships between all of these.

A large number of research efforts and some commercial products have been created to automate aspects of network mapping in general and link analysis specifically. In the past, however, these tools have provided only marginal utility in understanding either criminal or terrorist behavior (as opposed to espionage networks, for which this type of tool was initially developed). Often the linkages constructed by such tools are impossible to disentangle since all links have the same importance. PIE holds the potential to focus link analysis tools by clearly delineating watch points and allowing investigators to differentiate, characterize and prioritize links within an asymmetric threat network. This section focuses on the requirements dictated by PIE and some candidate tools that might be used in the PIE context.

In general, network mapping tools should:

  • Support the representation of people, groups, and the links between them within the PIE indicator framework;
  • Sustain flexibility for mapping different network structures;
  • Differentiate, characterize and prioritize links within an asymmetric threat network;
  • Focus on organizational structures to determine what kinds of network structures they use;
  • Provide a graphical interface that supports analysis;
  • Access and associate evidence with an investigator’s data sources.

Within the PIE context, investigators can use network mapping tools to identify the flows of information and authority within different types of network forms such as chains, hub and spoke, fully matrixed, and various hybrids of these three basic forms.
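The three basic network forms can be represented as plain adjacency lists, with a degree count as the simplest way to differentiate nodes; the node names are invented, and real network mapping tools of course go far beyond this:

```python
# The three basic network forms, as adjacency lists (illustrative node names).
chain = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
hub_and_spoke = {"H": ["S1", "S2", "S3"], "S1": ["H"], "S2": ["H"], "S3": ["H"]}
fully_matrixed = {"A": ["B", "C", "D"], "B": ["A", "C", "D"],
                  "C": ["A", "B", "D"], "D": ["A", "B", "C"]}

def degree(net):
    """Number of direct links per node."""
    return {node: len(links) for node, links in net.items()}

def likely_hub(net):
    """The highest-degree node -- the natural focus for further analysis."""
    return max(net, key=lambda n: len(net[n]))

hub = likely_hub(hub_and_spoke)   # "H" stands out; in a chain, no node does
```

Even this crude measure illustrates the point made above: in a hub-and-spoke network one node dominates, while a chain or fully matrixed form shows no single point of concentration.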
Examples of network mapping tools that are available commercially include:

Analyst Notebook®: A PC-based package from i2 that supports network mapping/link analysis via network, timeline and transaction analysis. Analyst Notebook allows an investigator to capture link information between people, groups, activities, and other entities of interest in a visual format convenient for identifying relationships, dependencies and trends. It facilitates this capture by providing a variety of tools to review and integrate information from a number of data sources. It also allows the investigator to make a connection between the graphical icons representing entities and the original data sources, supporting a drill-down feature. Some of the other useful features included with Analyst Notebook are the ability to: 1) automatically order and depict sequences of events even when exact date and time data is unknown, and 2) use background visuals such as maps, floor plans or watermarks to place chart information in context or label it for security purposes. See http://www.i2.co.uk/Products/Analysts_Notebook/default.asp. Even though i2 Analyst Notebook is widely used by intelligence community, anti-terrorism and law enforcement investigators for constructing network maps, interviews with investigators indicate that it is more useful as a visual aid for briefing than for performing the analysis itself. Although some investigators indicated that they use it as an analytical tool, most seem to perform the analysis either with another tool or by hand, then enter the results into Analyst Notebook to generate a graphic for a report or briefing. Finally, few tools are available within Analyst Notebook to automatically differentiate, characterize and prioritize links within an asymmetric threat network.

Patterntracer TCA: Patterntracer Telephone Call Analysis (TCA) is an add-on tool for the Analyst Notebook intended to help identify patterns in telephone billing data. Patterntracer TCA automatically finds repeating call patterns in telephone billing data and graphically displays them using network and timeline charts. See http://www.i2.co.uk/Products/Analysts_Workstation/default.asp

Memex: Memex has already been discussed in the context of subject-based query tools. In addition to supporting such queries, however, Memex also provides a tool that supports automated link analysis on unstructured data and presents the results in graphical form.

Megaputer (PolyAnalyst 4.6): In addition to supporting pattern-based queries, PolyAnalyst was also designed to support a primitive form of link analysis, by providing a visual relationship of the results.

A.2.3. Abstraction tools

As can be seen from exhibit 6, all abstraction tools will need to support requirements dictated by where these tools fall within the tool space. Abstraction tools should focus on:

  • Functionalities that help individual investigators (or a small group of investigators) build abstract models;
  • Options to help share these models, and therefore the tools should be defined using terms that will be recognized by the larger community (i.e., codified as opposed to uncoded);
  • Highly abstract notions that encourage examination of concepts across networks, groups, and time.

The product of these tools should be hypotheses or models that can be shared with the community to support information exchange, encourage dialogue, and eventually be validated against both real-world data and by other experts. This section provides some examples of useful functionality that should be included in tools to support the abstraction process.

A.2.3.1. Structured argumentation tools

Structured argumentation is a methodology for capturing analytical reasoning processes designed to address a specific analytic task in a series of alternative constructs, or hypotheses, represented by a set of hierarchical indicators and associated evidence. Structured argumentation tools should:

  • Capture multiple, competing hypotheses of multi-dimensional indicators at both summary and/or detailed levels of granularity;
  • Develop and archive indicators and supporting evidence;
  • Monitor ongoing activities and assess the implications of new evidence;
  • Provide graphical visualizations of arguments and associated evidence;
  • Encourage a careful analysis by reminding the investigator of the full spectrum of indicators to be considered;
  • Ease argument comprehension by allowing the investigator to move along the component lines of reasoning to discover the basis and rationale of others’ arguments;
  • Invite and facilitate argument comparison by framing arguments within common structures; and
  • Support collaborative development and reuse of models among a community of investigators.

Within the PIE context, investigators can use structured argumentation tools, for example, to assess a terrorist group's ability to weaponize biological materials or to determine the parameters of a transnational criminal organization's money laundering methodology.

Examples of structured argumentation tools that are available commercially include:

Structured Evidential Argument System (SEAS) from SRI International was initially applied to the problem of early warning for project management, and more recently to the problem of early crisis warning for the U.S. intelligence and policy communities. SEAS is based on the concept of a structured argument, which is a hierarchically organized set of questions (i.e., a tree structure). These are multiple-choice questions, with the different answers corresponding to discrete points or subintervals along a continuous scale, with one end of the scale representing strong support for a particular type of opportunity or threat and the other end representing strong refutation. Leaf nodes represent primitive questions, and internal nodes represent derivative questions. The links represent support relationships among the questions. A derivative question is supported by all the derivative and primitive questions below it. SEAS arguments move concepts from their concrete, local representations into a global context that supports PIE indicator construction. See http://www.ai.sri.com/~seas/.
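The tree-of-questions structure behind SEAS can be sketched as follows. The 1-to-5 scoring scale, the averaging fusion rule, and the sample questions are all simplifications for illustration, not SRI's actual method:

```python
# SEAS-style structured argument: a tree of multiple-choice questions scored
# on a 1 (strong refutation) to 5 (strong support) scale. Leaf nodes are
# primitive questions; internal nodes are derivative questions whose score is
# fused (here, by simple averaging) from the questions supporting them.

def score(node):
    if "answer" in node:                      # primitive question
        return node["answer"]
    kids = node["children"]                   # derivative question
    return sum(score(c) for c in kids) / len(kids)

argument = {
    "question": "Is the group pursuing biological weapons?",  # derivative
    "children": [
        {"question": "Procurement of dual-use equipment?", "answer": 4},
        {"question": "Recruitment of trained microbiologists?", "answer": 3},
        {"question": "Statements of intent?", "answer": 5},
    ],
}

threat_score = score(argument)   # fused support on the 1-5 scale
```

Because every derivative score traces back through named questions and answers, another investigator can follow the component lines of reasoning, which is exactly the argument-comprehension property listed above.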

A.2.3.2. Modeling

By capturing information about a situation (e.g., the actors, possible actions, influences on those actions, etc.) in a model, users can define a set of initial conditions, match these against the model, and use the results to support analysis and prediction. This process can be performed either manually or, if the model is complex, using an automated tool or simulator.

Utilizing modeling tools, investigators can systematically examine aspects of terror-crime interaction. Process models in particular can reveal linkages between the two groups and allow investigators to map these linkages to locations on the terror-crime interaction spectrum. Process models capture the dynamics of networks in a series of functional and temporal steps. Depending on the process being modeled, these steps must be conducted either sequentially or simultaneously in order for the process to execute as designed. For example, delivery of cocaine from South America to the U.S. can be modeled as a process that moves sequentially from the growth and harvesting of coca leaves through refinement into cocaine and then transshipment via intermediate countries to U.S. distribution points. Some of these steps are sequential (e.g., certain chemicals must be acquired and laboratories established before the coca leaves can be processed in bulk) and some can be conducted simultaneously (e.g., multiple smuggling routes can be utilized at the same time).
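The sequential/simultaneous distinction can be sketched as a tiny process model: each step lists its prerequisites, and steps whose prerequisites are all complete can run at the same time. The step names are simplified from the cocaine-delivery example, and the wave-grouping rule is our own illustration:

```python
# Process model: step -> list of prerequisite steps.
steps = {
    "harvest_coca": [],
    "acquire_chemicals": [],
    "establish_labs": ["acquire_chemicals"],
    "refine_cocaine": ["harvest_coca", "establish_labs"],
    "smuggle_route_1": ["refine_cocaine"],
    "smuggle_route_2": ["refine_cocaine"],
}

def execution_waves(model):
    """Group steps into waves; steps within one wave can run simultaneously."""
    done, waves = set(), []
    while len(done) < len(model):
        wave = [s for s, pre in model.items()
                if s not in done and all(p in done for p in pre)]
        if not wave:
            raise ValueError("cyclic dependency in process model")
        waves.append(sorted(wave))
        done.update(wave)
    return waves

waves = execution_waves(steps)
```

The output makes the text's point concrete: harvesting and chemical acquisition can proceed in parallel, refinement must wait for both branches, and the two smuggling routes then run simultaneously.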

Corruption, modeled as a process, should reveal useful indicators of cooperation between organized crime and terrorism. For example, one way to generate and validate indicators of terror-crime interaction is to place cases of corrupt government officials or private sector individuals in an organizational network construct utilizing a process model and determine if they serve as a common link between terrorist and criminal networks via an intent model with attached evidence. An intent model is a type of process model constructed by reverse engineering a specific end-state, such as the ability to move goods and people into and out of a country without interference from law enforcement agencies.

This end-state is reached by bribing certain key officials in groups that supply border guards, provide legitimate import-export documents (e.g., end-user certificates), monitor immigration flows, etc.

Depending on organizational details, a bribery campaign can proceed sequentially or simultaneously through various offices and individuals. This type of model allows analysts to ‘follow the money’ through a corruption network and link payments to officials with illicit sources. The model can be set up to reveal payments to officials that can be linked to both criminal and terrorist involvement (perhaps via individuals or small groups with known links to both types of network).
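The 'follow the money' check described above reduces, at its core, to a set intersection: officials whose payments trace to both criminal and terrorist sources. All names and records below are invented for illustration:

```python
# Hypothetical payment records: (official, paying front organization).
payments = [
    ("official_A", "crime_front_1"),
    ("official_A", "terror_front_1"),
    ("official_B", "crime_front_2"),
    ("official_C", "terror_front_2"),
]
criminal_sources = {"crime_front_1", "crime_front_2"}
terrorist_sources = {"terror_front_1", "terror_front_2"}

def common_links(records, crim, terr):
    """Officials paid by BOTH criminal and terrorist sources -- a candidate
    indicator of terror-crime interaction."""
    paid_by_crim = {o for o, s in records if s in crim}
    paid_by_terr = {o for o, s in records if s in terr}
    return paid_by_crim & paid_by_terr

flagged = common_links(payments, criminal_sources, terrorist_sources)
```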

Thus investigators can use a process model as a repository for numerous disparate data items that, taken together, reveal common patterns of corruption or sources of payments that can serve as indicators of cooperation between organized crime and terrorism. Using these tools, investigators can explore multiple data dimensions by dynamically manipulating several elements of analysis:

  • Criminal and/or terrorist priorities, intent and factor attributes;
  • Characterization and importance of direct evidence;
  • Graphical representations and other multi-dimensional data visualization approaches.

There have been a large number of models built over the last several years focusing on counter- terrorism and criminal activities. Some of the most promising are models that support agent-based execution of complex adaptive environments that are used for intelligence analysis and training. Some of the most sophisticated are now being developed to support the generation of more realistic environments and interactions for the commercial gaming market.

In general, modeling tools should:

  • Capture and present reasoning from evidence to conclusion;
  • Enable comparison of information across situation, time, and groups;
  • Provide a framework for challenging assumptions and exploring alternative hypotheses;
  • Facilitate information sharing and cooperation by representing hypotheses and analytical judgment, not just facts;
  • Incorporate the first principle of analysis—problem decomposition;
  • Track ongoing and evolving situations, collect analysis, and enable users to discover information and critical data relationships;
  • Make rigorous option space analysis possible in a distributed electronic context;
  • Warn users of potential cognitive bias inherent in analysis.

Although there are too many of these tools to list in this report, good examples of some that would be useful to support PIE include:

NETEST: This model estimates the size and shape of covert networks given multiple sources with omissions and errors. NETEST makes use of Bayesian updating techniques, communications theory and social network theory [Dombroski, 2002].
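The Bayesian-updating idea behind NETEST can be illustrated in miniature: update the probability that a tie exists between two suspects as noisy informant reports arrive. The hit-rate and false-alarm figures are invented; NETEST itself estimates informant accuracy jointly with the network rather than fixing it:

```python
# Sequential Bayes update on P(tie exists) from informant reports.
# hit_rate:   P(informant reports the tie | tie exists)      -- assumed
# false_alarm: P(informant reports the tie | tie absent)     -- assumed

def update(prior, reports, hit_rate=0.8, false_alarm=0.3):
    """Each report is True ('tie seen') or False ('tie not seen')."""
    p = prior
    for saw_tie in reports:
        if saw_tie:
            num = hit_rate * p
            den = num + false_alarm * (1 - p)
        else:
            num = (1 - hit_rate) * p
            den = num + (1 - false_alarm) * (1 - p)
        p = num / den
    return p

posterior = update(0.5, [True, True, False])   # two sightings, one miss
```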

The Modeling, Virtual Environments and Simulation (MOVES) Institute at the Naval Postgraduate School in Monterey, California, is using a model of cognition formulated by Aaron T. Beck to build models capturing the characteristics of people willing to employ violence [Beck, 2002].

BIOWAR: This is a city scale multi-agent model of weaponized bioterrorist attacks for intelligence and training. At present the model is running with 100,000 agents (this number will be increased). All agents have real social networks and the model contains real city data (hospitals, schools, etc.). Agents are as realistic as possible and contain a cognitive model [Carley, 2003a].

All of the models reviewed had similar capabilities:

  • Capture the characteristics of entities – people, places, groups, etc.;
  • Capture the relationships between entities at a level of detail that supports programmatic construction of processes, situations, actions, etc.; these are usually "is a" and "a part of" representations of object-oriented taxonomies, influence relationships, time relationships, and the like;
  • Represent this information in a format that supports using the model in simulations (the next section provides information on simulation tools in common use for running these types of models);
  • Provide user interfaces for defining the models, the best being graphical interfaces that allow the user to define the entities and their relationships through intuitive visual displays. For example, if the model involves defining networks or influences between entities, graphical displays with the ability to create connections and perform drag-and-drop actions become important.

A.2.4. Diffusion tools

As can be seen from exhibit 6, all diffusion tools will need to support requirements dictated by where these tools fall within the tool space. Diffusion tools should focus on:

  • Moving information from an individual or small group of investigators to the collective community;
  • Providing abstract concepts that are easily understood in a global context with little worry that the terms will be misinterpreted;
  • Supporting the representation of abstract concepts and encouraging dialogues about those concepts.

In general diffusion tools should:

  • Provide a shared environment that investigators can access on the internet;
  • Support the ability for everyone to upload abstract concepts and their supporting evidence (e.g., documents);
  • Contain the ability for the person uploading the information to be able to attach an annotation and keywords;
  • Possess the ability to search concept repositories;
  • Be simple to set up and use.
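The upload/annotate/search requirements above amount to a very small repository interface, sketched here with an in-memory list; the record format and example entries are invented:

```python
# Minimal concept repository: upload with annotation and keywords, then search.
repository = []

def upload(concept, annotation, keywords):
    """Post a concept with an investigator's annotation and keyword tags."""
    repository.append({"concept": concept,
                       "annotation": annotation,
                       "keywords": set(keywords)})

def search(keyword):
    """Return all concepts tagged with the given keyword."""
    return [e["concept"] for e in repository if keyword in e["keywords"]]

upload("informal value transfer indicators", "draft hypothesis",
       ["hawala", "finance"])
upload("border corruption intent model", "validated against two cases",
       ["corruption"])
matches = search("hawala")
```

A real diffusion tool would add shared network access, authentication, and persistence, but the keyword tagging shown here is what ties diffusion back to the codification step (step 2 in exhibit 6).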

Within the PIE context, investigators could use diffusion tools to:

  • Employ a collaborative environment to exchange information, results of analysis, hypotheses, models, etc.;
  • Utilize collaborative environments that might be set up between law enforcement groups and counterterrorism groups to exchange information on a continual and near real-time basis.

Examples of diffusion tools run from one end of the cooperation/dissemination spectrum to the other. One of the simplest to use is:

  • AskSam: The AskSam Web Publisher is an extension of the standalone AskSam capability that has been used by the analytical community for many years. The capabilities of AskSam Web Publisher include: 1) sharing documents with others who have access to the local network, 2) access to the AskSam archive for anyone on the network, without the need for an expensive license, and 3) advanced searching capabilities, including adding keywords, which supports a group's codification process (see step 2 in exhibit 6 in our analytical process). See http://www.asksam.com/.

There are some significant disadvantages to using AskSam as a cooperation environment. For example, each document included has to be ‘published’. The assumption is that there are only one or two people primarily responsible for posting documents and these people control all documents that are made available, a poor assumption for an analytical community where all are potential publishers of concepts. The result is expensive licenses for publishers. Finally, there is no web-based service for AskSam, requiring each organization to host its own AskSam server.

There are two leading commercial tools for cooperation now available and widely used. Which tool is chosen for a task depends on the scope of the task and the number of users.

  • Groove: Virtual office software that allows small teams of people to work together securely over a network on a constrained problem. Groove capabilities include: 1) the ability for investigators to set up a shared space, invite people to join and give them permission to post documents to a document repository (i.e., file sharing), 2) security, including encryption, that protects content (e.g., upload and download of documents) and communications (e.g., email and text messaging), 3) the ability to work across firewalls without a Virtual Private Network (VPN), which improves speed and makes Groove accessible from outside an intranet, 4) the ability for investigators to work off-line, then synchronize when they come back on line, and 5) add-in tools to support cooperation such as calendars, email, text- and voice-based instant messaging, and project management.

Although Groove satisfies most of the basic requirements listed for this category, there are several drawbacks to using Groove for large projects. For example, there is no free-format search for text documents, and investigators cannot add their own keyword categories or attributes to the stored documents. This limits Groove's usefulness as an information exchange archive. In addition, Groove uses a fat-client, peer-to-peer architecture. This means that all participants are required to purchase a license, then download and install Groove on their individual machines. It also means that Groove requires high bandwidth for the information exchange portion of the peer-to-peer updates. See http://www.groove.net/default.cfm?pagename=Workspace.

  • SharePoint: Allows teams of people to work together on documents, tasks, contacts, events, and other information. SharePoint capabilities include: 1) text document loading and sharing, 2) free-format search capability, 3) cooperation tools including instant messaging, email and a group calendar, and 4) security with individual and group level access control. The TraCCC team employed SharePoint for this project to facilitate distributed research and document generation. See http://www.microsoft.com/sharepoint/.

SharePoint has many of the same features as Groove, but there are fundamental underlying differences. SharePoint's architecture is server based, with the client running in a web browser. One advantage of this approach is that each investigator is not required to download a personal version onto a machine (Groove requires 60-80MB of space on each machine). In fact, an investigator can access the SharePoint space from any machine (e.g., at an airport). The disadvantage is that the investigator does not have a local version of the SharePoint information and is unable to work offline. With Groove, an investigator can work offline, then resynchronize with the remaining members of the group when the network once again becomes available. Finally, since peer-to-peer updates are not taking place, SharePoint does not necessarily require high-speed internet access, except perhaps when an investigator wants to upload large documents.

Another significant difference between SharePoint and Groove is the search function. In Groove, the search capability is limited to information that is typed into Groove directly, not documents that have been attached to Groove in an archive. SharePoint supports not only document searches, but also allows the community of investigators to set up their own keyword categories to help with the codification of the shared documents (again see step 2 from exhibit 6). It should be noted, however, that SharePoint only supports searches of Microsoft document formats (e.g., Word, PowerPoint, etc.) and not 'foreign' formats such as PDF. This is not surprising given that SharePoint is a Microsoft tool.

SharePoint and Groove are commercially available cooperation solutions. There are also a wide variety of customized cooperation environments now appearing on the market. For example:

  • WAVE Enterprise Information Integration System: Modus Operandi's Wide Area Virtual Environment (WAVE) provides tools to support real-time enterprise information integration, cooperation and performance management. WAVE capabilities include: 1) collaborative workspaces for team-based information sharing, 2) security for controlled sharing of information, 3) an extensible enterprise knowledge model that organizes and manages all enterprise knowledge assets, 4) dynamic integration of legacy data sources and commercial off-the-shelf (COTS) tools, 5) document version control, 6) cooperation tools, including discussions, issues, action items, search, and reports, and 7) performance metrics. WAVE is not a COTS solution, however: an organization must work with Modus Operandi services to set up a custom environment. The main disadvantages of this approach, as opposed to Groove or SharePoint, are cost and the sharing of information across groups. See http://www.modusoperandi.com/wave.htm.

Finally, many of the tools previously discussed have add-ons available for extending their functionality to a group. For example:

  • iBase4: i2’s Analyst Notebook can be integrated with iBase4, an application that allows investigators to create multi-user databases for developing, updating, and sharing the source information being used to create network maps. It even includes security to restrict access or functionality by user, user groups and data fields. It is not clear from the literature, but it appears that this functionality is restricted to the source data and not the sharing of network maps generated by the investigators. See http://www.i2.co.uk/Products/iBase/default.asp

The main disadvantage of iBase4 is its proprietary format. This limitation might be somewhat mitigated by coupling iBase4 with i2’s iBridge product which creates a live connection between legacy databases, but there is no evidence in the literature that i2 has made this integration.

A.2.5. Validation tools

As can be seen from exhibit 6, all validation tools will need to support requirements dictated by where these tools fall within the tool space. Validation tools should focus on:

  • Providing a community context for validating the concepts put forward by the individual participants in the community;
  • Continuing to work within a codified realm in order to facilitate communication between different groups articulating different perspectives;
  • Matching abstract concepts against real world data (or expert opinion) to determine the validity of the concepts being put forward.

Using these criteria as background, one of the most useful toolsets available for validation are simulation tools. This section briefly describes the functionality in general, as well as providing specific tool examples, to support simulations that ‘kick the tires’ of the abstract concepts.

Following are some key capabilities that any simulation tool must possess:

  • Ability to ingest the model information that has been constructed in the previous steps in the analytical process;
  • Access to a data source for information that might be required by the model during execution;
  • Ability for users to define the initial conditions against which the model will be run;
  • Ability to "step through" the model execution, examining variables and resetting variable values in mid-execution (the more useful simulators allow this);
  • Ability to print out step-by-step interim execution results and final results;
  • Ability to change the initial conditions and compare the results against prior runs.
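The capabilities above can be sketched as a minimal step-through simulator: it takes initial conditions, advances a model one step at a time, logs interim results, and lets two runs with different initial conditions be compared. The recruitment model inside it is purely illustrative:

```python
class Simulator:
    """Minimal step-through simulator for a list of named update functions."""

    def __init__(self, model, initial):
        self.model = model            # list of (name, update_fn) steps
        self.state = dict(initial)    # user-defined initial conditions
        self.log = []                 # interim results, one snapshot per step

    def step(self):
        for name, fn in self.model:
            fn(self.state)            # variables can be inspected/reset here
        self.log.append(dict(self.state))

    def run(self, steps):
        for _ in range(steps):
            self.step()
        return dict(self.state)       # final results

def recruit(state):
    """Illustrative model step: membership grows by a fixed rate per step."""
    state["members"] += state["rate"]

model = [("recruit", recruit)]
run_a = Simulator(model, {"members": 10, "rate": 2}).run(3)
run_b = Simulator(model, {"members": 10, "rate": 5}).run(3)   # compare runs
```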

Although there are many simulation tools available, following are brief descriptions of some of the most promising:

  • Online iLink: An optional application for i2's Analyst Notebook that supports dynamic update of Analyst Notebook information from online data sources. Once a connection is made with an on-line source (e.g., LexisNexis™ or D&B®), Analyst Notebook uses this connection to automatically check for any updated information and propagates those updates throughout to support validation of the network map information. See http://www.i2inc.com.

One apparent drawback with this plug-in is that Online iLink appears to require that the online data provider deploy i2's visualization technology.

  • NETEST: A research project from Carnegie Mellon University that is developing tools combining multi-agent technology with hierarchical Bayesian inference models and biased net models to produce accurate posterior representations of terrorist networks. Bayesian inference models produce representations of a network's structure and informant accuracy by combining prior network and accuracy data with informant perceptions of a network. Biased net theory examines and captures the biases that may exist in a specific network or set of networks. Using NETEST, an investigator can estimate a network's size, determine its membership and structure, determine areas of the network where data is missing, perform cost/benefit analysis of additional information, assess group-level capabilities embedded in the network, and pose "what if" scenarios to destabilize a network and predict its evolution over time [Dombroski, 2002].

  • Recursive Porous Agent Simulation Toolkit (Repast): A good example of the free, open-source toolkits available for creating agent-based simulations. Begun by the University of Chicago's social sciences research community and later maintained by groups such as Argonne National Laboratory, Repast is now managed by the non-profit volunteer Repast Organization for Architecture and Development (ROAD). Some of Repast's features include: 1) a variety of agent templates and examples (the toolkit nevertheless gives users complete flexibility as to how they specify the properties and behaviors of agents), 2) a fully concurrent discrete event scheduler (this scheduler supports both sequential and parallel discrete event operations), 3) built-in simulation results logging and graphing tools, 4) an automated Monte Carlo simulation framework, 5) the ability for users to dynamically access and modify agent properties, agent behavioral equations, and model properties at run time, 6) libraries for genetic algorithms, neural networks, random number generation, and specialized mathematics, and 7) built-in systems dynamics modeling.

More to the point for this investigation, Repast has social network modeling support tools. The Repast website claims that “Repast is at the moment the most suitable simulation framework for the applied modeling of social interventions based on theories and data,” [Tobias, 2003]. See http://repast.sourceforge.net/.
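The core of an agent-based simulation of the kind Repast supports is a scheduler that ticks every agent once per step, with each agent acting on local rules. The threshold-adoption rule and the four-agent network below are entirely illustrative; Repast itself is a full toolkit, not a few lines of Python:

```python
# Toy agent-based simulation: agents on a social network adopt a behavior
# if any neighbor has already adopted it (a simple threshold rule).
network = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
adopted = {"A"}                      # initial condition: one active agent

def tick(net, active):
    """One scheduler step: every agent checks its neighbors once."""
    return active | {a for a in net if any(n in active for n in net[a])}

for _ in range(3):                   # run the schedule for three steps
    adopted = tick(network, adopted)
# On this chain, adoption spreads one hop per tick: A -> B -> C -> D.
```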

A.2.6. Impacting tools

As can be seen from exhibit 6, all impacting tools will need to support requirements dictated by where these tools fall within the tool space. Impacting tools should focus on:

  • Helping law enforcement and intelligence practitioners understand the implications of their validated models. For example, what portions of the terror-crime interaction spectrum are relevant in various parts of the world, and what is the likely evolutionary path of this phenomenon in each specific geographic area?

  • Support for translating abstracted knowledge into more concrete local execution strategies. The information flows feeding the scanning process, for example, should be updated based on the results of mapping local events and individuals to the terror-crime interaction spectrum. Watch points and their associated indicators should be reviewed, updated and modified. Probes can be constructed to clarify remaining uncertainties in specific situations or locations.

The following general requirements have been identified for impacting tools:

  • Probe management software to help law enforcement investigators and intelligence community analysts plan probes against known and suspected transnational threat entities, monitor their execution, map their impact, and analyze the resultant changes to network structure and operations.
  • Situational assessment software that supports transnational threat monitoring and projection.
  • Data fusion and visualization algorithms that portray investigators' current understanding of the nature and extent of terror-crime interaction, and allow investigators to focus scarce collection and analytical resources on the most threatening regions and networks.

Impacting tools are only just beginning to exit the laboratory, and none of them can be considered ready for operational deployment. This type of functionality, however, is being actively pursued within the U.S. governmental and academic research communities. An example of an impacting tool currently under development is described below:

DyNet – A multi-agent network system designed specifically for assessing destabilization strategies on dynamic networks. A knowledge network (e.g., a hypothesized network resulting from Steps 1 through 5 of Boisot’s I-Space-driven analytical process) is given to DyNet as input. In this case, a knowledge network is defined as an individual’s knowledge about who they know, what resources they have, and what task(s) they are performing. The goal of an investigator using DyNet is to build stable, high-performance, adaptive networks and to conduct what-if analysis to identify successful strategies for destabilizing those networks. Investigators can run sensitivity tests examining how differences in the structure of the covert network would impact the overall ability of the network to respond to probes and attacks on constituent nodes [Carley, 2003b]. See the DyNet website hosted by Carnegie Mellon University at http://www.casos.cs.cmu.edu/projects/DyNet/.
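
DyNet itself is not distributed with this text, but the flavor of the what-if destabilization analysis it supports can be sketched in a few lines of standard-library Python. The network below is entirely hypothetical, and the fragmentation metric (size of the largest connected component remaining after a node is removed) is one simple stand-in for the richer performance measures a tool like DyNet models:

```python
from collections import deque

# Hypothetical covert network as an edge list ("who knows whom")
EDGES = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D"),
         ("D", "E"), ("E", "F"), ("D", "F"), ("F", "G")]

def largest_component(nodes, edges):
    """Size of the largest connected component, via breadth-first search."""
    adj = {n: set() for n in nodes}
    for a, b in edges:
        if a in adj and b in adj:
            adj[a].add(b)
            adj[b].add(a)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            n = queue.popleft()
            size += 1
            for m in adj[n] - seen:
                seen.add(m)
                queue.append(m)
        best = max(best, size)
    return best

nodes = {n for edge in EDGES for n in edge}

# What-if probe: removing which member fragments the network most?
impact = {n: largest_component(nodes - {n}, EDGES) for n in nodes}
best_target = min(impact, key=impact.get)
```

In this toy network, removing node "D" leaves no connected group larger than three members, so it is the most disruptive single removal; a sensitivity test of the kind described above would repeat this over many hypothesized network structures.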

A.3. Overall tool requirements

This appendix provides a high-level overview of PIE tool requirements:

  • Usability: It must be easy to put information into the system and get information out of it. The key to the successful use of many of these tools is the quality of the information that is put into them. User interfaces have to be easy to use, context based, intuitive, and customizable. Otherwise, investigators soon determine that the “care and feeding” of the tool does not justify the end product.
  • Reasonable response time: The response time of the tool needs to match the context. If the tool is being used in an operational setting, then the ability to retrieve results can be time-critical, perhaps a matter of minutes. In other cases, results may not be time-critical and days can be taken to generate results.
  • Training: Some tools, especially those that have not been released as commercial products, may not have substantial training materials and classes available. When making a decision regarding tool selection, the availability and accessibility of training may be critical.

  • Ability to integrate with enterprise resources: There are many cases where the utility of the tool will depend on its ability to access and integrate information from the overall enterprise in which the investigator is working. Special-purpose tools that require re-keying of information or labor-intensive conversions of formats should be carefully evaluated to determine the manpower required to support such functions.

  • Support for integration with other tools: Tools that have standard interfaces will act as force multipliers in the overall analytical toolbox. At a minimum, tools should have some sort of a developer’s kit that allows the creation of an API. In the best case, a tool would support some generally accepted integration standard such as web services.
  • Security: Different situations will dictate different security requirements, but in almost all cases some form of security is required. Examples of security include different access levels for different user populations. The ability to be able to track and audit transactions, linking them back to their sources, will also be necessary in many cases.
  • Customizable: Augmenting usability, most tools will need to support some level of customizability (e.g., customizable reporting templates).
  • Labeling of information: Information that is being gathered and stored will need to be labeled (e.g., for level of sensitivity or credibility).
  • Familiar to the current user base: One characteristic in favor of any tool selected is how well the current user base has accepted it. There could be a great deal of benefit to upgrading existing tools that are already familiar to the users.
  • Heavy emphasis on visualization: To the greatest extent possible, tools should provide the investigator with the ability to display different aspects of the results in a visual manner.
  • Support for cooperation: In many cases, the strength of the analysis is dependent on leveraging cross-disciplinary expertise. Most tools will need to support some sort of cooperation.

A.4. Bibliography and Further Reading

Autonomy Technology White Paper, Ref: [WP TECH] 07.02. This and other information documents about Autonomy may be downloaded after registration from http://www.autonomy.com/content/downloads/

Beck, Aaron T., “Prisoners of Hate,” Behavior Research and Therapy, 40, 2002: 209-216. A copy of this article may be found at http://mail.med.upenn.edu/~abeck/prisoners.pdf. Also see Dr. Beck’s website at http://mail.med.upenn.edu/~abeck/ and the MOVES Institute at http://www.movesinstitute.org/.

Boisot, Max and Ron Sanchez, “The Codification-Diffusion-Abstraction Curve in the I-Space,” Economic Organization and Nexus of Rules: Emergence and the Theory of the Firm, a working paper, Universitat Oberta de Catalunya, Barcelona, Spain, May 2003.

Carley, K. M., D. Fridsma, E. Casman, N. Altman, J. Chang, B. Kaminsky, D. Nave, & Yahja, “BioWar: Scalable Multi-Agent Social and Epidemiological Simulation of Bioterrorism Events” in Proceedings from the NAACSOS Conference, 2003. This document may be found at http://www.casos.ece.cmu.edu/casos_working_paper/carley_2003_biowar.pdf

Carley, Kathleen M., et al., “Destabilizing Dynamic Covert Networks” in Proceedings of the 8th International Command and Control Research and Technology Symposium, 2003. Conference held at the National Defense War College, Washington, DC. This document may be found at http://www.casos.ece.cmu.edu/resources_others/a2c2_carley_2003_destabilizing.pdf

Collier, N., Howe, T., and North, M., “Onward and Upward: The Transition to Repast 2.0,” in Proceedings of the First Annual North American Association for Computational Social and Organizational Science Conference, Electronic Proceedings, Pittsburgh, PA, June 2003. Also, read about Repast 3.0 at the Repast website: http://repast.sourceforge.net/index.html

DeRosa, Mary, “Data Mining and Data Analysis for Counterterrorism,” CSIS Report, March 2004. This document may be purchased at http://csis.zoovy.com/product/0892064439

Dombroski, M. and K. Carley, “NETEST: Estimating a Terrorist Network’s Structure,” Journal of Computational and Mathematical Organization Theory, 8(3), October 2002: 235-241. This document may be found at http://www.casos.ece.cmu.edu/conference2003/student_paper/Dombroski.pdf

Farah, Douglas, Blood from Stones: The Secret Financial Network of Terror, New York: Broadway Books, 2004.

Hall, P. and G. Dowling, “Approximate string matching,” Computing Surveys, 12(4), 1980: 381-402. For more information on phonetic string matching see http://www.cs.rmit.edu.au/~jz/fulltext/sigir96.pdf. A good summary of the inherent limitations of Soundex may be found at http://www.las-inc.com/soundex/?source=gsx.

Lowrance, J.D., Harrison, I.W., and Rodriguez, A.C., “Structured Argumentation for Analysis,” Proceedings of the 12th International Conference on Systems Research, Informatics, and Cybernetics, August 2000.

Quint, Barbara, “IBM’s WebFountain Launched – The Next Big Thing?” September 22, 2003, from the Information Today, Inc. website at http://www.infotoday.com/newsbreaks/nb030922-1.shtml. Also see IBM’s WebFountain website at http://www.almaden.ibm.com/webfountain/ and the WebFountain Application Development Guide at http://www.almaden.ibm.com/webfountain/resources/sg247029.pdf.

Shannon, Claude, “A Mathematical Theory of Communication,” Bell System Technical Journal, (27), July and October 1948: 379-423 and 623-656.

Tobias, R. and C. Hofmann, “Evaluation of Free Java-libraries for Social-scientific Agent Based Simulation,” Journal of Artificial Societies and Social Simulation, University of Surrey, 7(1), January 2003. This article may be found at http://jasss.soc.surrey.ac.uk/7/1/6.html.

Notes on Activity Based Intelligence Principles and Applications

Activity Based Intelligence Principles and Applications

ABI represents a fundamentally different way of doing intelligence analysis, one that is important in its own terms but that also offers the promise of creatively disrupting what is by now a pretty tired paradigm for thinking about the intelligence process.

ABI enables discovery as a core principle. Discovery—how to do it and what it means—is an exciting challenge, one that the intelligence community is only beginning to confront, and so this book is especially timely.
The prevailing intelligence paradigm is still very linear when the world is not: Set requirements, collect against those requirements, then analyze. Or as one wag put it: “Record, write, print, repeat.”

ABI disrupts that linear collection, exploitation, dissemination cycle of intelligence. It is focused on combining data—any data—where it is found. It does not prize data from secret sources but combines unstructured text, geospatial data, and sensor-collected intelligence. It marked an important passage in intelligence fusion and was the first manual evolution of “big data” analysis by real practitioners. ABI’s initial focus on counterterrorism impelled it to develop patterns of life on individuals by correlating their activities, or events and transactions in time and space.

ABI is based on four fundamental pillars that are distinctly different from other intelligence methods. The first is georeference to discover. Sometimes the only thing data has in common is time and location, but that can be enough to enable discovery of important correlations, not just reporting what happened. The second is sequence neutrality: We may find a critical puzzle piece before we know there is a puzzle. Think how often that occurs in daily life, when you don’t really realize you were puzzled by something until you see the answer.

The third principle is data neutrality. Data is data, and there is no bias toward classified secrets. ABI does not prize exquisite data from intelligence sources over other sources the way that the traditional paradigm does. The fourth principle comes full circle to the first: integrate before exploitation. The data is integrated in time and location so it can be discovered, but that integration happens before any analyst turns to the data.

ABI necessarily has pushed advances in dealing with “big data,” enabling technologies that automate manual workflows, thus letting analysts do what they do best. In particular, to be discoverable, the metadata, like time and location, have to be normalized. That requires techniques for filtering metadata and drawing correlations. It also requires new techniques for visualization, especially geospatial visualization, as well as tools for geotemporal pattern analysis. Automated activity extraction increases the volume of georeferenced data available for analysis.
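
As a concrete illustration of that normalization step, the sketch below converts a source-local timestamp to UTC and a degrees/minutes/seconds coordinate to signed decimal degrees, so that observations from different sources become directly comparable. The formats and offsets are hypothetical, and only the Python standard library is used:

```python
from datetime import datetime, timezone, timedelta

def normalize_time(raw, fmt, utc_offset_hours=0):
    """Parse a source-specific timestamp string into timezone-aware UTC."""
    local = datetime.strptime(raw, fmt).replace(
        tzinfo=timezone(timedelta(hours=utc_offset_hours)))
    return local.astimezone(timezone.utc)

def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert degrees/minutes/seconds to signed decimal degrees."""
    value = degrees + minutes / 60 + seconds / 3600
    return -value if hemisphere in ("S", "W") else value

# A local time recorded at UTC+3 normalizes to 10:05 UTC
when = normalize_time("01/06/2014 13:05", "%d/%m/%Y %H:%M", utc_offset_hours=3)

# 33° 18' 43.2" N normalizes to 33.312 decimal degrees
where = dms_to_decimal(33, 18, 43.2, "N")
```

Once every source's metadata passes through a step like this, time and location become the common keys on which filtering and correlation can operate.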

ABI is also enabled by new algorithms for correlation and fusion, including rapidly evolving advanced modeling and machine learning techniques.

ABI came of age in the fight against terror, but it is an intelligence method that can be extended to other problems—especially those that require identifying the bad guys among the good in areas like counternarcotics or maritime domain awareness. Beyond that, ABI’s emphasis on correlation instead of causation can disrupt all-too-comfortable assumptions. Sure, analysts will find lots of spurious correlations, but they will also find intriguing connections in interesting places, not full-blown warnings but, rather, hints about where to look and new connections to explore.

This textbook describes a revolutionary intelligence analysis methodology using approved, open-source, or commercial examples to introduce the student to the basic principles and applications of activity-based intelligence (ABI).

Preface

Writing about a new field, under the best of circumstances, is a difficult endeavor. This is doubly true when writing about the field of intelligence, which by its nature must operate in the shadows, hidden from the public view. Developments in intelligence, particularly in analytic tradecraft, are veiled in secrecy in order to protect sources and methods.

Activity-Based Intelligence: Principles and Applications is aimed at students of intelligence studies, entry-level analysts, technologists, and senior-level policy makers and executives who need a basic primer on this emergent series of methods. This text is authoritative in the sense that it documents, for the first time, an entire series of difficult concepts and processes used by analysts during the wars in Iraq and Afghanistan to great effect. It also summarizes basic enabling techniques, technologies, and methodologies that have become associated with ABI.

1

Introduction and Motivation

By mid-2014, the community was once again at a crossroads: the dawn of the fourth age of intelligence. This era is dominated by diverse threats, increasing change, and increasing rates of change. This change also includes an explosion of information technology and a convergence of telecommunications, location-aware services, and the Internet with the rise of global mobile computing. Tradecraft for intelligence integration and multi-INT dominates the intelligence profession. New analytic methods for “big data” analysis have been implemented to address the tremendous increase in the volume, velocity, and variety of data sources that must be rapidly and confidently integrated to understand increasingly dynamic and complex situations. Decision makers in an era of streaming real-time information are placing increasing demands on intelligence professionals to anticipate what may happen…against an increasing range of threats amidst an era of declining resources. This textbook is an introduction to the methods and techniques for this new age of intelligence. It leverages what we learned in the previous ages and introduces integrative approaches to information exploitation to improve decision advantage against emergent and evolving threats.

Dynamic Change and Diverse Threats

Transnational criminal organizations, terrorist groups, cyberactors, counterfeiters, and drug lords increasingly blend together; multipolar statecraft is being rapidly replaced by groupcraft.
The impact of this dynamism is dramatic. In the Cold War, intelligence focused on a single nation-state threat coming from a known location. During the Global War on Terror, the community aligned against a general class of threat coming from several known locations, albeit with ambiguous tactics and methods. The fourth age is characterized by increasingly asymmetric, unconventional, unpredictable, proliferating threats menacing and penetrating from multiple vectors, even from within. Gaining a strategic advantage against these diverse threats requires a new approach to collecting and analyzing information.

1.1.2 The Convergence of Technology and the Dawn of Big Data

Information processing and intelligence capabilities are becoming democratized.

In addition to rapidly proliferating intelligence collection capabilities, the fourth age of intelligence coincided with the introduction of the term “big data.” Big data refers to high-volume, high-velocity data that is difficult to process, store, and analyze with traditional information architectures. It is thought that the term was first used in an August 1999 article in Communications of the ACM [16]. The McKinsey Global Institute calls big data “the next frontier for innovation, competition, and productivity” [17]. New technologies like crowdsourcing, data fusion, machine learning, and natural language processing are being used in commercial, civil, and military applications to improve the value of existing data sets and to derive a competitive advantage. A major shift is under way from technologies that simply store and archive data to those that process it—including real-time processing of multiple “streams” of information.

1.1.3 Multi-INT Tradecraft: Visualization, Statistics, and Spatiotemporal Analysis

Today, the most powerful computational techniques are being developed for business intelligence, high-speed stock trading, and commercial retailing. These are analytic techniques—which intelligence professionals call their “tradecraft”—developed in tandem with the “big data” information explosion. They differ from legacy analysis techniques because they are visual, statistical, and spatial.

The emerging field of visual analytics is “the science of analytical reasoning facilitated by visual interactive interfaces” [20, p. 4]. It recognizes that humans are predisposed to recognize trends and patterns when they are presented using consistent and creative cognitive and perceptual techniques. Technological advances like high-resolution digital displays, powerful graphics cards and graphics processing units, and interactive visualization and human-machine interfaces have changed the way scientists and engineers analyze data. These methods include three-dimensional visualizations, clustering algorithms, data filtering techniques, and the use of color, shape, and motion to rapidly convey large volumes of information.
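
Clustering is among the simplest of these techniques to illustrate. The sketch below uses invented event coordinates and a greedy single-link grouping rule (not any particular production algorithm) to gather nearby georeferenced events, the kind of preprocessing that lets a display assign each spatial grouping its own color or shape:

```python
def cluster(points, eps=0.5):
    """Greedy single-link clustering: a point joins the first cluster that
    already contains a member within eps of it in both coordinates."""
    clusters = []
    for p in points:
        for c in clusters:
            if any(abs(p[0] - q[0]) <= eps and abs(p[1] - q[1]) <= eps for q in c):
                c.append(p)
                break
        else:
            clusters.append([p])
    return clusters

# Hypothetical (x, y) event locations forming two spatial groupings
events = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.3, 4.9), (0.1, 0.3)]
groups = cluster(events)
```

On this sample, the five events collapse into two clusters. A greedy single-link pass is order-sensitive and crude compared to production clustering algorithms, but it conveys the idea of reducing many raw points to a few visual objects.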

Next came the fusion of visualization techniques with statistical methods.

Analysts introduced methods for statistical storytelling, where mathematical functions are applied through a series of steps to describe interesting trends, eliminate infeasible alternatives, and discover anomalies so that decision makers can visualize and understand a complex decision space quickly and easily.

Geographic information systems (GISs) and the science of geoinformatics had been used since the late 1960s to display spatial information as maps and charts.

Increasingly, software tools like JMP, Tableau, GeoIQ, MapLarge, and ESRI ArcGIS have included advanced spatial and temporal analysis tools that advance the science of data analysis. The ability to analyze trends and patterns over space and time is called spatiotemporal analysis.

1.1.4 The Need for a New Methodology
The fourth age of intelligence is characterized by the changing nature of threats, the convergence in information technology, and the availability of multi-INT analytic tools—three drivers that create the conditions necessary for a revolution in intelligence tradecraft. This class of methods must address nonstate actors, leverage technological advances, and shift the focus of intelligence from reporting the past to anticipating the future. We refer to this revolution as ABI, a method that former RAND analyst and National Intelligence Council chairman Greg Treverton has called the most important intelligence analytic method to come out of the wars in Iraq and Afghanistan.

1.2 Introducing ABI
Intelligence analysts deployed to Iraq and Afghanistan to hunt down terrorists found that traditional intelligence methods were ill-suited for the mission. The traditional intelligence cycle begins with the target in mind (Figure 1.3), but terrorists were usually indistinguishable from other people around them. The analysts—digital natives savvy in visual analytic tools—began by integrating already collected data in a geographic area. Often, the only common metadata between two data sets was time and location so they applied spatiotemporal analytic methods to develop trends and patterns from large, diverse data sets. These data sets described activities: events and transactions conducted by entities (people or vehicles) in an area.

Sometimes, the analysts would discover a series of unusual events that correlated across data sets. When integrated, these events represented the pattern of life of an entity. The entity sometimes became a target. The subsequent collection and analysis on this entity, the resolution of identity, and the anticipation of future activities based on the pattern of life produced a new range of intelligence products that improved the effectiveness of the counterterrorism mission. This is how ABI got its name.

ABI is a new methodology—a series of analytic methods and enabling technologies—based on the following four empirically derived principles, which are distinct from traditional intelligence methods.
• Georeference to discover: Focusing on spatially and temporally correlating multi-INT data to discover key events, trends, and patterns.
• Data neutrality: Prizing all data, regardless of the source, for analysis.
• Sequence neutrality: Realizing that sometimes the answer arrives before you ask the question.
• Integration before exploitation: Correlating data as early as possible, rather than relying on vetted, finished intelligence products, because seemingly insignificant events in a single INT may be important when integrated across multi-INT.

While various intelligence agencies, working groups, and government bodies have offered numerous definitions for ABI, we define it as “a set of spatiotemporal analytic methods to discover correlations, resolve unknowns, understand networks, develop knowledge, and drive collection using diverse multi-INT data sets.”

ABI’s most significant contribution to the fourth age of intelligence is a shift in focus of the intelligence process from reporting the known to discovery of the unknown.

1.2.1 The Primacy of Location
When you think about it, everything and everybody has to be somewhere.
—The Honorable James R. Clapper, 2004

The primacy of location is the central principle behind the new intelligence methodology ABI. Since everything happens somewhere, all activities, events, entities, and relationships have an inherent spatial and temporal component whether it is known a priori or not.
Hard problems cannot usually be solved with a single data set. The ability to reference multiple data sets across multiple intelligence domains— multi-INT—is a key enabler to resolve entities that lack a signature in any single domain of collection. In some cases, the only common metadata between two data sets is location and time— allowing for location-based correlation of the observations in each data set where the strengths of one compensate for the weaknesses in another.

…the tipping point for the fourth age, and the key breakthrough for the revolution we call ABI, was the ability and impetus to integrate the concept of location into visual and statistical analysis of large, complex data sets.

1.2.2 From Target-Based to Activity-Based
The paradigm of intelligence and intelligence analysis has changed, driven primarily by the shift in targets from the primacy of nation-states to transnational groups or irregular forces.
—Greg Treverton, RAND

A target can be a physical location like an airfield or a missile silo. Alternatively, it can be an electronic target, like a specific radio-frequency emission or a telephone number. Targets can be individuals, such as spies who you want to recruit. Targets might be objects like specific ships, trucks, or satellites. In the cyberdomain, a target might be an e-mail address, an Internet protocol (IP) address, or even a specific device. The target is the subject of the intelligence question. The linear cycle of planning and direction, collection, processing and exploitation, analysis and production, and dissemination begins and ends with the target in mind.
The term “activity-based” is the antithesis of the “target-based” intelligence model. This book describes methods and techniques for intelligence analysis when the target or the target’s characteristics are not known a priori. In ABI, the target is the output of a deductive analytic process that begins with unresolved, ambiguous entities and a data landscape dominated by events and transactions.

Targets in traditional intelligence are well-defined, predictable adversaries with a known doctrine. If the enemy has a known doctrine, all you have to do is steal the manual and decode it, and you know what they will do.

In the ABI approach, instead of scheduled collection, incidental collection must be used to gather many (possibly irrelevant) events, transactions, and observations across multiple domains. In contrast to the predictable, linear, inductive approach, analysts apply deductive reasoning to eliminate what the answer is not and narrow the problem space to feasible alternatives. When the target blends in with the surroundings, a durable, “sense-able” signature may not be discernable. Proxies for the entity, such as a communications device, a vehicle, a credit card, or a pattern of actions, are used to infer patterns of life from observations of activities and transactions.
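
The deductive narrowing described above can be sketched with hypothetical entities and observed proxies. Rather than confirming a preselected target, each new observation eliminates what the answer is not, shrinking the feasible set:

```python
# Hypothetical candidate entities with attributes inferred from proxies
candidates = [
    {"id": "E1", "vehicle": "truck", "area": "north", "active_at_night": True},
    {"id": "E2", "vehicle": "sedan", "area": "north", "active_at_night": True},
    {"id": "E3", "vehicle": "truck", "area": "south", "active_at_night": False},
]

def eliminate(entities, **observed):
    """Deductively rule out entities inconsistent with observed attributes."""
    return [e for e in entities
            if all(e.get(k) == v for k, v in observed.items())]

# Each incidental observation narrows the feasible set further
feasible = eliminate(candidates, vehicle="truck")     # rules out E2
feasible = eliminate(feasible, active_at_night=True)  # rules out E3
```

The sequence matters less than the logic: whichever observation arrives first, the surviving set after all constraints is the same, which is the sequence-neutrality principle at work on a toy scale.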

Informal collaboration and information sharing evolved as geospatial analysis tools became more democratized and distributed. Analysts share their observations—layered as dots on a map—and tell spatial stories about entities, their activities, their transactions, and their networks.

While traditional intelligence has long implemented techniques for researching, monitoring, and searching, the primary focus of ABI methods is on discovery of the unknown, which represents the hardest class of intelligence problems.

1.2.3 Shifting the Focus to Discovery
All truths are easy to understand once they are discovered; the point is to discover them.
—Galileo Galilei

The lower left corner of Figure 1.4 represents known-knowns: monitoring. Here the targets, locations, behaviors, and signatures are all known, and the intelligence task is to monitor the location for change and alert when there is activity.

The next quadrant of interest is in the upper left of Figure 1.4. Here, the behaviors and signatures are unknown, but the targets or locations are known.

The research task builds deep contextual analytic knowledge to enhance understanding of known locations and targets, which can then be used to identify more targets for monitoring and enhance the ability to provide warning.

The lower right quadrant of Figure 1.4, search, requires looking for a known signature/behavior in an unknown location.
Searching previously undiscovered areas for the new equipment is search. For obvious reasons, this laborious task is universally loathed by analysts.

The “new” function and the focus of ABI methods is the upper right. You don’t know what you’re looking for, and you don’t know where to find it. This has always been the hardest problem for intelligence analysts, and we characterize it as “new” only because the methods, tools, policies, and tradecraft have only recently evolved to the point where discovery is possible outside of simple serendipity.

Discovery is a data-driven process. Analysts, ideally without bias, explore data sets to detect anomalies, characterize patterns, investigate interesting threads, evaluate trends, eliminate the impossible, and formulate hypotheses.
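
One simple illustration of the "detect anomalies" step in that list: flag observations that deviate sharply from a location's typical activity level. The daily counts below are invented, and a z-score test is only one of many possible detectors:

```python
from statistics import mean, stdev

def anomalies(counts, threshold=2.0):
    """Indices of counts more than `threshold` sample standard deviations
    from the mean; a crude first pass at data-driven anomaly detection."""
    mu, sigma = mean(counts), stdev(counts)
    return [i for i, c in enumerate(counts) if abs(c - mu) > threshold * sigma]

# Hypothetical daily event counts observed at one location
daily_events = [4, 5, 3, 6, 4, 5, 21, 5, 4]
flagged = anomalies(daily_events)
```

The spike on day 6 is the only observation flagged, and in the discovery workflow it would become a thread to investigate, not a conclusion in itself.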

Typically, analysts who excel at discovery are detectives. They exhibit unusual curiosity, creativity, and critical thinking skills. Generally, they tend to be rule breakers. They get bored easily when tasked in the other three quadrants. New tools are easy for them to use. Spatial thinking, statistical analysis, hypothesis generation, and simulation make sense. This new generation of analysts—largely composed of millennials hired after 9/11—catalyzed the evolution of ABI methods because they were placed in an environment that required a different approach. Frankly, their lack of experience with the traditional intelligence process created an environment where something new and different was possible.

1.2.4 Discovery Versus Search

Are we saying that hunting terrorists is the same as house shopping? Of course not, but the processes have their similarities. Location (and spatial analysis) is central to the search, discovery, research, and monitoring process. Browsing metadata helps triage information and focus the results. The problem constantly changes as new entities appear or disappear. Resources are limited and it’s impossible to action every lead…

1.2.6 Summary: The Key Attributes of ABI
ABI is a new tradecraft, focused on discovering the unknown, that is well-suited for advanced multi-INT analysis of nontraditional threats in a “big data” environment.

1.3 Organization of this Textbook
This textbook is directed at entry-level intelligence professionals, practicing engineers, and research scientists familiar with general principles of intelligence and analysis. It takes a unique perspective on the emerging methods and techniques of ABI with a specific focus on spatiotemporal analytics and the associated technology enablers.

The seminal concept of “pattern of life” is introduced in Chapter 8. Chapter 8 exposes the nuances of “pattern of life” versus pattern analysis and describes how both concepts can be used to understand complex data and draw conclusions using the activities and transactions of entities. The final key concept, incidental collection, is the subject of Chapter 9. Incidental collection is a core mindset shift from target-based point collection to wide area activity-based surveillance.

A unique feature of this textbook is its focus on applications from the public domain.

1.4 Disclaimer About Sources and Methods
Protecting sources and methods is the paramount and sacred duty of intelligence professionals. This central tenet will be carried throughout this book. The development of ABI was catalyzed by advances in commercial data management and analytics technology applied to unique sources of data. Practitioners deployed to the field have the benefit of on-the-job training and experience working with diverse and difficult data sets. A primary function of this textbook is to normalize understanding across the community and inform emerging intelligence professionals of the latest advances in data analysis and visual analytics.

All of the application examples in this textbook are derived from information entirely in the public domain. Some of these examples have corollaries to intelligence operations and intelligence functions. Some are merely interesting applications of the basic principles of ABI to other fields where multisource correlation, patterns of life, and anticipatory analytics are commonplace. Increasingly, commercial companies are using similar “big data analytics” to understand patterns, resolve unknowns, and anticipate what may happen.

1.6 Suggested Readings

Readers unfamiliar with intelligence analysis, the disciplines of intelligence, and the U.S. intelligence community are encouraged to review the following texts before delving deep into the world of ABI:
• Lowenthal, Mark M., Intelligence: From Secrets to Policy. Lowenthal’s legendary text is the premier introduction to the U.S. intelligence community, the primary principles of intelligence, and the intelligence relationship to policy. The frequently updated text has been expanded to include Lowenthal’s running commentary on various policy issues including the Obama administration, intelligence reform, and Wikileaks. Lowenthal, once the assistant director of analysis at the CIA and vice chairman for evaluation of the National Intelligence Council, is the ideal intellectual mentor for an early intelligence professional.
• George, Roger Z., and James B. Bruce, Analyzing Intelligence: Origins, Obstacles, and Innovations. This excellent introductory text by two Georgetown University professors is the most comprehensive text on analysis currently in print. It provides an overview of analysis tradecraft and how analysis is used to produce intelligence, with a focus on all-source intelligence.
• Heuer, Richards J., The Psychology of Intelligence Analysis. This book is required reading for intelligence analysts and documents how analysts think. It introduces the method of analysis of competing hypotheses (ACH) and deductive reasoning, a core principle of ABI.
• Heuer, Richards J., and Randolph H. Pherson, Structured Analytic Techniques for Intelligence Analysis. An extension of Heuer’s previous work, this is an excellent handbook of techniques for all-source analysts. Their techniques pair well with the spatiotemporal analytic methods discussed in this text.
• Waltz, Edward, Quantitative Intelligence Analysis: Applied Analytic Models, Simulations, and Games. Waltz’s highly detailed book describes modern modeling techniques for intelligence analysis. It is an essential companion text to many of the analytic methods described in Chapters 12–16.

2
ABI History and Origins

Over the past 15 years, ABI has entered the intelligence vernacular. Former NGA director Letitia Long said it is “a new foundation for intelligence analysis, as basic and as important as photographic interpretation and imagery analysis became during World War II.”

2.1 Wartime Beginnings
ABI methods have been compared to many other disciplines including submarine hunting and policing, but the modern concepts of ABI trace their roots to the Global War on Terror. According to Long, “Special operations led the development of GEOINT-based multi-INT fusion techniques on which ABI is founded.”

2.2 OUSD(I) Studies and the Origin of the Term ABI
During the summer of 2008 the technical collection and analysis (TCA) branch within the OUSD(I) determined the need for a document defining “persistent surveillance” in support of irregular warfare. The initial concept was a “pamphlet” that would briefly define persistence and expose the reader to the various surveillance concepts that supported this persistence. U.S. Central Command, the combatant command with assigned responsibility throughout the Middle East, expressed interest in using the pamphlet as a training aid and as a means to get its components to use the same vocabulary.

ABI was formally defined by the now widely circulated “USD(I) definition”:
A discipline of intelligence, where the analysis and subsequent collection is focused on the activity and transactions associated with an entity, a population, or an area of interest.

There are several key elements of this definition. First, OUSD(I) sought to define ABI as a separate discipline of intelligence like HUMINT or SIGINT: SIGINT is to the communications domain as activity-INT is to the human domain. Recognizing that the INTs are defined by an act of Congress, this definition was later softened into a “method” or “methodology.”
The definition recognizes that ABI is focused on activity (composed of events and transactions, further explored in Chapter 4) rather than a specific target. It introduces the term entity, but also recognizes that analysis of the human domain could include populations or areas, as recognized by the related study called “human geography.”

Finally, the definition makes note of analysis and subsequent collection, also sometimes referred to as analysis driving collection. This emphasizes the importance of analysis over collection—a dramatic shift from the traditional collection-focused mindset of the intelligence community. To underscore the shift in focus from targets to entities, the paper introduced the topic of “human domain analytics.”

2.3 Human Domain Analytics
Human domain analytics is the global understanding of anything associated with people. The human domain provides the context and understanding of the activities and transactions necessary to resolve entities in the ABI method.

Human domain data falls into four categories:
• The first is biographical information, or “who they are.” This includes information directly associated with an individual.
• The second data type is activities, or “what they do.” This data category associates specific actions to an entity.
• The third data category is relational, or “who they know,” the entities’ family, friends, and associates.
• The final data category is contextual (metaknowledge), which is information about the context or the environment in which the entity is found.

Examples include most of the information found within sociocultural/human terrain studies. Taken in total, these data categories support ABI analysts in the analysis of entities, identity resolution of unknown entities, and the placement of entities’ actions in a social context.

2.5 ABI-Enabling Technology Accelerates

In December 2012, BAE Systems was awarded a multiyear $60-million contract to provide “ABI systems, tools, and support for mission priorities” under the agency’s total application services for enterprise requirements (TASER) contract [13]. While these technology developments would bring new data sources to analysts, they also created confusion as the tools became conflated with the analytical methodology they were designed around. The phrase “ABI tool” would be attached to M111 and its successor program awarded under TASER.

2.6 Evolution of the Terminology
The term ABI and its four pillars were first introduced to the unclassified community during an educational session hosted by the U.S. Geospatial Intelligence Foundation (USGIF) at the GEOINT Symposium in 2010, but the term was introduced broadly in comments by Director of National Intelligence (DNI) Clapper and NGA director Long in their remarks at the 2012 symposium [14, 15].

As wider intelligence community efforts to adapt ABI to multiple missions took shape, the definition of ABI became generalized and evolved to a broader perspective as shown in Table 2.1. NGA’s Gauthier described it as “a set of methods for discovering patterns of behavior by correlating activity data at network speed and enormous scale” [16, p. 1]. It was also colloquially described by Gauthier and Long as, “finding things that don’t want to be found.”

2.7 Summary
Long described ABI as “the most important intelligence methodology of the first quarter of the 21st century,” noting the convergence of cloud computing technology, advanced tracking algorithms, inexpensive data storage, and revolutionary tradecraft that drove adoption of the methods [1].

3
Discovering the Pillars of ABI
The basic principles of ABI have been categorized as four fundamental “pillars.” These simple but powerful principles were developed by practitioners by cross-fertilizing best practices from other disciplines and applying them to intelligence problems in the field. They have evolved and solidified over the past five years as a community of interest developed around the topic. This chapter describes the origin and practice of the four pillars: georeference to discover, data neutrality, sequence neutrality, and integration before exploitation.

3.1 The First Day of a Different War
The U.S. intelligence community, and most of the broader U.S. and western national security apparatus, was created to fight—and is expertly tuned for—the bipolar, state-centric conflict of the Cold War. Large states with vast bureaucracies and militaries molded in their image dominated the geopolitical landscape.

3.2 Georeference to Discover: “Everything Happens Somewhere”
Georeference to discover is the foundational pillar of ABI. It was derived from the simplest of notions but proves that simple concepts have tremendous power in their application.

Where activity happens—the spatial component—is the one aspect of these diverse data that is (potentially) common. The advent of the global positioning system (GPS)—and perhaps most importantly for the commercial realm, the deactivation of a mode called “selective availability”—has moved precisely capturing “where things happen” from the realm of science fiction to the reality of day-to-day living. With technological advances, location has become knowable.

3.2.1 First-Degree Direct Georeference
The most straightforward of these is direct georeferencing, in which machine-readable geospatial content, in the form of a coordinate system or known cadastral system, is present in the metadata of a piece of information. An example is the metadata (simply, “data about data”) of a photo: a GPS-enabled handheld camera or cell phone might embed a series of GPS coordinates in degrees-minutes-seconds format.
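As a sketch of what makes such metadata machine-readable, degrees-minutes-seconds values like these can be converted to the decimal degrees a GIS expects. The coordinate values below are hypothetical, not drawn from any real photo:

```python
def dms_to_decimal(degrees, minutes, seconds, hemisphere):
    """Convert a degrees-minutes-seconds GPS reading to signed decimal degrees."""
    value = degrees + minutes / 60.0 + seconds / 3600.0
    # Southern and western hemispheres are negative by convention
    return -value if hemisphere in ("S", "W") else value

# Hypothetical EXIF-style GPS metadata from a photo
lat = dms_to_decimal(64, 30, 0.0, "N")
lon = dms_to_decimal(165, 24, 0.0, "W")
```

Once in decimal form, the coordinates can be indexed and correlated like any other spatial attribute.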

3.2.2 First-Degree Indirect Georeference
By contrast, indirect georeferencing contains spatial information in non-machine-readable content, not ready for ingestion into a GIS.

An example of a metadata-based georeference in the same context would be a biographical profile of John Smith with the metadata tag “RESIDENCE: NOME, ALASKA.”

3.2.3 Second-Degree Georeference
Further down the georeferencing rabbit hole is the concept of a second-degree georeference. This is a special case of georeferencing where the content and metadata contain no first-degree georeferences, but analysis of the data in its context can provide a georeference.

For example, a poem about a beautiful summer day might not contain any first-degree georeferences, as it describes only a generic location. By reconsidering the poem as the “event” of “poem composition,” a georeference can be derived. Because the poet lived at a known location, and the date of the poem’s composition is also known, the “poem composition event” occurred at “the poet’s house” on “the date of composition,” creating a second-degree georeference for the poem [5].
The concept of second-degree georeferencing is how we solve the vexing problem of data that does not appear, at first glance, to be “georeferenceable.” The above example shows how, by deriving events from data, we can identify activity that is more easily georeferenceable. This is one of the strongest responses to critics of the ABI methodology who argue that much, if not most, data does not lend itself to the georeferencing and overall data-conditioning process.
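The poem example can be sketched in code. Everything here is an assumption for illustration: the gazetteer, the field names, and the coordinate values are invented, standing in for whatever contextual knowledge base an analyst actually holds:

```python
# Hypothetical gazetteer of known entity locations (lat, lon)
known_locations = {"poet_house": (41.39, 2.17)}

def derive_composition_event(doc, gazetteer):
    """Derive a georeferenced 'composition event' from context:
    the author's known residence plus the document's composition date."""
    lat, lon = gazetteer[doc["author_residence"]]
    return {"event": "composition", "lat": lat, "lon": lon, "date": doc["date"]}

poem = {"title": "A Summer Day", "author_residence": "poet_house", "date": "1923-06-14"}
event = derive_composition_event(poem, known_locations)
```

The poem itself never gains a coordinate; the derived event does, which is what makes it usable in a spatial data environment.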

3.3 Discover to Georeference Versus Georeference to Discover
It is also important to contrast the philosophy of georeference to discover with the more traditional mindset of discover to georeference. Discover to georeference is a concept often not given a name but aptly describes the more traditional approach to geographically referencing information. This traditional process, based on keyword, relational, or Boolean-type queries, is illustrated in Figure 3.2. Often, the georeferencing that occurs in this process is manual, done via copy-paste from free text documents accessible to analysts.

With discover to georeference, the first question that is asked, often unconsciously, is, “This is an interesting piece of information; I should find out where it happened.” It can also be described as “pin-mapping,” based on the process of placing pins in a map to describe events of interest. The key difference is the a priori decision that a given event is relevant or irrelevant before the process of georeferencing begins.

Using the pillar of georeference to discover, the act of georeferencing is an integral part of the act of processing data, through either first- or second-degree attributes. It is the first step of the ABI analytic process and begins before the analyst ever looks at the data.

The act of georeferencing creates an inherently spatial and temporal data environment in which ABI analysts spend the bulk of their time, identifying spatial and temporal co-occurrences and examining said co-occurrences to identify correlations. This environment naturally leads the analyst to seek more sources of data to improve correlations and subsequent discovery.
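The co-occurrence search described above can be sketched as a brute-force pass over georeferenced events. The event records, identifiers, and thresholds below are hypothetical; a real environment would use spatial indexing rather than pairwise comparison:

```python
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def co_occurrences(events, max_km=0.5, max_gap=timedelta(minutes=30)):
    """Return pairs of events that co-occur in both space and time."""
    pairs = []
    for i, a in enumerate(events):
        for b in events[i + 1:]:
            if (abs(a["t"] - b["t"]) <= max_gap
                    and haversine_km(a["lat"], a["lon"], b["lat"], b["lon"]) <= max_km):
                pairs.append((a["id"], b["id"]))
    return pairs

# Hypothetical multisource events: two nearby in space and time, one distant
events = [
    {"id": "sigint-1", "lat": 34.5201, "lon": 69.1959, "t": datetime(2010, 5, 1, 9, 0)},
    {"id": "imint-7",  "lat": 34.5205, "lon": 69.1961, "t": datetime(2010, 5, 1, 9, 10)},
    {"id": "humint-3", "lat": 34.9000, "lon": 69.9000, "t": datetime(2010, 5, 1, 9, 5)},
]
```

Here only the first two events correlate; the third is close in time but tens of kilometers away, so it falls out of the candidate set.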

3.4 Data Neutrality: Seeding the Multi-INT Spatial Data Environment

Data neutrality is the premise that all data may be relevant regardless of the source from which it was obtained. This is perhaps the most overlooked of the pillars of ABI because it is so simple as to be obvious. Some may dismiss this pillar as not important to the overall process of ABI, but it is central to the need to break down the cultural and institutional barriers between INT-specific “stovepipes” and consider all possible sources for understanding entities and their activities.

As the pillars were being developed, the practitioners who helped to write much of ABI’s lexicon spoke of data neutrality as a goal instead of a consequence. The importance of this distinction will be explored below, as it relates to the first pillar of georeference to discover.

Imagine again you are the analyst described in the prior section. In front of you is a spatial data environment in your GIS consisting of data obtained from many different sources of information, everything from reports from the lowest level of foot patrols to data collected from exquisite national assets. This data is represented as vectors: dots and lines (events and transactions) on your map. As you begin to correlate data via spatial and temporal attributes, you realize that data is data, and no one data source is necessarily favored over the others. The second pillar of ABI serves to then reinforce the importance of the first and naturally follows as a logical consequence.

Given that the act of data correlation is a core function of ABI, the conclusion that there can never be “too much” data is inevitable. “Too much,” in the inexact terms of an analyst, often means “more than I have the time, inclination, or capacity to understand,” but more often it means “data that is not in a format conducive to examination in a single environment.” This becomes an important feature in understanding the data discovery mindset.

As the density of data increases, the necessity for smart technology for attribute correlation becomes a key component of the technical aspects of ABI. This challenge is exacerbated by the fact that some data sources include inherent uncertainty and must be represented by fuzzy boundaries, confidence bands, spatial polygons, ellipses, or circles representing circular error probability (CEP).
The spatial and temporal environment provides two of the three primary data filters for the ABI methodology: correlation on location and correlation on attributes. Attribute-based correlation becomes important to rule out false-positive correlations that have occurred solely based on space and time.
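Attribute-based screening of a space-time match can be sketched as a simple consistency check. The attribute names here are hypothetical examples, not part of any formal ABI schema; the rule is that a shared attribute with conflicting values rules out the correlation, while a missing attribute does not:

```python
def attribute_consistent(a, b, keys=("vehicle_color", "direction")):
    """Reject a space-time correlation when any shared attribute conflicts.
    Attributes absent from either record carry no evidence either way."""
    for k in keys:
        if k in a and k in b and a[k] != b[k]:
            return False
    return True

obs1 = {"vehicle_color": "white", "direction": "north"}
obs2 = {"vehicle_color": "white"}                        # partial record: no conflict
obs3 = {"vehicle_color": "blue", "direction": "north"}   # conflicting color
```

A pair like obs1/obs3 would be discarded as a false positive even if the two observations coincided in space and time.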

The nature of many data sources almost always requires human judgment regarding correlation across multiple domains or sources of information. Machine learning continues to struggle with these judgments, especially as it is difficult to describe the intangible context in which potential correlations occur.

Part of the importance of the data neutrality mindset is realizing the unique perspective that analysts bring to data analysis; moreover, this perspective cannot be easily realized in one type of analyst but is at its core the product of different perspectives collaborating on a similar problem set. This syncretic approach to analysis was central to the revolution of ABI, with technical analysts from two distinct intelligence disciplines collaborating and bringing their unique perspectives to their counterparts’ data sets.

3.5 Integration Before Exploitation: From Correlation to Discovery
The traditional intelligence cycle is a process often referred to as tasking, collection, processing, exploitation, and dissemination (TCPED).

TCPED is a familiar concept to intelligence professionals working in various technical disciplines who are responsible for making sense out of data in domains such as SIGINT and IMINT. Although often depicted as a cycle as shown in Figure 3.4, the process is also described as linear.

From a philosophical standpoint, TCPED makes several key assumptions:

• The ability to collect data is the scarcest resource, which implies that tasking is the most critical part of the data exploitation process. The first step of the process begins with tasking against a target, which assumes the target is known a priori.
• The most efficient way to derive knowledge in a single domain is through focused analysis of data, generally to the exclusion of specific contextual data.
• All data that is collected should be exploited and disseminated.

The limiting factor for CORONA missions was the number of images that could be taken by the satellite. In this model, tasking becomes supremely important: There are many more potential targets than can be imaged on a single roll of film. Because satellite imaging in the CORONA era was a constrained exercise, processes were put in place to vet, validate, and rank-order tasking through an elaborate bureaucracy.

The other reality of phased exploitation is that it was a product of an adversary with signature and doctrine that, while not necessarily known, could be deduced or inferred over repeated observations. Large, conventional, doctrine-driven adversaries like the Soviet Union not only had large signatures, but their observable activities played out over a time scale that was easily captured by infrequent, scheduled revisit with satellites like CORONA. Although they developed advanced denial and deception techniques employed against imaging systems, both airborne and national, their large, observable activities were hard to hide.

But where is integration in this process? There is no “I,” big or small, in TCPED. Rather, integration was a subsequent step conducted very often by completely different analysts.

In today’s era of reduced observable signatures, fleeting enemies, and rapidly changing threat environments, integration after exploitation is seldom timely enough to provide decision advantage. The traditional concept of integration after exploitation, where finished reporting is only released when it exceeds the single-INT reporting threshold, is shown in Figure 3.6. This approach not only suffers from a lack of timeliness but also is limited by the fact that only information deemed significant within a single-INT domain (without the contextual information provided by other INTs) is available for integration. For this reason, the single-INT workflows are often derisively referred to by intelligence professionals as “stovepipes” or as “stovepiped exploitation.”

While “raw” is a loaded term with specific meanings in certain disciplines and collection modalities, the theory is the same: The data you find yourself georeferencing, from any source you can get your hands on, is data that very often has not made it into the formal intelligence report preparation and dissemination process. It is a very different kind of data, one for which the existing processes of TCPED and the intelligence cycle are inexactly tuned. Much of this information is well below the single-INT reporting threshold in Figure 3.6, but data neutrality tells us that while the individual pieces of information may not exceed the domain thresholds, the combined value of several pieces in an integrated review may not only exceed reporting thresholds but could reveal unique insight to a problem that would be otherwise undiscoverable to the analyst.

TCPED is a dated concept because of its inherent emphasis on the tasking and collection functions. The mindset that collection is a limited commodity influences and biases the gathering of information by requiring analysts to decide a priori what is important. This is inconsistent with the goals of the ABI methodology. Instead, ABI offers a paradigm more suited to a world in which data has become not a scarcity but a commodity: the relative de-emphasis of tasking collection in favor of a new emphasis on the tasking of analysis and exploitation.

The result of being awash in data is that no longer can manual exploitation processes scale. New advances in collection systems like the constellation of small satellites proposed by Google’s Skybox will offer far more data than even a legion of trained imagery analysts could possibly exploit. There are several solutions to this problem of “drowning in data”:

• Collect less data (or perhaps, less irrelevant data and more relevant data);
• Integrate data earlier, using correlations to guide labor-intensive exploitation processes;
• Use smart technology to move techniques traditionally deemed “exploitation” into the “processing” stage.

These three solutions are not mutually exclusive, though note that the first two represent philosophically divergent viewpoints on the problem of data. ABI naturally chooses both the second and third solution. In fact, ABI is one of a small handful of approaches that actually becomes far more powerful as the represented data volume of activity increases because of the increased probability of correlations.

The analytic process emphasis in ABI also bears resemblance to the structured geospatial analytic method (SGAM), first posited by researchers at Penn State University.

Foraging, then, is not only a process that analysts use but also an attitude embedded in the analytical mindset: Foraging is continual, spanning specific lines of inquiry and evolving beyond the boundaries of specific questions, turning the “foraging process” into a consistent quest for more data.

Another implication is precisely where in the data acquisition chain an ABI analyst should ideally be placed. Rather than putting integrated analysis at the end of the TCPED process, this concept argues for placing the analyst as close to the data collection point (or point of operational integration) as possible. While this differs greatly for tactical missions versus strategic missions, the result of placing the analyst as close to the data acquisition and processing components is clear: The analyst has additional opportunities not only to acquire new data but affect the acquisition and processing of data from the ground up, making more data available to the entire enterprise through his or her individual efforts.

3.6 Sequence Neutrality: Temporal Implications for Data Correlation
Sequence neutrality is perhaps the least understood and most complex of the pillars of ABI. The first three pillars are generally easily understood after a sentence or two of explanation (though they have deeper implications for the analytical process as we continually explore their meaning). Sequence neutrality, on the other hand, forces us to consider—and in many ways, reconsider—the implications of temporality with regard to causality and causal reasoning. As ABI moves data analysis to a world governed by correlation rather than causation, the specter of causation must be addressed.

In epistemology, this concept is described as narrative fallacy. Nassim Taleb, in his 2007 work The Black Swan, explains it as “[addressing] our limited ability to look at sequences of facts without weaving an explanation into them, or, equivalently, forcing a logical link, an arrow of relationship upon them. Explanations bind facts together. They make them all the more easily remembered; they help them make more sense” [12]. What is important in Taleb’s statement is the concept of sequence: Events occur in order, and we weave a logical relationship around them.

As events happen in sequence, we chain them together even given our limited perspective on the accuracy with which those events represent reality. When assessing patterns—true patterns, not correlations—in single-source data sets, time proves to be a useful filter, presuming that the percentage of the “full” data set represented remains relatively consistent. As we introduce additional data sets, the potential gaps multiply, causing uncertainty to increase exponentially. In intelligence, where many data sets are acquired in an adversarial rather than cooperative fashion (as opposed to traditional civil GIS approaches, or even crime-mapping approaches), this concept becomes so important that it is given a name: sparse data.

You are integrating the data well before stovepiped exploitation and have created a data-neutral environment in which you can ask complex questions of the data. This enables and illuminates a key concept of sequence neutrality: The data itself drives the kinds of questions that you ask. In this way, we express a key component of sequence neutrality as “understanding that we have the answers to many questions we do not yet know to ask.”

The corollary to this realization is the importance of forensic correlation versus linear-forward correlation. If we have the answers to many questions in our spatial-temporal data environment, it then follows logically that the first place to search for answers—to search for correlations—is in the data environment we have already created. Since the data environment is based on what has already been collected, the resultant approach is necessarily forensic. Look backward, before looking forward.

From card catalogs and libraries we have moved to search algorithms and metadata, allowing us as analysts to quickly and efficiently employ a forensic, research-based approach to seeking correlations.

As software platforms evolved, more intuitive time-based filtering was employed, allowing analysts to easily “scroll through time.” As with many technological developments, however, there was also a less obvious downside related to narrative fallacy and event sequencing: The time slider allowed analysts to see temporally referenced data occur in sequence, reinforcing the belief that because certain events happened after other events, they may have been caused by them. It also made it easy to temporally search for patterns in data sets: useful again in single data sets, but potentially highly misleading in multisourced data sets due to the previously discussed sparse data problem. Sequence neutrality, then, is not only an expression of the forensic mindset but a statement of warning to the analyst to consider the value of sequenced versus nonsequenced approaches to analysis. Humans have an intuitive bias to see causality when there is only correlation. We caution against use of advanced analytic tools without the proper training and mindset adjustment.
3.6.1 Sequence Neutrality’s Focus on Metadata: Section 215 and the Bulk Telephony Metadata Program Under the USA Patriot Act

By positing that all data represents answers to certain questions, sequence neutrality compels us to collect and preserve as much data as possible, limited only by storage space and cost. It also calls for the creation of indexes within supermassive data sets, allowing us to zero in on key attributes of data that may represent only a fraction of the total data size.
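A toy illustration of such an index: an inverted index over one metadata attribute lets a forensic query touch only the handful of records that match, rather than scanning the whole store. The records and attribute names below are invented for illustration:

```python
from collections import defaultdict

def build_index(records, key):
    """Build an inverted index on one metadata attribute, mapping each
    attribute value to the positions of the records that carry it."""
    index = defaultdict(list)
    for i, rec in enumerate(records):
        index[rec[key]].append(i)
    return index

# Hypothetical call-record metadata
records = [
    {"caller": "A", "callee": "B"},
    {"caller": "C", "callee": "A"},
    {"caller": "A", "callee": "D"},
]
by_caller = build_index(records, "caller")
```

A query for caller “A” now returns record positions directly, which is the essence of indexing key attributes within a much larger data set.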

A controversial provision of the USA PATRIOT Act, Section 215, allows the director of the Federal Bureau of Investigation (or designee) to seek access to “certain business records,” which may include “any tangible things (including books, records, papers, documents, and other items) for an investigation to protect against international terrorism or clandestine intelligence activities, provided that such investigation of a United States person is not conducted solely upon the basis of activities protected by the first amendment to the Constitution.”

3.7 After Next: From Pillars, to Concepts, to Practical Applications
The pillars of ABI represent the core concepts, as derived by the first practitioners of ABI. Rather than a framework invented in a classroom, the pillars were based on the actual experiences of analysts in the field, working with real data against real mission sets. It was in this environment, forged by the demands of asymmetric warfare and enabled by accelerating technology, in which ABI emerged as one of the first examples of data-driven intelligence analysis approaches, focused primarily on spatial and temporal correlation as a means to discover.

The foraging-sensemaking, data-centric, sequence-neutral analysis paradigm of ABI conflicts with the linear-forward, TCPED-centric approaches used in “tipping-cueing” constructs. The tip/cue concept slews (moves) sensors to observed or predicted activity based on forecasted sensor collection, accelerating warning on known knowns. This concept ignores the wealth of known data about the world in favor of simple additive constructs that, if not carefully understood, risk biasing analysts with predetermined conclusions from arrayed collection systems.

While some traditional practitioners are uncomfortable with the prospect of “releasing unfinished intelligence,” the ABI paradigm—awash in data—leverages the power of “everything happens somewhere” to discover the unknown. As a corollary, when many things happen in the same place and time, this is generally an indication of activity of interest. Correlation across multiple data sources improves our confidence in true positives and eliminates false positives.

4
The Lexicon of ABI

The development of ABI also included the development of a unique lexicon, terminology, and ontology to accompany it.

Activity data “comprises physical actions, behaviors, and information received about entities. The focus of analysis in ABI, activity is the overarching term used for ‘what entities do.’ Activity can be subdivided into two types based on its accompanying metadata and analytical use: events and transactions”.

4.1 Ontology for ABI
One of the challenges of intelligence approaches for the data-rich world that we now live in is integration of data.

As the diversity of data increased, analysts were confronted with the problem that most human analysts deal with today: How does one represent diverse data in a common way?

An ontology is the formal naming and documentation of interrelationships between concepts and terms in a discipline. Established fields like biology and telecommunications have well-established standards and ontologies. As the diversity of data and the scope of a discipline increases, so does the complexity of the ontology. If the ontology becomes too rigid and requires too many committee approvals to adapt to change, it cannot easily account for new data types that emerge as technology advances.
Moreover, with complex ontologies for data, complex environments are required for analysis, and it becomes extraordinarily difficult to correlate and connect data (to say nothing of conclusions derived from data correlations) in any environment other than a pen-to-paper notebook or a person’s mind.

4.2 Activity Data: “Things People Do”
The first core concept that “activity” data reinforces is the idea that ABI is ultimately about people, which, in ABI, we primarily refer to as “entities.”

Activity in ABI is information relating to “things people do.” While this is perhaps a simplistic explanation, it is important to the role of ABI. In ABI parlance, activities are not about places or equipment or objects.

4.2.1 “Activity” Versus “Activities”
The vernacular and book title use the term “activity-based intelligence,” but in early discussions, the phrase was “activities-based intelligence.” Activities are the differentiated, atomic, individual activities of entities (people). Activity is a broad construct to describe aggregated activities over space and time.

4.2.2 Events and Transactions

The definition in the introduction to this chapter defined activity data as “physical actions, behaviors, and information received about entities” but also divided activity data into two categories: events and transactions. These types are distinguished based on their metadata and utility for analysis. To limit the scope of the ABI ontology (translation: to avoid making an ontology that describes every possible action that could be performed by every possible type of entity), we specifically categorize all activity data into either an event or transaction based on the metadata that accompanies the data of interest.
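The event/transaction split can be sketched as a minimal two-class ontology in which the accompanying metadata decides the category. The field names below are assumptions for illustration, not the formal ABI ontology:

```python
from dataclasses import dataclass

@dataclass
class Event:
    """Activity registered to a location: spatial metadata is primary."""
    entity: str
    action: str
    lat: float
    lon: float
    time: str

@dataclass
class Transaction:
    """An exchange between two entities with a finite beginning and end:
    temporal metadata is primary."""
    source: str
    target: str
    start: str
    end: str

def classify(record):
    """Sort raw activity data into the two ontology categories by the
    metadata it carries: entity-to-entity exchanges become transactions."""
    if "source" in record and "target" in record:
        return Transaction(record["source"], record["target"],
                           record["start"], record["end"])
    return Event(record["entity"], record["action"],
                 record["lat"], record["lon"], record["time"])
```

Keeping the ontology this small is deliberate: every possible action collapses into one of two categories, so new data types need no committee approval to fit.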

A person living in a residence provides a very different kind of event, one that is far less specific. While a residential address or location can also be considered biographical data, the fact of a person living in a specific place is treated as an event because of its spatial metadata component.
In all three examples, spatial metadata is the most important component.

The concept of analyzing georeferenced events is not specific to military or intelligence analysis. The GDELT project maintains a 100% free and open database of 300 kinds of events using data in over 100 languages, with daily updates from January 1, 1979, to the present. The database contains over 400 million georeferenced data points.

Characterization is an important concept because it can sometimes appear as if we are using events as a type of context. In this way, activities can characterize other activities. This is important because most activity conducted by entities does not occur in a vacuum; it occurs simultaneously with activities conducted by different entities that occur in either the same place or time—and sometimes both.

Events that occur in close proximity provide us an indirect way to relate entities together based on individual data points. There is, however, a more direct way to relate entities together through the second type of activity data: transactions.

4.2.3 Transactions: Temporal Registration

Transactions in ABI provide us with our first form of data that directly relates entities. A transaction is defined as “an exchange of information between entities (through the observation of proxies) and has a finite beginning and end”. This exchange of information is essentially the instantaneous expression of a relationship between two entities. This relationship can take many forms, but it exists for at least the duration of the transaction.

Transactions are of supreme importance in ABI because they represent relationships between entities. Transactions are typically observed between proxies, or representations of entities, and are therefore indirect representations of the entities themselves.
For example, police performing a stakeout of a suspect’s home may not observe the entity of interest, but they may follow his or her vehicle. The vehicle is a proxy. The departure from the home is an event. The origin-destination motion of the vehicle is a transaction. Analysts use transactions to connect entities and locations together, depending on the type of transaction.

Transactions come in two major subtypes: physical transactions and logical transactions. Physical transactions are exchanges that occur primarily in physical space, or, in other words, the real world.

Logical transactions represent the other major subtype of transaction. These types of transactions are easier to join directly to proxies for entities (and therefore, the entities themselves) because the actual transaction occurs in cyberspace as opposed to physical space.
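The event/transaction distinction, including the physical/logical split for transactions, can be captured in a minimal data model. The class and field names below are illustrative assumptions, not part of any formal ABI ontology:

```python
from dataclasses import dataclass
from datetime import datetime
from enum import Enum

class TransactionKind(Enum):
    PHYSICAL = "physical"  # occurs in physical space (e.g., a vehicle trip)
    LOGICAL = "logical"    # occurs in cyberspace (e.g., a phone call, an email)

@dataclass
class Event:
    """A single georeferenced action or observation tied to an entity's proxy."""
    proxy: str       # observable identifier standing in for the entity
    lat: float
    lon: float
    when: datetime   # spatial and temporal metadata drive the 'event' category

@dataclass
class Transaction:
    """An exchange of information between two entities (via their proxies),
    with a finite beginning and end."""
    source_proxy: str
    target_proxy: str
    kind: TransactionKind
    start: datetime
    end: datetime

call = Transaction("phone-A", "phone-B", TransactionKind.LOGICAL,
                   datetime(2021, 5, 1, 9, 0), datetime(2021, 5, 1, 9, 4))
print(call.kind.value)  # → logical
```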

4.2.4 Event or Transaction? The Answer is (Sometimes) Yes

Defining data as either an event or a transaction is as much a function of understanding its role in the analytical process as it is about recognizing present metadata fields and “binning” the data into one of two large groups. Consequently, there are certain data types that can be treated as both events and transactions depending on the circumstances and analytical use.

4.3 Contextual Data: Providing the Backdrop to Understand Activity

One of the important points to understand with regard to activity data is that its full meaning is often unintelligible without understanding the context in which observed activity is occurring.

“Contextualization is crucial in transforming senseless data into real information.”

Activity data in ABI is the same: To understand it fully, we must understand the context in which it occurs, and context is a kind of data all unto itself.
There are many different kinds of contextual data.

Activity data and contextual data help analysts understand the nature of events and transactions—and sometimes even anticipate what might happen.

4.4 Biographical Data: Attributes of Entities

Biographical data provides information about an entity: name, age, date of birth, and other similar attributes. Because ABI is as much about entities as it is about activity, considering the types of data that apply specifically to entities is extremely important. Biographical data provides analysts with context to understand the meaning of activity conducted between entities.

The process of entity resolution (fundamentally, disambiguation) enables analysts to understand additional biographical information about entities.

Police departments, intelligence agencies, and even private organizations have long desired to understand specific details about individuals; what, then, makes ABI a fundamentally different analytic methodology? The answer is in the relationship of this biographical data to the events and transactions described in Sections 4.2.2–4.2.4 and the fusion of different data types across the ABI ontology at speed and scale.

Unlike in more traditional approaches, wherein analysts might start with an individual of interest and attempt to “fill out” the baseball card, ABI starts with the events and transactions (activity) of many entities, ultimately attempting to narrow down to specific individuals of interest. This is one of the techniques that ABI uses to conquer the problem of unknown individuals in a network, which guards against the possibility that the most important entities might be ones that are completely unknown to individual analysts.

The final piece of the “puzzle” of ABI’s data ontology is relating entities to each other—but unlike transactions, we begin to understand generalized links and macronetworks. Fundamentally, this is relational data.

4.5 Relational Data: Networks of Entities
Entities do not exist in vacuums.

Considering the context of relationships between entities is also of extreme importance in ABI. Relational data tells us about an entity’s relationships to other entities, through formal and informal institutions, social networks, and other means.

Initially, it is difficult to differentiate relational data from transaction data. Both data types are fundamentally about relating entities together; what, therefore, is the difference between the two?

The answer is that one type—transactions—represents specific expressions of a relationship, while the other type—relational data—is the generalized data based on both biographical data and activity data relevant to specific entities.

The importance of understanding general relationships between entities cannot be overstated; it is one of several effective ways to contextualize specific expressions of relationships in the form of transactions. Traditionally, this process would simply use specific data to form general conclusions (an inductive process, explored in Chapter 5). In ABI, however, deductive and abductive processes are preferred (whereby the general informs our evaluation of the specific). In the context of events and transactions, our understanding of the relational networks pertinent to two or more entities can help us determine whether connections between events and transactions are the product of mere coincidence (or density of activity in a given environment) or the product of a relationship between individuals or networks.

Social network analysis (SNA) can be an important complementary approach to ABI, but each focuses on different aspects of data and seeks a fundamentally different outcome; the two are not duplicative approaches.

What ABI and SNA share, however, is an appreciation for the importance of understanding entities and relationships as a method for answering particular types of questions.

4.6 Analytical and Technological Implications

Relational and biographical information regarding entities is supremely important for contextualizing events and transactions, but unlike earlier approaches to analysis and traditional manhunting, focusing on specific entities from the outset is not the hallmark innovation of ABI.

5
Analytical Methods and ABI

Over the past five years, the intelligence community and its analytic corps have adopted the term ABI and ABI-like principles into their analytic workflows. While the methods have been easily adopted by those new to the field—especially “digital natives” with strong analytic credentials from their everyday lives—traditionalists have been confused about the nature of this intelligence revolution.

5.1 Revisiting the Modern Intelligence Framework

John Hollister Hedley, a long-serving CIA officer and editor of the President’s Daily Brief (PDB), outlines three broad categories of intelligence: 1) strategic or “estimative” intelligence; 2) current intelligence; and 3) basic intelligence.

“Finished” intelligence continues to be the frame around which much of today’s intelligence literature is constructed.

Our existing intelligence framework needs expansion to account for ABI and other methodologies sharing similar intellectual approaches.

5.2 The Case for Discovery
In an increasingly data-driven world, the possibility of analytical methods that do not square with our existing categories of intelligence seems inevitable. The authors argue that ABI is the first of potentially many methods that belong in this category, which can be broadly labeled as “discovery,” sitting equally alongside current intelligence, strategic intelligence, and basic intelligence.

What characterizes discovery? Most intelligence analysts, many of whom are naturally inquisitive, already conduct aspects of discovery instinctively as they go about their day-to-day jobs. But there has been a growing chorus of concerns from both the analytical community and IC leadership that intelligence production has become increasingly driven by specific tasks and questions posed by policymakers and warfighters. In part, this is understandable: If policymakers and warfighters are the two major customer sets served by intelligence agencies, then it is natural for these agencies to be responsive to the perceived or articulated needs of those customers. However, responsiveness to articulated needs does not encompass the potential to identify correlations and issues previously unknown or poorly understood by consumers of intelligence production. This is where discovery comes in: the category of intelligence primarily focused on identifying relevant and previously unknown potential information to provide decision advantage in the absence of specific requirements to do so.

Institutional innovation often implicitly assumes that the desire to innovate is equally distributed across a given employee population. This egalitarian model of innovation, however, is belied by research showing that creativity is concentrated in certain segments of the population.

If “discovery” in intelligence is similar to “innovation” in technology, one consequence is that the desire to perform—and success at performing—“discovery” is unequally distributed across the population of intelligence analysts, and that different analysts will want to (and be able to) spend different amounts of time on “discovery.” Innovation is about finding new things based on a broad understanding of needs but without specific subtasks or requirements.

ABI is one set of methods under the broad heading of discovery, but other methods—some familiar to the world of big data—also fit under the heading. ABI’s focus on spatial and temporal correlation for entity resolution through disambiguation is a specific set of methods designed for the specific problem of human networks.

Data neutrality’s application puts information gathered from open sources and social media up against information collected through clandestine and technical means. Rather than biasing analysis in favor of traditional sources of intelligence data, social media data is brought into the fold without establishing a separate exploitation workflow.

One of the criticisms of the Director of National Intelligence (DNI) Open Source Center, and of the creation of OSINT as another domain of intelligence, was that it effectively served to create another stovepipe within the intelligence world.

ABI’s successes came from partnering, not replacing, single-INT analysts in battlefield tactical operations centers (TOCs).

The all-source analysis field is more typically (though not always) focused on higher-order judgments and adversary intentions; it effectively operates at a level of abstraction above both ABI and single-INT exploitation.

This is most evident in approaches to strategic issues dealing with state actors; all-source analysis seeks to provide a comprehensive understanding of current issues enabling intelligent forecasting of future events, while ABI focuses on entity resolution through disambiguation (using identical methodological approaches found on the counterinsurgency/counterterrorism battlefield) relevant to the very same state actors.

5.4 Decomposing an Intelligence Problem for ABI

One of the critical aspects of properly applying ABI is asking the “right” questions. In essence, the challenge is to decompose a high-level intelligence problem into a series of subproblems, often posed as questions, that can potentially be answered using ABI methods.

As ABI focuses on disambiguation of entities, the problem must be decomposed to a level where disambiguation of particular entities helps fill intelligence gaps relating to the near-peer state power. As subproblems are identified, approaches or methods to address the specific subproblems are aligned to each subproblem in turn, creating an overall approach for tackling the larger intelligence problem. In this case, ABI does not become directly applicable to the overall intelligence problem until the subproblem specifically dealing with the pattern of life of a group of entities is extracted from the larger problem.

Another example problem to which ABI would be applicable is identifying unknown entities outside of formal leadership structures who may be key influencers outside of the given hierarchy through analyzing entities present at a location known to be associated with high-level leadership of the near-peer state.

5.5 The W3 Approaches: Locations Connected Through People and People Connected Through Locations
Once an analyst is immersed in a multi-INT spatial data environment, there are two major approaches used in ABI to establish network knowledge and connect entities. These two approaches are summarized below, both dealing with connecting entities and locations. Together they are known as “W3” approaches, combining “who” and “where” to extend analyst knowledge of social and physical networks.

5.5.1 Relating Entities Through Common Locations
This approach focuses on connecting entities based on presence at common locations. Analysis begins with a known entity and then moves to identifying other entities present at the same location.

The process for evaluating strength of relationship based on locational proximity and type of location relies on the concepts of durability and discreteness, which are explored further in Chapter 7. Colloquially, this process is known as “who-where-who,” and it is primarily focused on building out logical networks.

A perfect example of building out logical networks through locations begins with two entities—people, unique individuals—observed at a private residence on multiple occasions. In a spatial data environment, the presence of two entities at the same location at multiple points in time might bear investigation into the various attributes of those entities. The research process initially might show no apparent connection between them, but by continuing to understand various aspects of the entities, the locational connection may be corroborated and “confirmed” via the respective attributes of the entities. This could take many forms, including common social contacts, family members, and employers.

The easiest way to step through “who-where-who” is through a series of four questions. These questions offer an analyst the ability to logically step through a potential relationship via the colocation of individual entities. The first question is: “What is the entity or group of entities of interest?” This is often expressed as a simple “who” in shorthand, but the focus here is on identifying a specific entity or group of entities that are of interest to the analyst. Note that while ABI’s origins are in counterterrorism and thus the search for “hostile entities,” the entities of interest could also be neutral or friendly entities, depending on what kind of organization the analyst is a part of.

In practice, this phase will consist of using known entities of interest and examining locations where the entities have been present. This process can often lead to constructing a full “pattern of life” for one or more specific entities, but it can also be as simple as identifying locations where entities were located on one or more specific occasions.

The second question is: “Where has this entity been observed?” At this point, focus is on the spatial-temporal data environment. The goal here is to establish various locations where the entity was present along with as precise a time as possible.

The third question is: “What other entities have also been observed at these locations?” This is perhaps the most important of the four questions in this process. Here, the goal is to identify entities co-occurring with the entity or entities of interest. The focus is on spatial co-occurrence, ideally over multiple locations. This intuitive point—that more co-occurrences increase the likelihood of a true correlation—is also reflected in the mathematics of linear correlation.
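One standard way to express linear correlation is Pearson’s coefficient. The sketch below is an illustration rather than the book’s own formula; it treats each entity’s presence at a set of candidate locations as a 0/1 vector and correlates the vectors, so more shared co-occurrences push the coefficient toward 1:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson linear correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# 1 = proxy observed at the location, 0 = not observed; five candidate locations.
entity_a = [1, 0, 1, 1, 0]
entity_b = [1, 0, 1, 1, 0]  # co-occurs everywhere entity_a appears
entity_c = [0, 1, 0, 0, 1]  # never co-occurs with entity_a
print(round(pearson_r(entity_a, entity_b), 2))  # → 1.0
print(round(pearson_r(entity_a, entity_c), 2))  # → -1.0
```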

The characteristics of each location considered must be evaluated in order to distinguish “chance co-occurrences” from “demonstrative co-occurrences.” In addition, referring back to the pillar of sequence neutrality, it is vitally important to consider the potential for co-occurrences that are temporally separated. This often occurs when networks of entities change their membership but use consistent locations for their activities, as is the case with many clubs and societies.

The fourth and final question is: “Is locational proximity indicative of some kind of relationship between the initial entity and the discovered entity?”
Here the goal is to take an existing network of entities and identify additional entities that may have been partially known or completely unknown. The overwhelming majority of entities must interact with other entities, particularly to achieve common goals, and this analytic technique helps identify entities that are related based on common locations before explicit metadata- or attribute-based relationships are known.
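The four questions above can be approximated in code. This is a simplified sketch with hypothetical proxy and location names, not an operational implementation; it counts shared locations between a seed entity and every other observed entity:

```python
from collections import defaultdict

# Observations of entities (via proxies) at locations: (entity, location) pairs.
observations = [
    ("person-A", "residence-1"), ("person-B", "residence-1"),
    ("person-A", "cafe-2"),      ("person-B", "cafe-2"),
    ("person-C", "cafe-2"),
    ("person-A", "office-3"),
]

def who_where_who(seed, obs, min_shared=2):
    """Entities co-located with `seed` at at least `min_shared` locations."""
    places = defaultdict(set)  # entity -> set of locations where observed
    for who, where in obs:
        places[who].add(where)
    seed_places = places[seed]
    return {other: len(places[other] & seed_places)
            for other in places
            if other != seed and len(places[other] & seed_places) >= min_shared}

print(who_where_who("person-A", observations))  # → {'person-B': 2}
```

Person C shares only one location with person A and so falls below the threshold; the fourth question (whether the co-location actually indicates a relationship) remains an analytic judgment, not a computation.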

5.5.2 Relating Locations Through Common Entities
This approach is the inverse of the previous approach and focuses on connecting locations based on the presence of common entities. By tracking entities to multiple locations, connections between locations can be revealed.

Where the previous process is focused on building out logical networks where entities are the nodes, this process focuses on building out either logical or physical networks where locations are the nodes. While at first this can seem less relevant to a methodology focused on understanding networks of entities, understanding the physical network of locations helps indirectly reveal information about entities who use physical locations for various means (nefarious and nonnefarious alike).

The first question asked in this process is, “What is the initial location or locations of interest?” This is the most deceptively difficult question to answer, because it involves bounding the initial area of interest.

The next question brings us back to entities: “What entities have been observed at this location?” Whether considering one or more locations, this is where specific entities can be identified or partially known entities can be identified for further research. This is one of the core differences between the two approaches, in that there is no explicit a priori assumption regarding the entities of interest. This question is where pure “entity discovery” occurs, as focusing on locations allows entities not discovered through traditional, relational searches to emerge as potentially relevant players in multiple networks of interest.

The third question is, “Where else have these entities been observed?” This is where a network of related locations is principally constructed. Based on the entities—or networks—discovered in the previous phase of research, the goal is now to associate additional, previously unknown locations based on common entities.

One of the principal uses of this information is to identify locations that share a common group of entities. In limited cases, this approach can be predictive, indicating locations that entities may be associated with even if they have not yet been observed at a given location.

The final question is thus, “Is the presence of common entities indicative of a relationship between locations?”
Discovering correlation between entities and locations is only the first step, as subsequently contextual information must be examined dispassionately to support or refute the hypothesis suggested by entity commonality.
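The inverse process, relating locations through common entities, can be sketched the same way. Again, all names here are hypothetical:

```python
from collections import defaultdict
from itertools import combinations

observations = [
    ("person-A", "safehouse-1"), ("person-B", "safehouse-1"),
    ("person-A", "warehouse-2"), ("person-B", "warehouse-2"),
    ("person-C", "warehouse-2"), ("person-C", "market-3"),
]

def shared_entities(obs):
    """Map each pair of locations to the entities observed at both."""
    visitors = defaultdict(set)  # location -> set of entities seen there
    for who, where in obs:
        visitors[where].add(who)
    links = {}
    for loc_a, loc_b in combinations(sorted(visitors), 2):
        common = visitors[loc_a] & visitors[loc_b]
        if common:
            links[(loc_a, loc_b)] = sorted(common)
    return links

print(shared_entities(observations))
```

Here the safehouse and warehouse are linked by two common entities, while the market is linked to the warehouse only through person C; the final question of whether those common entities indicate a real relationship between the locations is again left to the analyst.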

At this point, the assessment aspect of both methods must be discussed. By separating what is “known” to be true versus what is “believed” to be true, analysts can attempt to provide maximum value to intelligence customers.

5.6 Assessments: What Is Known Versus What Is Believed

At the end of both methods is an assessment question: Has the process of moving from vast amounts of data to specific data about entities and locations provided correlations that demonstrate actual relationships between entities and/or locations?

Correlation versus causation can quickly become a problem in the assessment phase, as can the role of chance in spatial or temporal correlation of data. The assessment phase of each method is designed to help analysts separate random chance from relevant relationships in the data.

ABI adapts new terminology for a classic problem of intelligence assessments: separating fact from belief.

Particularly with assessments that rest on correlations present across several degrees of data, the potential for alternative explanations must always be considered. While the concepts themselves are common across intelligence methodologies, these are of paramount importance in properly understanding and assessing the “information” created through assessment of correlated data.

Abduction, perhaps the least known in popular culture, represents the most relevant form of inferential reasoning for the ABI analyst. It is also the form of reasoning most commonly employed by Sir Arthur Conan Doyle’s Sherlock Holmes, despite references to Holmes as the master of deduction. Abduction can be thought of as “inference to the best explanation,” where rather than a conclusion guaranteed by the premises, the conclusion is expressed as a “best guess” based on background knowledge and specific observations.

5.7 Facts: What Is Known

Allowance must be made for uncertainty even in the identification of facts; even narrowly scoped, facts can turn out to be untrue for a variety of reasons. Despite this tension, distinguishing between facts and assessments is a useful mental exercise. It also serves to introduce the concept of a key assumption check (KAC) into ABI, as what ABI terms “facts” overlaps somewhat with what other intelligence methodologies term “assumptions.”
Another useful way to conceptualize facts is “information as reported from the primary source.”

5.8 Assessments: What Is Believed or “Thought”

Assessment is where the specific becomes general. Assessment is one of the key functions performed by intelligence analysts, and it is one of very few common attributes across functions, echelons, and types of analysts. It is also not, strictly speaking, the sole province of ABI.

ABI identifies correlated data based on spatial and temporal co-occurrence, but it does not explicitly seek to assign meaning to the correlation or place it in a larger context.

There are times, however, when the method cannot even reach the assessment level due to “getting stuck” during research of spatial and temporal correlations. This is where the concept of “unfinished threads” becomes vitally important.

5.9 Gaps: What Is Unknown
The last piece of the assessment puzzle is “gaps.” This is in many ways the inverse of “facts” and can inform assessments as much as “facts” can. Gaps, like facts, must be stated as narrowly and explicitly as possible in order to identify areas for further research or where the collection of additional data is required.

Gap identification is a crucial skill for most analytic methods because of natural human tendencies to either ignore contradictory information or construct narratives that explain incomplete or missing information.

5.10 Unfinished Threads

Every time a prior observation or question initiates one or both of the principal methods discussed earlier in this chapter, an investigation begins.

True to its place in “discovery intelligence,” ABI not only makes allowances for the existence of these unfinished threads but also explicitly generates techniques to address these threads and uses them for the maximum benefit of the analytical process.

Unfinished threads are important for several reasons. First, they represent the institutionalization of the discovery process within ABI. Rather than force a process by which a finished product must be generated, ABI allows for the analyst to pause and even walk away from a particular line of inquiry for any number of reasons. Second, unfinished threads can at times lead an analyst into parallel or even completely unrelated threads that are as important, or even more important, than the initial thread. This process, called “thread hopping,” is one expression of a nonlinear workflow inside of ABI.

One of the most challenging problems presented by unfinished threads is preserving threads for later investigation. Methods for doing so are both technical (computer software designed to preserve these threads, discussed further in Chapter 15) and nontechnical, such as scrap paper, whiteboards, and pen-and-paper notebooks. This is particularly important when new information arrives, especially when the investigating analyst did not specifically request the new information.

By maintaining a discovery mindset and continuing to explore threads from various different sources of information, the full power of ABI—combined with the art and intuition present in the best analysts— can be realized.

5.11 Leaving Room for Art and Intuition
One of the hardest challenges for structured approaches to intelligence analysis is carving out a place for human intuition and, indeed, a bit of artistry. The difficulty of describing intuition, and the near impossibility of teaching it, make it tempting to omit it from any discussion of analytic methods in an effort to focus on that which is teachable. To do so, however, would be both unrealistic and a disservice to the critical role that intuition—properly understood and subject to appropriate scrutiny—can play in the analytic process.

Interpretation is an innate, universal, and quintessentially intuitive human faculty. It is field-specific, in the sense that one’s being good at interpreting, say, faces or pictures or modern poetry does not guarantee success at interpreting contracts or statutes. It is not a rule-bound activity, and the reason a judge is likely to be a better interpreter of a statute than of a poem, and a literary critic a better interpreter of a poem than a statute, is that experience creates a repository of buried knowledge on which intuition can draw when one is faced with a new interpretandum. – Judge Richard Posner

At all times, however, these intuitions must be subject to rigorous scrutiny and cross-checking, to ensure their validity is supported by evidence and that alternative or “chance” explanations cannot also account for the spatial or temporal connections in data.
Fundamentally, there is a role for structured thinking about problems, application of documented techniques, and artistry and intuition when examining correlations in spatial and temporal data. Practice in these techniques and practical application that builds experience are equally valuable in developing the skills of an ABI practitioner.

6

Disambiguation and Entity Resolution

Entity resolution or disambiguation through multi-INT correlation is a primary function of ABI. Entities and their activities, however, are rarely directly observable across multiple phenomenologies. Thus, we need an approach that considers proxies —indirect representations of entities—which are often directly observable through various means.

6.1 A World of Proxies

As entities are a central focus of ABI, all of an entity’s attributes are potentially relevant to the analytical process. That said, a subset of attributes called proxies is the focus of the analysis described in Chapter 5. A proxy “is an observable identifier used as a substitute for an entity, limited by its durability (i.e., influenced by the entity’s ability to change/alter proxies)”.

Focusing on any particular, “average” entity results in a manageable number of proxies. However, beginning with a given entity is fundamentally a problem of “knowns.” How can an analyst identify an “unknown entity”?

Now the problem becomes more acute. Without using a given entity to filter potential proxies, all proxies must be considered; this number is likely very large and for the purposes of this chapter is n. The challenge that ABI’s spatio-temporal methodology confronts is going from n, or all proxies, to a subset of n that relates to an individual or group of individuals. In some cases, n can be as limited as a single proxy. The process of moving from n to the subset of n is called disambiguation.
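Moving from n proxies to a subset of n via a spatiotemporal filter might look like the following sketch. The detection records, radius, and time window are illustrative assumptions:

```python
from datetime import datetime, timedelta

# All observed proxy detections (the full set of size n):
# (proxy identifier, detection time, (lat, lon)).
detections = [
    ("phone-123",  datetime(2021, 6, 1, 10, 0), (33.51, 36.29)),
    ("vehicle-77", datetime(2021, 6, 1, 10, 2), (33.51, 36.29)),
    ("phone-456",  datetime(2021, 6, 1, 22, 0), (40.71, -74.01)),
]

def candidate_proxies(all_detections, center, when,
                      radius_deg=0.01, window=timedelta(minutes=15)):
    """Shrink n proxies to the subset detected near a place and time of interest."""
    lat0, lon0 = center
    return sorted({proxy for proxy, t, (lat, lon) in all_detections
                   if abs(lat - lat0) <= radius_deg
                   and abs(lon - lon0) <= radius_deg
                   and abs(t - when) <= window})

subset = candidate_proxies(detections, (33.51, 36.29), datetime(2021, 6, 1, 10, 0))
print(subset)  # → ['phone-123', 'vehicle-77']
```

The subset that survives the filter is only a candidate set; deciding which surviving proxies belong to the same entity is the disambiguation step discussed next.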

6.2 Disambiguation

Disambiguation is not a concept unique to ABI. Indeed, it is something most people do every day in a variety of settings, for a variety of different reasons. A simple example of disambiguation is using facial features to disambiguate between two different people. This basic staple of human interaction is so important that an inability to do so is a named disorder—prosopagnosia.

Disambiguation is conceptually simple; in practice, however, it is severely complicated by incomplete, erroneous, misleading, or insufficiently specific data.

Without discounting the utility of more “general” proxies like appearance and clothing and vehicle types, it is the “unique” identifiers that offer the most probative value in the process of disambiguation and that, ultimately, are most useful in achieving the ultimate goal: entity resolution.

6.3 Unique Identifiers—“Better” Proxies

To understand fully why unique identifiers are of such importance to the analytical process in ABI, a concept extending the data ontology of “events” and “transactions” from Chapter 4 must be introduced. This concept is called certainty of identity.

This concept has a direct analog in the computing world—the universal unique identifier (UUID) or globally unique identifier (GUID) [3, 4]. In distributed computing environments—linking together disparate databases— UUIDs or GUIDs are the mechanism to disambiguate objects in the computing environments [4]. This is done against the backdrop of massive data stores from various different sources in the computing and database world.
In ABI, the same concept is applied to the “world’s” spatiotemporal data store: Space and time provide the functions to associate unique identifiers (proxies) with each other and with entities. The proxies can then be used to identify the same entity across multiple data sources, allowing for a highly accurate understanding of an entity’s movement and therefore behavior.
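Python’s standard uuid module illustrates the deterministic-identifier idea behind UUIDs: the same input always yields the same identifier, so independent systems can agree on an object’s identity. The namespace and proxy strings below are illustrative assumptions:

```python
import uuid

# uuid5 derives a deterministic UUID from a namespace plus a name, so the same
# proxy string always maps to the same identifier across disparate systems.
ns = uuid.uuid5(uuid.NAMESPACE_DNS, "example.org")  # illustrative namespace
a = uuid.uuid5(ns, "proxy:phone-123")
b = uuid.uuid5(ns, "proxy:phone-123")  # same name, same UUID
c = uuid.uuid5(ns, "proxy:phone-456")  # different name, different UUID
print(a == b, a == c)  # → True False
```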

6.4 Resolving the Entity

Because the core of ABI’s analytic methodology revolves around discovering entities through spatial and temporal correlations in large data sets across multiple INTs, entity resolution principally through spatial and temporal attributes is the defining attribute of ABI’s analytical methodology and represents its enduring contribution to the overall discipline of intelligence analysis.

Entity resolution is “the iterative and additive process of uniquely identifying and characterizing an [entity], known or unknown, through the process of correlating event/transaction data generated by proxies to the [entity]”.

Entity resolution itself is not unique to ABI. Data mining and database efforts in computer science focus intense amounts of effort on entity resolution. These efforts are known by a number of different terms (e.g., record linkage, de-duplication, and co-reference resolution), but all focus on “the problem of extracting, matching, and resolving entity mentions in structured and unstructured data”.

In ABI, “entity mentions” are contained within activity data. This encompasses both events and transactions, as both can involve a specific detection of a proxy. As shown in Figure 6.4, transactions always involve proxies at their endpoints, or “vertices.” Events also provide proxies, but these can range from the general (for example, a georeferenced report stating that a particular house is an entity’s residence) to the highly specific (a time-stamped detection of a radio-frequency identification tag at a given location).
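The event/transaction distinction can be captured in a small data structure. The following sketch (field names are illustrative, not a standard schema) shows proxies as the vertices of a transaction and as the payload of an event:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Event:
    proxy: str          # a single proxy detection...
    lat: float
    lon: float
    time: datetime      # ...georeferenced and time-stamped

@dataclass
class Transaction:
    source_proxy: str   # proxies always sit at the two
    dest_proxy: str     # endpoints ("vertices") of a transaction
    time: datetime

ev = Event("rfid-1138", 38.9, -77.0, datetime(2024, 3, 1, 12, 0))
tx = Transaction("phone-A", "phone-B", datetime(2024, 3, 1, 12, 5))
print(ev.proxy, tx.source_proxy, tx.dest_proxy)
```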

6.5 Two Basic Types of Entity Resolution

Ultimately, the process of entity resolution can be broken into two categories: proxy-to-entity resolution and proxy-to-proxy resolution. Both types have specific use cases in ABI and can provide valuable information pertinent to an entity of interest, ultimately helping answer intelligence questions.

6.5.1 Proxy-to-Proxy Resolution

Proxy-to-proxy resolution through spatiotemporal correlation is not just an important aspect of ABI; it is one of its defining concepts. But why? If entity resolution is ultimately the goal of ABI, how does resolving one proxy to another proxy advance understanding of an entity and relate it to its relevant proxies?

The answer is found at the beginning of this chapter: entities cannot be directly observed. Therefore, any kind of resolution must by definition be relating one proxy to another proxy, through space and time and across multiple domains of information.

What the various sizes of circles introduce is the concept of circular error probable (CEP) (Figure 6.5). CEP was originally introduced as a measure of accuracy in ballistics, representing the radius of the circle within which 50% of “rounds” or “warheads” were expected to fall. A smaller CEP indicated a more accurate weapon system. The concept has since been expanded to represent the accuracy of geolocation of any item (not just a shell or round from a weapons system), particularly with the proliferation of GPS-based locations [9]. Even systems such as GPS, which are designed to provide accurate geolocations, have some degree of error.
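Under the common simplification that CEP is the median radial miss distance, it can be computed directly from a set of geolocation fixes. A short sketch with made-up points:

```python
import math
import statistics

def cep(points, truth):
    """Circular error probable: the radius of the circle containing 50%
    of the fixes around the true location (here, the median radial miss
    distance)."""
    radii = sorted(math.dist(p, truth) for p in points)
    return statistics.median(radii)

# Hypothetical geolocation fixes scattered around a true position at the origin.
fixes = [(0.0, 3.0), (4.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
print(cep(fixes, (0.0, 0.0)))  # -> 2.0 (half the fixes fall within r = 2.0)
```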

This simple example illustrates the power of multiple observations over space and time for proper disambiguation and resolving proxies from one data source to proxies from another data source. This was a simplistic thought experiment. The bounds were clearly defined, and there was a 1:1 ratio of Vehicles:Unique Identifiers, both of which were of a known quantity (four each). Real-world conditions and problems will rarely present such clean results for an analyst or for a computer algorithm. The methods and techniques for entity disambiguation over space and time have been extensively researched over the past 30 years by the multisensor data fusion community.
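The thought experiment’s logic, matching four vehicles to four unique identifiers through repeated co-occurrence, can be sketched as a counting exercise. The collection windows below are invented for illustration:

```python
from collections import Counter

# Each "collection window" lists which vehicles and which unique
# identifiers were observed together (hypothetical data, 1:1 ratio).
windows = [
    ({"V1", "V2"}, {"ID-a", "ID-b"}),
    ({"V1", "V3"}, {"ID-a", "ID-c"}),
    ({"V2", "V4"}, {"ID-b", "ID-d"}),
    ({"V3", "V4"}, {"ID-c", "ID-d"}),
]

# Count vehicle/identifier co-occurrences across windows.
counts = Counter()
seen = Counter()
for vehicles, ids in windows:
    for v in vehicles:
        seen[v] += 1
        for i in ids:
            counts[(v, i)] += 1

# The identifier present in *every* window containing a vehicle
# is the one that resolves to it.
resolved = {
    v: next(i for (vv, i), c in counts.items() if vv == v and c == seen[v])
    for v in {v for vs, _ in windows for v in vs}
}
print(resolved)  # each vehicle pairs with exactly one identifier
```

With noisy, real-world data the counts would be ambiguous, which is precisely why the multisensor data fusion literature treats this as a hard assignment problem rather than simple bookkeeping.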

6.5.2 Proxy-to-Entity Resolution: Indexing

While proxy-to-proxy resolution is at the heart of ABI, the importance of proxy-to-entity resolution, or indexing, cannot be overstated. Indexing is a broad term for various processes, most outside the strict bounds of ABI, that help link proxies to entities through a variety of technical and nontechnical means. Indexing takes place based on values within single information sources (rather than across them) and is often done in the course of focused exploitation of a single source or type of data.

Indexing is essentially an automated way of helping analysts build up an understanding of an entity’s attributes. In intelligence, indexing often centers on a particular collection mechanism or phenomenology; the same is true in law enforcement and public records, where vehicle registration databases, RFID toll road passes, and other useful information are binned according to data class and searchable using relational keys. While not directly a part of the ABI analytic process, access to these databases gives analysts an important advantage in determining potential entities to resolve to proxies in the environment.

6.6 Iterative Resolution and Limitations on Entity Resolution

Even the best proxies, however, have limitations. This is why ABI refers to the relevant items as proxies instead of signatures. A signature is something characteristic, indicative of identity; most importantly, signatures have inherent meaning, typically detectable in a single phenomenology or domain of information. Proxies lack that inherent meaning, though in everyday use the two terms are often conflated.

These challenges necessitate the key concept of iterative resolution in ABI; in essence, practitioners must continually re-examine proxies to determine whether they remain valid for entities of interest. Revisiting Figure 6.2, it is intuitively clear that certain proxies are easy to change while others are far more difficult. When deliberate operations security (OPSEC) practices are introduced by terrorists, insurgents, intelligence officers, and other entities trained in countersurveillance and counterintelligence, it can be even more challenging to evaluate the validity of a given proxy for an individual at a given point in time.

These limits on connecting proxies to entities describe perhaps the most prominent challenges
to disambiguation and entity resolution amongst very similar proxies: the concept of discreteness, relative to physical location, and durability, relative to proxies. Together these capture the limitations of the modern world that are passed through to the analytical process underpinning ABI.

7

Discreteness and Durability in the Analytical Process

The two most important factors in ABI analysis are the discreteness of locations and the durability of proxies; for shorthand, the two are often referred to simply as discreteness and durability. Discreteness of locations deals with the properties of physical locations, focusing on how particular locations are used by the entities, and groups of entities, that can be expected to interact with them, taking into account factors like climate, time of day, and cultural norms. Durability of proxies addresses an entity’s ability to change or alter given proxies and, therefore, the analyst’s need to periodically revalidate or reconfirm the validity of a given proxy for an entity of interest.

7.1 Real World Limits of Disambiguation and Entity Resolution

Discreteness and durability are designed as umbrella terms: They help express the real-world limits of an analyst’s ability to disambiguate unique identifiers through space and time and ultimately, match proxies to entities and thereby perform entity resolution. They also present the two greatest challenges to attempts to automate the core precepts of ABI: Because the concepts are “fuzzy,” and there are no agreed-upon standards or scales used to express discreteness and durability, automating the resolution process remains a monumental challenge. This section illustrates general concepts with an eye toward use by human analysts.

7.2 Applying Discreteness to Space-Time

Understanding the application of discreteness (of location) to space-time begins with revisiting the concept of disambiguation.

Disambiguation is one of the most important processes for both algorithms and human beings, and one of the major challenges involves assigning confidence values (either qualitative or quantitative) to the results of disambiguation, particularly with respect to the character of given locations, geographic regions, or even particular structures.

But why does the character of a location matter? The answer is simple, even intuitive: Not all people, or entities, can access all locations, regions, or buildings. Thus, when discussing the discreteness value of a given location, whether it is measured qualitatively or quantitatively, the measure is always relative to an entity or group/network of entities.

Considering that the process of disambiguation begins with the full set of “all entities,” the ability to narrow the pool of entities that could have generated the observable proxies in a given location, based on which entities would have natural access to that location, can be an extraordinarily powerful tool in the analysis process.

ABI’s analytic process uses a simple spectrum to describe the general nature of given locations. This spectrum provides a starting point for more complex analyses, but the lack of a detailed quantitative framework for describing the complexity of locations remains a significant gap. This is an open area for research and one of ABI’s true hard problems.

7.3 A Spectrum for Describing Locational Discreteness

In keeping with ABI’s development as a grassroots effort among intelligence analysts confronted with novel problems, a basic spectrum is used to divide locations into three categories of discreteness:

• Non-discrete

• Discrete

• Semi-discrete

The categories of discreteness are temporally sensitive, representing the dynamic and changing use of locations, facilities, and buildings on a daily, sometimes even hourly, basis. Culture, norms, and local customs all factor into the analytical “discreteness value” that aids ABI practitioners in evaluating the diagnosticity of a potential proxy-entity pair.

Evidence is diagnostic when it influences an analyst’s judgment on the relative likelihood of the various hypotheses. If an item of evidence seems consistent with all hypotheses, it may have no diagnostic value at all. It is a common experience to discover that the most available evidence really is not very helpful, as it can be reconciled with all the hypotheses.

This concept can be directly applied to disambiguation among proxies and resolving proxies to entities. Two critical questions are used to evaluate locational discreteness—the diagnosticity—of a given proxy observation. The first question is, “How many other proxies are present in this location and therefore may be resolved to entities through spatial co-occurrence?” This addresses the disambiguation function of ABI’s methodology. The second question is, “What is the likelihood that the presence of a given proxy at this location represents a portion of unique entity behavior?”

Despite these difficulties, multiple proxy observations over space and time (even at nondiscrete locations) can be chained together to produce the same kind of entity resolution [1]. An analyst would likely need additional observations at nondiscrete locations to provide increased confidence in an entity’s relationship to a location or to resolve an unresolved proxy to a given entity.
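One common way to formalize “chaining” observations is a repeated Bayesian update, in which each co-occurrence raises confidence that a proxy resolves to a given entity. The prior and the chance-co-occurrence rate below are assumptions chosen purely for illustration:

```python
def bayes_update(prior, p_chance):
    """Posterior that a proxy belongs to the entity after one more
    co-occurrence: the entity's own proxy always co-occurs, while an
    unrelated proxy co-occurs only by chance (p_chance)."""
    return prior / (prior + (1 - prior) * p_chance)

# Hypothetical: four equally likely candidates at a nondiscrete location
# (prior = 0.25); an unrelated proxy co-occurs by chance 10% of the time.
confidence = 0.25
for _ in range(3):  # three independent co-occurrence observations
    confidence = bayes_update(confidence, p_chance=0.10)
print(round(confidence, 3))  # -> 0.997
```

Even starting from a weak prior at a nondiscrete location, a handful of independent co-occurrences drives confidence close to certainty, which is the intuition behind requiring additional observations there.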

A discrete location is a location that is unique to an entity or network of entities at a given time. Observations of proxies at discrete locations, therefore, are far more diagnostic in nature because they are restricted to a highly limited entity network. The paramount example of a discrete location is a private residence.

Revisiting the two principal questions from above, the following characteristics emerge regarding a discrete location:

• Proxies present at a private residence can be associated with a small network of entities, the majority of whom are connected through direct relationships to the entity or entities residing at the location;

• Entities present at this location can presumptively be associated with the group of entities for whom the location is a principal residence.

As discussed earlier, discrete locations can be far from perfect.

7.4 Discreteness and Temporal Sensitivity

Temporal sensitivity with respect to discreteness describes how the use of locations by entities (and therefore the associated discreteness values) changes over time; a change in a location’s function effects a change in its discreteness. While this may seem quite abstract, it is a concept most people are comfortable with from an early age.

When viewed at the macro level, the daily and subdaily variance in activity levels across multiple entities is referred to as a pattern of activity.

7.5 Durability of Proxy-Entity Associations

The durability of proxies is the other major factor contributing to the difficulty of disambiguation and entity resolution.

Though many proxies can be (and often are) associated with a single entity, these associations range from nearly permanent to extraordinarily fleeting. The concept of durability represents the range of potential durations of a proxy-entity association.

Answering “who-where-who” and “where-who-where” workflow questions becomes exponentially more difficult when varying degrees of uncertainty in spatial-temporal correlation are introduced by the two major factors discussed in this chapter. Accordingly, structured approaches for analysts to consider the effects of discreteness and durability are highly recommended, particularly as supporting material to overall analytical judgments.

One consistent recommendation across all types of intelligence analysis is that assumptions made in the analytical process be made explicit, so that intelligence consumers can understand what is being assumed, what is being assessed, and how assessments might change based on changes in the underlying assumptions presented by an analyst [2, pp. 9, 16]. One recommended technique is using a matrix during the analytic process to make discreteness and durability factors explicit, in an effort to incorporate them into overall judgments and conclusions. A matrix can also provide key values that can later be used to develop quantitative expressions of uncertainty, though such expressions are fundamentally meaningless unless the underlying quantifications are clearly expressed (in essence, creating a “values index” so that the overall quantified value can be properly contextualized).
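Such a matrix need not be elaborate. A minimal sketch of the idea in code, with hypothetical proxies and qualitative values:

```python
# A minimal assumption-tracking matrix: each proxy observation records
# qualitative discreteness and durability values alongside the judgment
# they support, so consumers can see what was assumed.
matrix = [
    {"proxy": "phone-A", "location": "residence", "discreteness": "discrete",
     "durability": "weeks", "judgment": "phone-A resolves to Entity 1"},
    {"proxy": "vehicle-7", "location": "market", "discreteness": "non-discrete",
     "durability": "months", "judgment": "association unconfirmed"},
]

def low_confidence(rows):
    """Flag judgments resting on non-discrete locations for re-validation."""
    return [r["judgment"] for r in rows if r["discreteness"] == "non-discrete"]

print(low_confidence(matrix))
```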

Above all, analysts must be continually encouraged by their leadership and by intelligence consumers to express uncertainty clearly and to “show their work.” Revealing flaws and weaknesses in a logical assessment is unfortunately often perceived as weakness, a tendency reinforced by consumers who attack probabilistic assessments and press for stronger, “less ambiguous” analytic results. The limitations of all analytic methodologies must be expressed, but in ABI this becomes a particularly important point.

8

Patterns of Life and Activity Patterns

8.1 Entities and Patterns of Life

“Pattern(s) of life,” like many concepts in and around ABI, suffers from an inadequate written literature and varying applications depending on the speaker or writer’s context.

These concepts are familiar to law enforcement officers, who through direct surveillance techniques have constructed patterns of life on suspects for many years. One of the challenges in ABI explored in Section 8.2 is the use of indirect, sparse observations to construct entity patterns of life.

With discreteness, the varying uses of geographic locations over days, weeks, months, and even years were examined as part of the analytical process for ABI. Patterns of life are a related concept: A pattern of life is defined as the specific set of behaviors and movements associated with a particular entity over a given period of time. In simple terms, it is what people do every day.

At times, the term “pattern of life” has been used to describe behaviors associated with a specific object (for instance, a ship) as well as the behaviors and activity observed in a particular location or region. An example would be criminal investigators staking out a suspect’s residence: They would learn the comings and goings of many different entities and see various activities taking place at the residence. In essence, they are observing small portions of the individual patterns of life of many different entities, but the totality of this activity is sometimes also described in the same way.

One truth about patterns of life is that they cannot be observed or fully understood through periodic observations.

In sum, four important principles emerge regarding the formerly nebulous concept of “pattern of life”:

1. A pattern of life is specific to an individual entity;
2. Longer observations provide better insight into an entity’s overall pattern of life;
3. Even the lengthiest surveillance cannot observe the totality of an individual’s pattern of life;
4. Traditional means of information gathering and intelligence collection reveal only a snapshot of an entity’s pattern of life.

While it can be tempting to generalize or assume on the basis of what is observed, it is important to account for the possibilities during times in which an entity goes unobserved by technical or human collection mechanisms. In the context of law enforcement, the manpower cost of around-the-clock surveillance quickly emerges, and the need for officers to be reassigned to other tasks and investigate other crimes can quickly take precedence over the need to maintain surveillance on a given entity. Naturally, the advantage of technical collection versus human collection in terms of temporal persistence is evident.

Small pieces of a puzzle, however, are still useful. So too are different ways of measuring the day-to-day activities conducted by specific entities of interest (e.g., Internet usage, driving habits, and phone calls). Commercial marketers have long since taken advantage of this kind of data in order to more precisely target advertisements and directed sales pitches. However, these sub-aspects of an entity’s pattern of life are important in their own right and are the building blocks from which an overall pattern of life can be constructed.

8.2 Pattern-of-Life Elements

Pattern-of-life elements are the “building blocks” of a pattern of life. These elements can be measured in one or many different dimensions, each providing unique insight about entity behavior and ultimately contributing to a more complex overall picture of an entity. These elements can be broken down into two major categories:

• Partial observations, where the entity is observed for a fixed duration of time;

• Single-dimension measurements, where a particular aspect of behavior or activity is measured over time in order to provide insight into that specific dimension of entity behavior or activity.

The limitations of the sensor platform (resolution, spectrum, field of view) all play a role in the operator’s ability to assess whether the individual who emerged later was the same individual entity who entered the room, but even a high-confidence assessment is still an assessment, and there remains a nonzero chance that the entity of interest did not emerge from the room at all.

8.3 The Importance of Activity Patterns

Understanding the concept and implications of data aggregation is important in assessing both the utility and limitations of activity patterns. The first and most important rule of data aggregation is that aggregated data represents a summary of the original data set. Regardless of aggregation technique, no summary of data can (by definition) be as precise or accurate as the original set of data. Therefore, activity patterns constructed from data sets containing multiple entities will not be effective tools for disambiguation.

Effective disambiguation requires precise data, and summarized activity patterns cannot provide this.
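The information loss under aggregation is easy to demonstrate: once observations are summarized into counts, entity identity is unrecoverable. A toy illustration:

```python
from datetime import datetime

# Raw observations retain which proxy moved where and when.
raw = [
    ("phone-A", "cafe", datetime(2024, 5, 1, 8, 15)),
    ("phone-B", "cafe", datetime(2024, 5, 1, 8, 40)),
    ("phone-A", "office", datetime(2024, 5, 1, 9, 5)),
]

# Aggregation summarizes: counts per location, identity discarded.
activity_pattern = {}
for proxy, location, t in raw:
    activity_pattern[location] = activity_pattern.get(location, 0) + 1

print(activity_pattern)  # -> {'cafe': 2, 'office': 1}
# The summary can say the cafe is busy in the morning, but it can no
# longer say *which* proxy was there -- disambiguation is impossible.
```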

If activity patterns—large sets of data containing summarized activity or movement from multiple entities—are not useful for disambiguation, why mention them at all in the context of ABI? There are two primary reasons.

One is that on a fairly frequent basis, activity patterns are mistakenly characterized as patterns of life without properly distinguishing the boundary between specific behavior of an individual and the general behaviors of a group of individuals [4, 5].

The second reason is that despite this confusion, activity patterns can play an important role in the analytical process: They provide an understanding of the larger context in which a specific activity occurs.

8.4 Normalcy and Intelligence

“Normal” and “abnormal” are descriptors that appear often in discussions of ABI. Examined more closely, however, these descriptors are usually being applied to activity pattern analysis, an approach distinct from ABI. The underlying logic works as follows:

• Understand and “baseline” what is normal;

• Alert when a change is made (thus, when “abnormal” occurs).
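This baseline-and-alert logic is commonly implemented as a simple deviation test. A sketch using a z-score threshold (the counts and threshold here are arbitrary illustrations, not a recommended detector):

```python
import statistics

def baseline_alert(history, observed, z_threshold=2.0):
    """Alert when an observed activity count deviates from the
    'normal' baseline by more than z_threshold standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    z = (observed - mean) / stdev
    return abs(z) > z_threshold

# Hypothetical daily vehicle counts at a facility.
normal_days = [10, 12, 11, 9, 10, 11, 12, 10]
print(baseline_alert(normal_days, 11))  # within baseline: no alert
print(baseline_alert(normal_days, 30))  # abnormal spike: alert
```

Note that this answers only “is activity different from the baseline?”; it says nothing about which entity produced the change, which is exactly the boundary between activity pattern analysis and ABI.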

Cynthia Grabo, a former senior analyst at the Defense Intelligence Agency, defines warning intelligence as dealing with:
“(a) direct action by hostile states against the U.S. or its allies, involving the commitment of their regular or irregular armed forces;
(b) other developments, particularly conflicts, affecting U.S. security interests in which such hostile states are or might become involved;
(c) significant military action between other nations not allied with the U.S.; and
(d) the threat of terrorist action” [6].

Thus, warning is primarily concerned with what may happen in the future.

8.5 Representing Patterns of Life While Resolving Entities

Until this point, disambiguation/entity resolution and patterns of life have been discussed as separate concepts. In reality, however, the two processes often occur simultaneously. As analysts disambiguate proxies and ultimately resolve them to entities, pieces of an entity’s pattern of life are assembled. Once a proxy of interest is identified— even before entity resolution fully occurs—the process of monitoring a proxy creates observations: pattern-of-life elements.
8.5.1 Graph Representation

One of the most useful ways to preserve nonhierarchical information is in graph form. Rather than focus on specific technology, this section will describe briefly the concept of a graph representation and discuss benefits and drawbacks to the approach. Graphs have a number of advantages, but the single most relevant advantage is the ability to combine and represent relationships between data points from different sources.
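A graph representation can be as simple as a node set plus source-attributed edges. The following sketch (node names and source labels are hypothetical) shows how relationships from different sources combine around a single node:

```python
# A minimal multi-source graph: nodes are proxies/entities, and each
# edge carries the relationship and the source that asserted it.
graph = {"nodes": set(), "edges": []}

def link(a, b, relation, source):
    graph["nodes"].update([a, b])
    graph["edges"].append((a, b, relation, source))

link("Entity-1", "phone-A", "uses", source="SIGINT report")
link("Entity-1", "vehicle-7", "drives", source="imagery")
link("phone-A", "phone-B", "called", source="call records")

# Any node can be pivoted on to pull relationships from every source.
neighbors = [(b, r, s) for a, b, r, s in graph["edges"] if a == "Entity-1"]
print(neighbors)
```

A production system would use a graph database or a library rather than a dict, but the advantage is the same: edges from different sources coexist and can be traversed together.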

8.5.2 Quantitative and Temporal Representation

With quantitative and temporal data, alternate views may be more appropriate. Here, traditional views of representing periodicity and aggregated activity patterns are ideal; this allows appropriate generalization across various time scales. Example uses of quantitative representation for single-dimensional measurements (a pattern-of-life element) include the representation of periodic activity.

Figure 8.5 is an example of how analysts can discern potential correlations between activity levels and day of the week and make recommendations accordingly. This view of the data would be considered a single-dimension measurement, and thus a pattern-of-life element.
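A day-of-week binning like that in Figure 8.5 reduces to grouping time-stamped detections by weekday. A minimal sketch with fabricated dates:

```python
from collections import Counter
from datetime import date

# Hypothetical time-stamped detections of a single proxy.
detections = [date(2024, 6, 3), date(2024, 6, 4), date(2024, 6, 10),
              date(2024, 6, 11), date(2024, 6, 17), date(2024, 6, 22)]

# Bin by weekday: a single-dimension measurement (a pattern-of-life
# element) that reveals any day-of-week periodicity.
by_weekday = Counter(d.strftime("%A") for d in detections)
print(by_weekday.most_common())  # Mondays dominate in this invented sample
```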

8.6 Enabling Action Through Patterns of Life

One important element missing from most discussions of pattern of life is “Why construct patterns of life at all?” Having an entity’s pattern of life, whether friendly, neutral, or hostile, is simply a means to an end, like all intelligence methodologies. The goal is not only to provide decision advantage at a strategic level but operational advantage at the tactical level.

Understanding events, transactions, and activity patterns also allows analysis to drive collection and identifies areas of significance where further collection operations can help reveal more information about previously hidden networks of entities. Patterns of life and pattern-of-life elements are just one representation of knowledge gained through the analytic process, ultimately contributing to overall decision advantage.

9

Incidental Collection

This chapter explores the concept of incidental collection by contrasting the old problem space with the new: from Cold War–era orders of battle to the dynamic targets and human networks of the 21st century’s physical and virtual battlefields.

9.1 A Legacy of Targets

The modern intelligence system—in particular, technical intelligence collection capabilities—was constructed around a single adversary, the Soviet Union.

9.2 Bonus Collection from Known Targets

Incidental collection is a relatively new term, but it is not the first expression of the underlying concept. In imagery parlance, “bonus” collection has always been present, from the very first days of “standing target decks.” A simple example of this starts with a military garrison. The garrison might have several buildings for various purposes, including repair depots, vehicle storage, and barracks. In many cases, it might be located in the vicinity of a major population center, but with some separation depending on doctrine, geography, and other factors.

A satellite might periodically image this garrison, looking for vehicle movement, exercise starts, and other potentially significant activity. The garrison, however, only has an area of 5 km2, whereas the imaging satellite produces images that span almost 50 km by 10 km. The result, as shown in Figure 9.1, is that other locations outside of the garrison—the “primary target”—are captured on the image. This additional image area could include other structures, military targets, or locations of potential interest, all of which constitute “bonus” collection.
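The arithmetic of the garrison example makes the scale of bonus collection concrete:

```python
# Footprint arithmetic from the garrison example: the sensor images
# far more ground than the tasked target occupies.
image_area_km2 = 50 * 10      # sensor footprint: ~50 km x 10 km
target_area_km2 = 5           # the tasked garrison

bonus_area = image_area_km2 - target_area_km2
bonus_fraction = bonus_area / image_area_km2
print(f"{bonus_fraction:.0%} of each image is 'bonus' collection")  # -> 99%
```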
Incidental collection, rather than identifying a specific intelligence question as the requirement, focuses on acquiring large amounts of data over a relevant spatial region or technical data type and sets the volume of data obtained as a key metric of success. This helps address the looming problem of unknowns buried deep in activity data by maximizing the potential for spatiotemporal data correlations. Ultimately, this philosophy maximizes opportunities for proxy-entity pairing and entity resolution.

The Congressional Research Service concluded in 2013, “While the intelligence community is not entirely without its legacy ‘stovepipes,’ the challenge more than a decade after 9/11 is largely one of information overload, not information sharing. Analysts now face the task of connecting disparate, minute data points buried within large volumes of intelligence traffic shared between different intelligence agencies.”

9.4 Dumpster Diving and Spatial Archive and Retrieval

In intelligence, collection is focused almost exclusively on prioritizing and obtaining, through technical means, the data that should be available “next.” In other words, the focus is on what the satellite will collect tomorrow rather than on what it has already collected, whether 10 years ago or 10 minutes ago. But vast troves of data have already been collected, much of which is quickly triaged and then discarded as lacking intelligence value. ABI’s pillar of sequence neutrality emphasizes the importance of spatial correlations across breaks in time, so maintaining and maximizing utility from data already being collected for very different purposes is, in effect, a form of incidental collection.

Repurposing of existing data through application of ABI’s georeference to discover pillar is colloquially called “dumpster diving” by some analysts.

Repurposing data through the process of data conditioning (extracting spatial, temporal, and other key metadata features and indexing based on those features) is a form of incidental collection and is critical to ABI. This is because the information in many cases was collected to service specific collection requirements and/or information needs but is then used to fill different information needs and generate new knowledge. The use of this repurposed data is thus incidental to the original collection intent. This process can be applied across all types of targeted, exquisite forms of intelligence. Individual data points, when aggregated into complete data sets, become incidentally collected data.

Trajectory Magazine wrote in its Winter 2012 issue, “A group of GEOINT analysts deployed to Iraq and Afghanistan began pulling intelligence disciplines together around the 2004–2006 timeframe…these analysts hit upon a concept called ‘geospatial multi-INT fusion.’” Analysts recognized that the one field that all data had in common was location.

9.5 Rethinking the Balance Between Tasking and Exploitation

Incidental collection has direct and disruptive implications for several pieces of the traditional TCPED cycle. The first and perhaps most significant is drastically re-examining the nature of the requirements and tasking process traditionally employed in most intelligence disciplines.

The current formal process for developing intelligence requirements was established after the Second World War, and remains largely in use today. It replaced an ad hoc, informal process of gathering intelligence and professionalized the business of developing requirements [7].

Like most formal intelligence processes, the Cold War requirements legacy was tuned to the unique situation between 1945 and 1991: a bipolar contest between two major state adversaries, the United States and the Soviet Union. The process was thus created with assumptions that, while true at the time, have become increasingly questionable in a multipolar world with numerous near-peer state competitors and increasingly powerful nonstate actors and organizations.
“Satisficing”—collecting just enough that the requirement was fulfilled—required a clear understanding of the goals, from the development of the requirement through the management of the collection process. This, of course, meant that the information needs driving requirement generation had, by definition, to be clearly known, such that technical collection systems could be precisely tasked.

The modern era has shifted from clandestine, narrowly tasked technical sensors to high-volume approaches to collection: wide-area and persistent sensors with long dwell times, and massive volumes of information derived from open and commercial sources. This demands a parallel shift in emphasis in the tasking process. Because of the volumes of information gained from incidentally collected (or constructed) data sets, tasking is no longer the most important function. Rather, focusing increasingly taxed exploitation resources becomes critical, as does the careful application of automation to prepare data in an integrated fashion (performing functions like feature extraction, georeferencing, and semantic understanding). “We must transition from a target-based, inductive approach to ISR that is centered on processing, exploitation, and dissemination to a problem-based, deductive, active, and anticipatory approach that focuses on end-to-end ISR operations,” according to Maj. Gen. John Shanahan, commander of the 25th Air Force, who adds that automation is “a must have.”

Focusing on exploiting specific pieces of data is only one part of the puzzle. The shift from tasking collection to tasking exploitation must be coupled with a new paradigm for collection. Rather than seeking answers to predefined intelligence needs, collection attuned to ABI’s methodology seeks data itself, in order to enable correlations and entity resolution.

9.6 Collecting to Maximize Incidental Gain

The concept of broad collection requirements is not new. ABI, however, is fed by broad requirements for specific data, a new dichotomy not yet confronted by the intelligence and law enforcement communities. This necessitates a change in the tasking and collection paradigms employed in support of ABI, dubbed coarse tasking for discovery.

Decomposing this concept identifies two important parts: the first is the concept of coarse tasking, and the second is the concept of discovery. Coarse tasking first moves collection away from the historical use of collection decks consisting of point targets: specific locations on the Earth. These decks have been used for both airborne and space assets, providing a checklist of targets to service. Coverage of the target in a deck-based system constitutes task fulfillment, and the field of view for a sensor can in many cases cover multiple targets at once.

The tasking model used in collection decks is specific, not coarse, providing the most relevant point of contrast with collection specifically designed for supporting ABI analysis.

Rather than measuring fulfillment via a checklist model, coarse tasking’s goal is to maximize the amount of data (and as a corollary, the amount of potential correlations) in a given collection window. This is made possible because the analytic process of spatiotemporal correlation is what provides information and ultimately meaning to analysts, and the pillar of data neutrality does not force analysts to draw conclusions from any one source, instead relying on the correlations between sources to provide value. Thus, collection for ABI can be best measured through volume, identification and conditioning of relevant metadata features, and spatiotemporal referencing.
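The contrast between checklist fulfillment and correlation-maximizing collection can be sketched numerically. The following is a minimal illustration, not any fielded scoring algorithm: it counts cross-source observation pairs that fall within assumed time and distance windows, a crude proxy for the potential spatiotemporal correlations available in a collection window. All sources, thresholds, and coordinates are hypothetical.

```python
from itertools import combinations

def correlation_score(observations, max_dt=300.0, max_dist=0.01):
    """Count pairs of observations close in both time and space.

    Each observation is (source, t_seconds, lat, lon). The score is a
    proxy for the potential spatiotemporal correlations in a window.
    """
    score = 0
    for a, b in combinations(observations, 2):
        if a[0] == b[0]:
            continue  # cross-source correlations are what add value
        dt = abs(a[1] - b[1])
        dist = ((a[2] - b[2]) ** 2 + (a[3] - b[3]) ** 2) ** 0.5
        if dt <= max_dt and dist <= max_dist:
            score += 1
    return score

# Window A has two sources observing the same activity; window B does not.
window_a = [("sigint", 0, 33.001, 44.001), ("imint", 120, 33.002, 44.002)]
window_b = [("sigint", 0, 33.000, 44.000), ("sigint", 60, 33.000, 44.000)]
print(correlation_score(window_a), correlation_score(window_b))  # 1 0
```

A deck-based metric would instead count targets covered; this score rises with data volume and cross-source overlap, matching the coarse-tasking goal described above.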

9.7 Incidental Collection and Privacy

This approach can raise serious concerns regarding privacy. “Collect it all, sort it out later” is an approach that, when applied to signals intelligence, raised grave concern about the potential for collection against U.S. citizens.

Incidental collection has been portrayed in a negative light with respect to the Section 215 metadata collection program [9]. Part of this, however, is a direct result of the failure of intelligence policy and social norms to keep up with the rapid pace of technological development.

U.S. intelligence agencies, by law, cannot operate domestically, with narrow exceptions carved out for disaster relief functions in supporting roles to lead federal agencies.

While this book will not delve into textual analysis of existing law and policy, one issue that agencies will be forced to confront is the ability of commercial “big data” companies like Google and Amazon to conduct the kind of precision analysis formerly possible only in a government security context.

10

Data, Big Data, and Datafication

The principle of data neutrality espouses the use of new types of data in new ways. ABI represented a revolution in how intelligence analysts worked with a volume, velocity, and variety of data never before experienced.

10.1 Data

Deriving value from large volumes of disparate data is the primary objective of an intelligence analyst.

Data comprises the atomic facts, statistics, observations, measurements, and pieces of information that are the core commodity for knowledge workers like intelligence analysts. Data represents the things we know.
The discipline of intelligence used to be data-poor. The things we did not know and the data we could not obtain far outnumbered the things we knew and the data we had. Today, the digital explosion complicates the work environment because there is so much data that it is simply not possible to gather, process, visualize, and understand it all. Historical intelligence textbooks describe techniques for reasoning through limited data sets and making informed judgments, but analysts today can obtain exceedingly large quantities of data. The key skill now is the ability to triage, prioritize, and correlate information from a giant volume of data.

10.1.1 Classifying Data: Structured, Unstructured, and Semistructured

The first distinction in data management relies on classification of data into one of three categories: structured data, unstructured data, or semistructured data.

Structured Data

SQL works well with relational databases, but critics highlight the lack of portability of SQL queries across RDBMSs from different vendors due to implementation nuances of relational principles and query languages.

As data tables grow in size (number of rows), performance is limited, because many calculations must search the entire table.
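As a concrete illustration of structured data, the snippet below builds a small relational table with Python’s built-in sqlite3 module and runs a typed SQL query against it; the schema and values are invented for the example.

```python
import sqlite3

# In-memory relational table: rigid schema, typed columns, SQL queries.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE sightings (id INTEGER PRIMARY KEY, lat REAL, lon REAL, ts TEXT)"
)
conn.executemany(
    "INSERT INTO sightings (lat, lon, ts) VALUES (?, ?, ?)",
    [(33.31, 44.37, "2015-06-01T10:00"),
     (33.32, 44.38, "2015-06-01T10:05")],
)
# The fixed schema makes queries precise, but every record must fit it.
rows = conn.execute(
    "SELECT COUNT(*) FROM sightings WHERE lat BETWEEN 33.0 AND 34.0"
).fetchone()
print(rows[0])  # 2
```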
Unstructured Data
“Not only SQL” (NoSQL) is a database concept for modeling data that does not fit well into the tabular model of relational databases. There are two classifications of NoSQL databases: key-value and graph.

One of the advantages of NoSQL databases is the property of horizontal scalability, which is also called sharding. Sharding partitions the database into smaller elements based on the value of a field and distributes this to multiple nodes for storage and processing. This improves the performance of calculations and queries that can be processed as subelements of a larger problem using a model called “scatter-gather” where individual processing tasks are farmed out to distributed data storage locations and the resulting calculations are reaggregated and sent to a central location.
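The sharding and scatter-gather pattern described above can be sketched in a few lines. This is a toy, single-process illustration with a hypothetical shard-key scheme, not the implementation of any particular NoSQL product.

```python
import hashlib

N_SHARDS = 4

def shard_for(key: str) -> int:
    # A stable hash of the shard-key field decides which node stores the record.
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % N_SHARDS

def scatter_gather(records, predicate):
    # Scatter: partition records across shards by key value.
    shards = [[] for _ in range(N_SHARDS)]
    for key, value in records:
        shards[shard_for(key)].append((key, value))
    # Each shard computes a partial result (in parallel on a real cluster).
    partials = [sum(v for k, v in shard if predicate(k)) for shard in shards]
    # Gather: reaggregate the partial results at a central location.
    return sum(partials)

records = [("veh-1", 10), ("veh-2", 20), ("veh-3", 30)]
total = scatter_gather(records, lambda k: k.startswith("veh"))
print(total)  # 60
```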

Semistructured Data

The term semistructured data is technically a subset of unstructured data and refers to tagged or taggable data that does not strictly follow a tabular or database record format. Examples include markup languages like XML and HTML where the data inside a file may be queried and analyzed with automated processes, but there is no simple query language that is universally applicable.

Semistructured databases do not require formal governance, but operating a large data enterprise without a governance model makes it difficult to find data and maintain interoperability across data sets.
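For example, XML-tagged reporting can be queried with generic parsers even though no universal query language applies; each schema still needs its own traversal logic. The document and its tags below are invented.

```python
import xml.etree.ElementTree as ET

doc = """<report date="2015-06-01">
  <entity type="vehicle"><location lat="33.31" lon="44.37"/></entity>
  <entity type="person"><location lat="33.32" lon="44.38"/></entity>
</report>"""

root = ET.fromstring(doc)
# The tags make the content machine-queryable, but the traversal below is
# specific to this report schema, not a universal query language.
coords = [(e.get("type"), float(e.find("location").get("lat")))
          for e in root.iter("entity")]
print(coords)  # [('vehicle', 33.31), ('person', 33.32)]
```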

10.1.2 Metadata

Metadata is usually defined glibly as “data about data.” The purpose of metadata is to organize, describe, and identify data. The schema of a database is one type of metadata. The categories of tags used for unstructured or semistructured data sets are also a type of metadata.

Metadata may include extracted or processed information from the actual content of the data.

Clip marks—analyst-annotated explanations of the content of the video—are considered metadata attached to the raw video stream.

Sometimes, the only common metadata between data sets is time and location. We consider these the central metadata values for ABI. The third primary metadata field is a unique identifier. This may be the ID of the individual piece of data or may be associated with a specific object or entity that has a unique identifier. Because one of the primary purposes of ABI is to disambiguate entities, and because analytic judgments must be traceable to the data used to create them, identifying data with unique identifiers (even across multiple databases) is key to enabling analysis techniques.
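A minimal sketch of these three central metadata fields—time, location, and unique identifier—might look like the following; the record layout and correlation thresholds are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AbiRecord:
    record_id: str   # unique identifier, traceable across databases
    t: float         # observation time (epoch seconds)
    lat: float
    lon: float
    source: str      # producing discipline; the content itself may live elsewhere

a = AbiRecord("db1:0001", 1000.0, 33.31, 44.37, "imint")
b = AbiRecord("db2:7842", 1030.0, 33.31, 44.37, "sigint")

# Time and location are the only metadata shared by both sources, so they
# carry the correlation; record_id preserves provenance for the judgment.
correlated = abs(a.t - b.t) < 60 and (a.lat, a.lon) == (b.lat, b.lon)
print(correlated)  # True
```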

10.1.3 Taxonomies, Ontologies, and Folksonomies

A taxonomy is the systematic classification of information, usually into a hierarchical structure of entities of interest.

Because many military organizations and nation-state governments are hierarchical, they are easily modeled in a taxonomy. Also, because the types and classifications of military forces (e.g., aircraft, armored infantry, and battleships) are generally universal across different countries, the relative strength of two different countries is easily compared. Large businesses can also be described using this type of information model. Taxonomies consist of classes but only one type of relationship: “is child/subordinate of.”

An ontology “provides a shared vocabulary that can be used to model a domain, that is, the type of objects and or concepts that exist and their properties and relations” (emphasis added) [6, p. 5]. Ontologies are formal and explicit, but unlike taxonomies, they need not be hierarchical.

Most modern problems have evolved from taxonomic classification to ontological classification to include the shared vocabulary for both objects and relationships. Ontologies pair well with the graph-based NoSQL database method. It is important to note that ontologies are formalized, which requires an existing body of knowledge about the problem and data elements.

With the proliferation of unstructured data, user-generated content, and democratized access to information management resources, the term folksonomy evolved to describe the method for collaboratively creating and translating tags to categorize information [7]. Unlike taxonomies and ontologies, which are formalized, folksonomies evolve as user-generated tags are added to published content. Also, there is no hierarchical (parent-child) relationship between tags. This technique is useful for highly emergent or little-understood problems where an analyst describes attributes of a problem, observations, detections, issues, or objects but the data does not fit an existing model. Over time, as standard practices and common terms are developed, a folksonomy may evolve into a formally governed ontology.
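A folksonomy can be approximated as an ungoverned mapping of items to user-supplied tags. The toy example below counts tag frequency to surface candidates for promotion into a formal ontology; the reports and tags are invented.

```python
from collections import Counter

# User-applied tags accumulate with no governing hierarchy or schema.
tagged_items = {
    "report-1": {"checkpoint", "night", "convoy"},
    "report-2": {"convoy", "border"},
    "report-3": {"convoy", "checkpoint"},
}

tag_counts = Counter(tag for tags in tagged_items.values() for tag in tags)
# Tags that recur across many items are candidates for a governed vocabulary.
common = sorted(tag for tag, n in tag_counts.items() if n >= 2)
print(common)  # ['checkpoint', 'convoy']
```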

10.2 Big Data

Big data is an overarching term that refers to data sets so large and complex they cannot be stored, processed, or used with traditional information management techniques. Altamira’s John Eberhardt defines it as “any data collection that cannot be managed as a single instance.”
10.2.1 Volume, Velocity, and Variety…

In 2001, Gartner analyst Doug Laney introduced the now ubiquitous three-dimensional characterization of “big data” as increasing in volume, velocity, and variety [13]:

Volume: The increase in the sheer number and size of records that must be indexed, managed, archived, and transmitted across information systems.

Velocity: The dramatic speed at which new data is being created and the speed at which processing and exploitation algorithms must execute to keep up with and extract value from data in real time. In the big data paradigm, “batch” processing of large data files is insufficient.

Variety: While traditional data was highly structured, organized, and seldom disseminated outside an organization, today’s data sets are mostly unstructured, schema-less, and evolutionary. The number and type of data sets considered for any analytic task is growing rapidly.

Since Laney’s original description of “the three V’s,” a number of additional “V’s” have been proposed to characterize big data problems. Some of these are described as follows:

Veracity: The truth and validity of the data in question. This includes confidence, pedigree, and the ability to validate the results of processing algorithms applied across multiple data sets. Data is meaningless if it is wrong. Incorrect data leads to incorrect conclusions with serious consequences.

Vulnerability: The need to secure data from theft at rest and corruption in motion. Data analysis is meaningless if the integrity and security of the data cannot be guaranteed.

Visualization: Including techniques for making sense of “big data.” (See Chapter 13.)

Variability: The variations in meaning across multiple data sets. Different sources may use the same term to mean different things, or different terms may have the same semantic meaning.

Value: The end result of data analysis. The ability to extract meaningful and actionable conclusions with sufficient confidence to drive strategic actions. Ultimately, value drives the consequence of data and its usefulness to support decision making.

Because intelligence professionals are called on to make judgments, and because these judgments rely on the underlying data, any failure to discover, correlate, trust, understand, or interpret data or processed and derived data and metadata diminishes the value of the entire intelligence process.

10.2.2 Big Data Architecture

“Big data” definitions hold that a fundamentally different approach to storage, management, and processing of data is required under this new paradigm, but what are some of the technology advances and system architectural distinctions that enable “big data”?

Most “big data” storage architectures use a key-value store based on Google’s BigTable. Accumulo is a variant of BigTable that was developed by the National Security Agency (NSA) beginning in 2008. Accumulo augments the BigTable data model to add cell-level security, which means that a user or algorithm seeking data from any cell in the database must satisfy a “column visibility” attribute of the primary key.
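The effect of cell-level security can be sketched with a simplified model in which each cell carries a set of required labels and a read succeeds only if the caller’s authorizations cover them. Accumulo’s real column-visibility attributes support richer boolean expressions; the keys and labels here are hypothetical.

```python
# Each cell pairs its value with a visibility label set; a read succeeds only
# if the caller's authorizations are a superset (a simplified AND-only model).
cells = {
    ("entity-7", "location"): ({"S", "REL-X"}, "33.31N 44.37E"),
    ("entity-7", "name"):     ({"TS"},         "objective KINGPIN"),
}

def read_cell(key, authorizations):
    required, value = cells[key]
    return value if required <= authorizations else None

print(read_cell(("entity-7", "location"), {"S", "REL-X", "TS"}))  # visible
print(read_cell(("entity-7", "name"), {"S"}))                     # None
```

The same check applies whether the reader is a user or an algorithm, which is the point of pushing security down to the cell level.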

Hadoop relies on a distributed, scalable Java file system, the Hadoop distributed file system (HDFS), which stores large files (gigabytes to terabytes) across multiple nodes with replication to prevent data loss. Typically, the data is replicated three times, with two local copies and one copy at a geographically remote location.

Recognizing that information is increasingly produced by a number of high-volume, real-time devices and must be integrated and processed rapidly to derive value, IBM began the System S research project as “a programming model and an execution platform for user-developed applications that ingest, filter, analyze, and correlate potentially massive volumes of continuous data streams”.

10.2.3 Big Data in the Intelligence Community

Recognizing that information technology spending across the 17 intelligence agencies accounts for nearly 20% of National Intelligence Program funding, the intelligence community embarked on an ambitious consolidation program called the intelligence community information technology environment (IC-ITE), pronounced “eye-sight.”

10.3 The Datafication of Intelligence

In 2013, Kenneth Neil Cukier and Viktor Mayer-Schönberger introduced the term “datafication” to describe the emergent transformation of everything to data. “Once we datafy things, we can transform their purpose and turn the information into new forms of value,” they said.

Over the last 10 years, direct application of commercial “big data” analytic techniques to the intelligence community has thus far missed the mark. There are a number of reasons for this, but first and foremost among them is the fact that most commercial “big data” is exquisitely structured and represents nearly complete data sets. For example, the record of credit card transactions at a major department store includes only credit card transactions at that store, not random strings of numbers that might be UPC codes for fruits and vegetables at a cross-town grocery store. In contrast, intelligence data is typically either unstructured text captured in narrative form or a mixture of widely differing structures.

Furthermore, the nature of intelligence collection—the quest to obtain information on an adversary’s plans and intentions through a number of collection disciplines—all but ensures that the resulting data sets are “sparse,” representing only a small sample of the larger picture from which they are drawn. The difficulty is that, unlike with the algorithm-based methods applied to commercial big data, it is impossible to know the bounds of the larger data set, making reliable and consistent inference of larger trends and patterns impossible.

This does not mean intelligence professionals cannot learn from and benefit from the commercial sector’s experiences with big data. Indeed, industry has a great deal to offer with respect to data conditioning and system architecture. These aspects of commercial systems designed to enable “big data” analysis will be critical to designing the specialized systems needed to deal with the more complex and sparse types of data used by intelligence analysts.

10.3.1 Collecting It “All”

While commercial entities with consistent data sets may have success using algorithmic prediction of patterns based on dense data sets, the key common methodology between “big data” and ABI is the shift away from sampling information at periodic intervals toward examining massive amounts of information abductively and deductively to identify correlations.

Cukier and Mayer-Schönberger, in their assessment of the advantages of “n = all,” effectively argue for a move to a more deductive workflow based on data correlations, rather than causation inferred from sparse data. “n = all” and georeference to discover share a common intellectual heritage predicated on collecting all data in order to focus on correlations within a small portion of the data set: collect broadly, condition the data, and enable the analyst both to explore and to ask intelligence questions of the data.

The approach of “n = all” is the centerpiece of former NSA director general Keith Alexander’s philosophy of “collect it all.” According to a former senior U.S. intelligence official, “rather than look for a single needle in the haystack, his approach was, ‘Let’s collect the whole haystack. Collect it all, tag it, store it… and whatever it is you want, you go searching for it.’”

10.3.2 Object-Based Production (OBP)

In 2013, Catherine Johnston, director of analysis at the Defense Intelligence Agency (DIA), introduced object-based production (OBP), a new way of organizing information in the datafication paradigm. Recognizing the need to adapt to growing complexity with diminishing resources, OBP implements data tagging, knowledge capture, and reporting by “organizing intelligence around objects of interest”.

OBP addresses several shortfalls. Studies have found that known information was poorly organized, partially because information was organized and compartmented by the owner. Reporting was within INT-specific stovepipes. Further compounding the problem, target-based intelligence aligned around known facilities.

An object- and activity-based paradigm is more dynamic. It includes objects that move—vehicles and people—for which known information must be updated in real time. This complicates timely reporting on the status and location of these objects and creates a confusing situational awareness picture when conflicting information is reported from multiple information owners.

QUELLFIRE is the intelligence community’s program to deliver OBP as an enterprise service where “all producers publish to a unifying object model” (UOM) [27, p. 6]. Under QUELLFIRE, OBP objects are incorporated into the overall common intelligence picture (CIP)/common operating picture (COP) to provide situational awareness.

This focus means that the pedigree of the information is time-dominant and must be continually updated. Additional work on standards and tradecraft must be developed to establish a persistent, long-term repository of worldwide intelligence objects and their behaviors.

10.3.3 Relationship Between OBP and ABI

There has been general confusion about the differences between OBP and ABI, stemming from the fact that both methods focus on similar data types and are recognized as evolutions in tradecraft. OBP, which is primarily espoused by DIA, the nation’s all-source military intelligence organization, is focused on order-of-battle analysis, technical intelligence on military equipment, the status of military forces, and battle plans and intentions (essentially organizing the known entities). ABI, led by NGA, began with a focus on integrating multiple sources of geospatial information in a geographic region of interest—evolving with the tradecraft of georeference to discover—toward the discovery and resolution of previously unknown entities based on their patterns of life. This tradecraft produces new objects for OBP to organize, monitor, warn against, and report. OBP, in turn, identifies knowledge gaps: the unknowns that become the focus of the ABI deductive, discovery-based process. Efforts to meld the two techniques are aided by the IC-ITE cloud initiative, which colocates data and improves discoverability of information through common metadata standards.

10.4 The Future of Data and Big Data

Former CIA director David Petraeus highlighted the challenges and opportunities of the Internet of Things in a 2012 speech at In-Q-Tel, the agency’s venture capital research group: “As you know, whereas machines in the 19th century learned to do, and those in the 20th century learned to think at a rudimentary level, in the 21st century, they are learning to perceive—to actually sense and respond” [33]. He further highlighted some of the enabling technologies developed by In-Q-Tel investment companies, listed as follows:

• Item identification, or devices engaged in tagging;
• Sensors and wireless sensor networks—devices that indeed sense and respond;
• Embedded systems—those that think and evaluate;
• Nanotechnology, allowing these devices to be small enough to function virtually anywhere.

In his remarks at the GigaOM Structure:Data conference in New York in 2013, CIA chief technology officer (CTO) Ira “Gus” Hunt said, “It is nearly within our grasp to compute on all human generated information” [35]. This presents new challenges but also new opportunities for intelligence analysis.

11

Collection

Collection is about gathering data to answer questions. This chapter summarizes the key domains of intelligence collection and introduces new concepts and technologies that have codeveloped with ABI methods. It provides a high-level overview of several key concepts, describes several types of collection important to ABI, and summarizes the value of persistent surveillance in ABI analysis.

11.1 Introduction to Collection

Collection is the process of defining information needs and gathering data to address those needs.

The overarching term for remotely collected information is ISR (intelligence, surveillance, and reconnaissance).

The traditional INT distinctions are described as follows:

• Human intelligence (HUMINT): The most traditional “spy” discipline, HUMINT is “a category of intelligence derived from information collected and provided by human sources” [1]. This information is gathered through interpersonal contact: conversations, interrogations, or other like means.

• Signals intelligence (SIGINT): Intelligence gathered by means of intercepting signals. In modern times, this refers primarily to electronic signals.

• Communications intelligence (COMINT): A subdiscipline of SIGINT, COMINT refers to the collection of signals that involve the communication between people, defined by the Department of Defense (DoD) as “technical information and intelligence derived from foreign communications by other than the intended recipients” [2]. COMINT exploitation includes language translation.

• Electronic intelligence (ELINT): A subdiscipline of SIGINT, ELINT refers to SIGINT that is not directly involved in communications. An example includes the detection of an early-warning radar installation by means of sensing emitted radio frequency (RF) energy. (This is not COMINT, because the radar isn’t carrying a communications channel).

• Imagery intelligence (IMINT): Information derived from imagery to include aerial and satellite-based photography. The term “IMINT” has generally been superseded by “GEOINT.”

• Geospatial intelligence (GEOINT): A term coined in 2004 to include “imagery, IMINT, and geospatial information” [3], the term GEOINT reflects the concepts of fusion, integration, and layering of information about the Earth.

• Measurement and signature intelligence (MASINT): Technical intelligence gathering based on unique collection phenomena that focus on specialized signatures of targets or classes of targets.

• Open-source intelligence (OSINT): Intelligence derived from public, open information sources. This includes but is not limited to newspapers, magazines, speeches, radio stations, blogs, video-sharing sites, social-networking sites, and government reports.

Each agency was to produce INT-specific expert assessments of collected information that were then forwarded to the CIA for integrative analysis, called all-source intelligence. The ABI principle of data neutrality posits that all sources of information should be considered equally as sources of intelligence.

There are a number of additional subdisciplines under these headings including technical intelligence (TECHINT), acoustic intelligence (ACINT), financial intelligence (FININT), cyber intelligence (CYBINT), and foreign instrumentation intelligence (FISINT) [4].

Despite thousands of airborne surveillance sorties during 1991’s Operation Desert Storm, efforts to reliably locate Iraq’s mobile SCUD missiles were unsuccessful [5]. The problem was further compounded during counterterrorism and counterinsurgency operations in Iraq and Afghanistan, where the targets of intelligence collection are characterized by weak signals, ambiguous signatures, and dynamic movement. The ability to capture movement intelligence (MOVINT) is one collection modality that contributes to ABI, because it allows direct observation of events and collection of complete transactions.

11.5 Collection to Enable ABI

Traditional collection is targeted, whether the target is a human, a signal, or a geographic location. Since ABI is about gathering all the data and analyzing it with a deductive approach, an incidental collection approach as described in Chapter 9 is more appropriate.

11.6 Persistence: The All-Seeing Eye (?)

For over 2,000 years, military tactics have encouraged the use of the “high ground” for surveillance and reconnaissance of the enemy. From the use of hills and treetops to the advent of military ballooning in the U.S. Civil War to aerial and space-based reconnaissance, nations jockey for the ultimate surveillance high ground. The Department of Defense defines “persistent surveillance” as “a collection strategy that emphasizes the ability of some collection systems to linger on demand in an area to detect, locate, characterize, identify, track, target, and possibly provide battle damage assessment and retargeting in near or real time”.

Popular culture often depicts persistent collection like the all-seeing “Eye of Sauron” in Peter Jackson’s Lord of the Rings trilogy, the omnipresent computer in “Eagle Eye,” or the camera-filled casinos of Ocean’s Eleven, but persistence for intelligence is less about stare and more about sufficiency to answer questions.

In this textbook, persistence is the ability to maintain sufficient frequency, duration, temporal resolution, and spectral resolution to detect change, characterize activity, and observe behaviors. This chapter summarizes several types of persistent collection and introduces the concept of virtual persistence—the ability to maintain persistence of knowledge on a target or set of targets through integration of multiple sensing and analysis modalities.

11.7 The Persistence “Master Equation”

Persistence, P, can be defined in terms of eight fundamental factors:

P = P((x, y), z, T, f, λ, σ, θ, Π)

where
(x, y) is the area coverage usually expressed in square kilometers.
z is the altitude, positive or negative, from the surface of the Earth.
T is the total time, duration, or dwell.
f (or t) is the frequency, exposure time, or revisit rate.

λ is the wavelength (of the electromagnetic spectrum) or the collection phenomenology. Δλ may also be used to represent the discretization of frequency for multisensor collects, spectral sensors, or other means.
σ is the accuracy or precision of the collection or analysis.

θ is the resolution or distinguishability. θ may also express the quality of the information.

Π is the cumulative probability, belief, or confidence in the information.

Combinations of these factors contribute to enhanced persistence.

12

Automated Activity Extraction

The New York Times reported that data scientists “spend from 50 to 80 percent of their time mired in this more mundane labor of collecting and preparing unruly digital data, before it can be explored for useful nuggets” [1]. Pejoratively referred to in the article as “janitor work,” these tasks, also referred to as data wrangling, data munging, data farming, and data conditioning, inhibit progress toward analysis [2]. Conventional wisdom and repeated interviews with data analytics professionals support the “80%” notion [3–5]. Many of these tasks are routine and repetitive: reformatting data into different coordinate systems or data formats; manually tagging objects in imagery and video; backtracking vehicles from destination to origin; and extracting entities and objects in text.

A 2003 study by DARPA in collaboration with several U.S. intelligence agencies found that analysts spend nearly 60% of their time performing research and preparing data for analysis [7]. The so-called bathtub curve, shown in Figure 12.1, shows how a significant percentage of an analyst’s time is spent looking for data (research), formatting it for analysis, writing reports, and working on other administrative tasks. The DARPA study examined whether advances in information technology such as collaboration and analysis tools could invert the “bathtub curve” so that analysts would spend less time wrestling with data and more time collaborating and performing analytic tasks, finding a significant benefit from new IT-enhanced methods.

As the volume, velocity, and variety of data sources available to intelligence analysts explodes, the problem of the “bathtub curve” gets worse.

12.2 Data Conditioning

Data conditioning is an overarching term describing the preparation of data for analysis and is often associated with “automation” because it involves automated processes to prepare data for analysis.

Historically, the phrase “extract, transform, load” (ETL) referred to a series of basic steps to prepare data for consumption by databases and data services. Often, nuanced ETL techniques were tied to a specific database architecture. Data conditioning includes the following:

• Extracting or obtaining the source data from various heterogeneous data sources or identifying a streaming data source (e.g., RSS feed) that provides continuous data input;
• Reformatting the data so that it is machine-readable and compliant with the target data model;
• Cleaning the data to remove erroneous records and adjusting date/time formats or geospatial coordinate systems to ensure consistency;
• Translating the language of data records as necessary;
• Correcting the data for various biases (e.g., geolocation errors);
• Enriching the data by adding derived metadata fields from the original source data (e.g., enriching spatial data to include a calculated country code);
• Tagging or labeling data with security, fitness-for-use, or other structural tags;
• Georeferencing the data to a consistent coordinate system or known physical locations;
• Loading the data into the target data store consistent with the data model and physical structure of the store;
• Validating that the conditioning steps have been done correctly and that queries produce results that meet mission criteria.
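Several of the steps above—cleaning, reformatting, and enriching—can be illustrated with a toy conditioning function. The input format, field names, and binning rule are assumptions for the example, not any operational schema.

```python
import datetime

def condition(record):
    """Apply a few conditioning steps to one raw record.

    Hypothetical input: {"ts": "01/06/2015 10:00", "lat": "33.31", "lon": "44.37"}
    """
    if not record.get("lat") or not record.get("lon"):
        return None  # cleaning: drop records with no usable geolocation
    out = {
        # reformatting: normalize date/time strings to ISO 8601
        "ts": datetime.datetime.strptime(record["ts"], "%d/%m/%Y %H:%M").isoformat(),
        # reformatting: coerce coordinates to floats in a common reference frame
        "lat": float(record["lat"]),
        "lon": float(record["lon"]),
    }
    # enriching: derive a coarse spatial bin to support georeference-to-discover joins
    out["geo_bin"] = (round(out["lat"], 1), round(out["lon"], 1))
    return out

raw = [{"ts": "01/06/2015 10:00", "lat": "33.31", "lon": "44.37"},
       {"ts": "01/06/2015 10:05", "lat": "", "lon": ""}]
conditioned = [r for r in (condition(x) for x in raw) if r]
print(len(conditioned))  # 1
```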

Data conditioning of source data into a spatiotemporal reference frame enables georeference to discover.

While the principle of data neutrality promotes data conditioning from multiple sources, this chapter focuses on a subset of automated activity extraction techniques including automated extraction and geolocation of entities/events from text, extraction of objects/activities from still imagery, and automated extraction of objects, features, and tracks from motion imagery.

12.3 Georeferenced Entity and Activity Extraction

While many applications perform automated text-parsing and entity extraction from unstructured text files, the simultaneous automated extraction of geospatial coordinates is central to enabling the ABI tradecraft of georeference to discover.

Marc Ubaldino, systems engineer and software developer at the MITRE Corporation, described a project called Event Horizon (EH) that “was borne out of an interest in trying to geospatially describe a volume of data—a lot of data, a lot of documents, a lot of things—and put them on a map for analysts to browse, and search, and understand, the details and also the trends” [9]. EH is a custom-developed tool to enable georeference to discover by creating a shapefile database of text documents that are georeferenced to a common coordinate system.

These simple tools are chained together to orchestrate data conditioning and automated processing steps. According to MITRE, these tools have “decreased the human effort involved in correlating multi-source, multi-format intelligence” [10, p. 47]. This multimillion-record corpus of data was first called the “giant load of intelligence” (GLINT). Later, the term evolved to mean geolocated intelligence.

One implementation of this method is the LocateXT software by ClearTerra, a “commercial technology for analyzing unstructured documents and extracting coordinate data, custom place names, and other critical information into GIS and other spatial viewing platforms” [11]. The tool scans unstructured text documents and features a flexible import utility for structured data (spreadsheets, delimited text). The tool supports all Microsoft Office documents (Word, PowerPoint, Excel), Adobe PDF, XML, HTML, Text, and more. Some of the tasks performed by LocateXT are described as follows [12]:

• Extracting geocoordinates, user-defined place names, dates, and other critical information from unstructured data;
• Identifying and extracting thousands of variations of geocoordinate formats;
• Creating geospatial layers from extracted locations;
• Configuring place name extraction using a geospatial layer or gazetteer file;
• Creating custom attributes by configuring keyword search and extraction controls.
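As an illustration of the extraction step (this is not ClearTerra’s actual implementation), a single geocoordinate format can be pulled from unstructured text with a short Python routine; a tool like LocateXT handles thousands of format variations:

```python
import re

# Illustrative pattern for one common format: signed decimal degrees
# written as "lat, lon". Production extractors recognize many more.
COORD = re.compile(
    r"(?P<lat>[-+]?\d{1,2}\.\d+)\s*,\s*(?P<lon>[-+]?\d{1,3}\.\d+)"
)

def extract_coords(text):
    """Return (lat, lon) pairs found in unstructured text,
    discarding values outside valid geographic ranges."""
    hits = []
    for m in COORD.finditer(text):
        lat, lon = float(m.group("lat")), float(m.group("lon"))
        if -90 <= lat <= 90 and -180 <= lon <= 180:
            hits.append((lat, lon))
    return hits

report = ("Convoy observed at 34.4381, 43.7890 moving north; "
          "second sighting near 34.4402, 43.7911.")
print(extract_coords(report))
```

Each extracted pair could then become a point feature in a geospatial layer, enabling georeference to discover across a document corpus.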

12.4 Object and Activity Extraction from Still Imagery

Extraction of objects, features, and activities from imagery is a core element of GEOINT tradecraft and central to training as an imagery analyst. A number of tools and algorithms have been developed to aid in the manual, computer-assisted, and fully automated extraction from imagery. Feature extraction techniques for geoprocessing buildings, roads, trees, tunnels, and other features are widely applied to commercial imagery and used by civil engineers and city planners.

Most facial recognition approaches follow a four-stage model: detect → align → represent → classify. Much research is aimed at the classify step of the workflow. Facebook’s approach improves performance by applying three-dimensional modeling to the alignment step and deriving the facial representation using a deep neural network.

While Facebook’s research applies to universal face detection, classification in the context of the problem set is significantly easier. When the Facebook algorithm attempts to recognize individuals in submitted pictures, it has information about the “friends” to which the user is currently linked (in ABI parlance, a combination of contextual and relational information). It is much more likely that an individual in a user-provided photograph is related to the user through his or her social network. This property, called local partitioning, is useful for ABI. If an analyst can identify a subset of the data that is related to the target through one or more links (for example, a history of spatial locations previously visited), the dimensionality of the wide area search and targeting problem can be exponentially reduced.
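A toy Python sketch of local partitioning (all identifiers invented): restricting candidate matches to entities already linked to the target collapses the search space before any classification runs:

```python
# Hypothetical gallery of known identities (face templates elided) and
# the social links of the user who uploaded a photo.
gallery = {f"person_{i}": f"template_{i}" for i in range(1_000_000)}
user_links = {"person_42", "person_99", "person_512"}

def candidate_set(gallery, links):
    """Local partitioning: only identities linked to the user are
    plausible matches, so the classify step searches a small subset
    instead of the full gallery."""
    return {pid: gallery[pid] for pid in links if pid in gallery}

candidates = candidate_set(gallery, user_links)
print(f"{len(candidates)} candidates instead of {len(gallery):,}")
```

The same idea applies to wide area search: a history of spatial locations previously visited plays the role of the social links above.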

“Recognizing activities requires observations over time, and recognition performance is a function of the discrimination power of a set of observational evidence relative to the structure of a specific activity set” [45]. The authors highlight the importance of increasingly proliferating persistent surveillance sensors and focus on activities identified by a critical geospatial, temporal, or interactive pattern in highly cluttered environments.

12.6.6 Detecting Anomalous Tracks

Another automation technique that can be applied to wide area data is the detection of anomalous behaviors—that is, “individual tracks where the track trajectory is anomalous compared to a model of typical behavior.”
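A minimal sketch of this idea in Python, using a single trajectory statistic (mean speed) and a z-score test against a model of typical behavior (all values invented; real systems model full trajectories, not one scalar):

```python
import statistics

# Mean speeds (m/s) of historical tracks form the "typical behavior" model.
typical_speeds = [12.1, 11.8, 12.4, 13.0, 11.5, 12.7, 12.2, 11.9]
mu = statistics.mean(typical_speeds)
sigma = statistics.stdev(typical_speeds)

def is_anomalous(track_speed, threshold=3.0):
    """Flag a track whose trajectory statistic falls more than
    `threshold` standard deviations from the learned model."""
    return abs(track_speed - mu) / sigma > threshold

print(is_anomalous(12.3))  # consistent with typical behavior
print(is_anomalous(25.0))  # anomalous track: cue for an analyst
```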

12.7 Metrics for Automated Algorithms

One of the major challenges in establishing revolutionary algorithms for automated activity extraction, identification, and correlation is the lack of standards with which to evaluate performance. DARPA’s PerSEAS program introduced several candidate metrics that are broadly applicable across this class of algorithms…

12.8 The Need for Multiple, Complementary Sources

In signal processing and sensor theory, the most prevalent descriptive plot is the receiver operating characteristic (ROC) curve, a plot of true positive rate or probability of detection versus FAR.
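A short Python sketch showing how a ROC curve is traced by sweeping a detection threshold over hypothetical detector scores (scores and labels invented for illustration):

```python
# Hypothetical detector scores: higher means "more likely a true detection".
scores = [0.95, 0.90, 0.80, 0.70, 0.60, 0.55, 0.40, 0.30, 0.20, 0.10]
labels = [1,    1,    0,    1,    1,    0,    0,    1,    0,    0]  # truth

def roc_points(scores, labels):
    """Sweep the threshold over every score to trace the ROC curve:
    true positive rate (probability of detection) vs. false alarm rate."""
    pos = sum(labels)
    neg = len(labels) - pos
    points = []
    for thr in sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= thr and y == 0)
        points.append((fp / neg, tp / pos))  # (FAR, TPR)
    return points

for far, tpr in roc_points(scores, labels):
    print(f"FAR={far:.1f}  TPR={tpr:.1f}")
```

A better detector pushes the curve toward the upper-left corner: high probability of detection at low false alarm rate.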

12.9 Summary

Speaking at the Space Foundation’s National Space Symposium in May 2014, DNI James Clapper said, “We will have systems that are capable of persistence: staring at a place for an extended period of time to detect activity; to understand patterns of life; to warn us when a pattern is broken, when the abnormal happens; and even to use ABI methodologies to predict future actions.”

The increasing volume, velocity, and variety of “big data” introduced in Chapter 10 requires implementation of automated algorithms for data conditioning, activity/event extraction from unstructured data, object/activity extraction from imagery, and automated detection/tracking from motion imagery.

On the other hand, “deus ex machina,” Latin for “god from the machine,” is a literary term for the resolution of a seemingly impossible and complex situation by improbable or divine means. Increasingly sophisticated “advanced analytic” algorithms risk disconnecting analysts from the data by encouraging them to simply place trust in a “magical black box.” In practice, no analyst will trust any piece of data without documented provenance and without understanding exactly how it was collected and processed.

Automation also removes the analyst from the art of performing analysis. Early in the development of the ABI methodology, analysts were forced to do the dumpster diving and “data janitorial work” to condition their own data for analysis. In the course of doing so, analysts were close to each individual record, becoming intimately familiar with the metadata. Often, analysts stumbled upon anomalies or patterns in the course of doing this work. Automated data conditioning algorithms may reformat and “clean” data to remove outliers—but as any statistician knows—all the interesting behaviors are in the tails of the distribution.

13

Analysis and Visualization

Analysis of large data sets increasingly requires a strong foundation in statistics and visualization. This chapter introduces the key concepts behind data science and visual analytics. It demonstrates key statistical, visual, and spatial techniques for analysis of large-scale data sets. The chapter provides many examples of visual interfaces used to understand and analyze large data sets.

13.1 Introduction to Analysis and Visualization

Analysis is defined as “a careful study of something to learn about its parts, what they do, and how they are related to each other.”

The core competency of the discipline of intelligence is to perform analysis, deconstructing complex mysteries to understand what is happening and why. Figure 13.1 highlights key functional terms for analysis and the relative benefit/effort required for each.

13.1.1 The Sexiest Job of the 21st Century…

Big-budget motion pictures seldom glamorize the careers of statisticians, operations researchers, and intelligence analysts. Analysts are not used to being called “sexy,” but in a 2012 article in the Harvard Business Review, Thomas Davenport and D. J. Patil called out the data scientist as “the sexiest job of the 21st century” [2]. The term was coined around 2008 to recognize the emerging job roles associated with large-scale data analytics at companies like Google, Facebook, and LinkedIn. Data scientists combine the skills of a statistician, a computer scientist, and a software engineer, and the proliferation of data science across commercial and government sectors reflects the significant value that competitive organizations derive from data analysis. Today we are seeing an integration of data science and intelligence analysis, as intelligence professionals are driven to discover answers in giant haystacks of unstructured data.

According to Leek, Peng, and Caffo, the key tasks for data scientists are the following [4, p. 2].

• Defining the question;
• Defining the ideal data set;
• Obtaining and cleaning the data;
• Performing exploratory data analysis;
• Performing statistical prediction/modeling;
• Interpreting results;
• Challenging results;
• Synthesizing and writing up and distributing results.

Each of these tasks presents unique challenges. Often, the most difficult step of the analysis process is defining the question, which, in turn, drives the type of data needed to answer it. In a data-poor environment, the most time-consuming step was usually the collection of data; in a modern “big data” environment, however, a majority of analysts’ time is spent cleaning and conditioning the data for analysis. Few data sets—even publicly available ones—are well-conditioned for instantaneous import and analysis. Often column headings, date formats, and even individual records must be reformatted before the data can be viewed for the first time. Messy data is almost always an impediment to rapid analysis, and decision makers have little understanding of the chaotic data landscape experienced by the average data scientist.

13.1.2 Asking Questions and Getting Answers

The most important task for an intelligence analyst is determining what questions to ask. The traditional view of intelligence analysis places the onus of defining the question on the intelligence consumer, typically a policy maker.

Asking questions from a data-driven and intelligence problem–centric viewpoint is the central theme of this textbook and the core analytic focus for the ABI discipline. Sometimes, collected data limits the questions that may be asked. Unanswerable questions define additional data needs, met either through collection or through processing.

Analysis takes several forms, described as follows:

• Descriptive: Describe a set of data using statistical measures (e.g., census).
• Inferential: Develop trends and judgments about a larger population using a subset of data (e.g., exit polls).
• Predictive: Use a series of data observations to make predictions about the outcomes or behaviors of another situation (e.g., sporting event outcomes).
• Causal: Determine the impact on one variable when you change one or more other variables (e.g., medical experimentation).
• Exploratory: Discover relationships and connections by examining data in bulk, sometimes without an initial question in mind (e.g., intelligence data).

Because the primary focus of ABI is discovery, the main branch of analysis applied in this textbook is exploratory analysis.


13.2 Statistical Visualization

ABI analysis benefits from the combination of statistical processes and visualization. This section reviews some of the basic statistical functions that provide rapid insight into activities and behaviors.

13.2.1 Scatterplots

One of the most basic statistical tools used in data analysis and quality engineering is the scatterplot or scattergram, a two-dimensional Cartesian graph of two variables.

Correlation, discussed in detail in Chapter 14, is the statistical dependence between two variables in a data set.
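The dependence visible in a scatterplot can be quantified with the Pearson correlation coefficient. A self-contained sketch with invented data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two variables:
    covariance normalized by the product of standard deviations."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two hypothetical variables scattered around a linear trend.
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2]
print(round(pearson_r(x, y), 3))  # near +1: strong positive dependence
```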

13.2.2 Pareto Charts

Joseph Juran, a pioneer in quality engineering, developed the Pareto principle and named it after Italian economist Vilfredo Pareto. Also known as “the 80/20 rule,” the Pareto principle is a common rule of thumb that 80% of observations tend to come from 20% of the causes. In mathematics, this manifests as a power law, also called the Pareto distribution, whose cumulative distribution function is given as:

F(x) = 1 − (x_m/x)^α, for x ≥ x_m

where x_m is the minimum possible value of x and α, the Pareto index, is a number greater than 1 that defines the slope of the Pareto distribution. For an “80/20” power law, α ≈ 1.161. The power law curve appears in many natural processes, especially in information theory. It was popularized in Chris Anderson’s 2006 book The Long Tail: Why the Future of Business Is Selling Less of More.
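The quoted index can be checked numerically. For a Pareto distribution, the Lorenz curve is L(F) = 1 − (1 − F)^(1 − 1/α), so the share of total mass held by the top 20% is (0.2)^(1 − 1/α); setting that equal to 0.8 gives α = log(5)/log(4):

```python
import math

# The "80/20" Pareto index solves (0.2)**(1 - 1/alpha) = 0.8,
# giving alpha = log(5) / log(4).
alpha = math.log(5) / math.log(4)
print(round(alpha, 3))  # 1.161, as quoted in the text

def pareto_cdf(x, x_m=1.0, a=alpha):
    """Cumulative distribution function of the Pareto distribution."""
    return 1.0 - (x_m / x) ** a if x >= x_m else 0.0

# Share of total mass held by the top 20% of the population:
top20_share = 0.2 ** (1 - 1 / alpha)
print(round(top20_share, 2))  # the 80/20 rule recovered
```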

A variation on the Pareto chart, called the “tornado chart,” is shown in Figure 13.6. Like the Pareto chart, bars indicate the significance of the contribution on the response but the bars are aligned about a central axis to show the direction of correlation between the independent and dependent variables.

Pareto charts are useful in formulating initial hypotheses about the possible dependence between two data sets or for identifying a collection strategy to reduce the standard error in a model. Statistical correlation using Pareto charts and the Pareto principle is one of the simplest methods for data-driven discovery of important relationships in real-world data sets.

13.2.3 Factor Profiling

Factor profiling examines the relationships between independent and dependent variables. The profiler in Figure 13.7 shows the predicted response (dependent variable) as each independent variable is changed while all others are held constant.

13.3 Visual Analytics

Visual analytics was defined by Thomas and Cook of the Pacific Northwest National Laboratory in 2005 as “the science of analytical reasoning facilitated by interactive visual interfaces” [8]. The emergent discipline combines statistical analysis techniques with increasingly colorful, dynamic, and interactive presentations of data. Intelligence analysts increasingly rely on software tools for visual analytics to understand trends, relationships and patterns in increasingly large and complex data sets. These methods are sometimes the only way to rapidly resolve entities and develop justifiable, traceable stories about what happened and what might happen next.

Large data volumes present several unique challenges. First, just transforming and loading the data is a cumbersome prospect. Most desktop tools are limited by the size of the data table that can be in memory, requiring partitioning before any analysis takes place. The a priori partitioning of a data set requires judgments about where the break points should be placed, and these may arbitrarily steer the analysis in the wrong direction. Large data sets also tend to exhibit “wash out” effects. The average data values make it very difficult to discern what is useful and what is not. In location data, many entities conduct perfectly normal transactions. Entities of interest exploit this effect to effectively hide in the noise.

As dimensionality increases, potential sources of causality and multivariable interactions also increase. This tends to wash out the relative contribution of each variable on the response. Again, another paradox arises: Arbitrarily limiting the data set means throwing out potentially interesting correlations before any analysis has taken place.

Analysts must take care to avoid visualization for the sake of visualization. Sometimes, the graphic doesn’t mean anything or reveal an interesting observation. Visualization pioneer Edward Tufte coined the term “chartjunk” to refer to these unnecessary visualizations in his 1983 book The Visual Display of Quantitative Information, saying:

The interior decoration of graphics generates a lot of ink that does not tell the viewer anything new. The purpose of decoration varies—to make the graphic appear more scientific and precise, to enliven the display, to give the designer an opportunity to exercise artistic skills. Regardless of its cause, it is all non-data-ink or redundant data-ink, and it is often chartjunk.

Michelle Borkin and Hanspeter Pfister of the Harvard School of Engineering and Applied Sciences studied over 5,000 charts and graphics from scientific papers, design blogs, newspapers, and government reports to identify characteristics of the most memorable ones. “A visualization will be instantly and overwhelmingly more memorable if it incorporates an image of a human-recognizable object—if it includes a photograph, people, cartoons, logos—any component that is not just an abstract data visualization,” says Pfister. “We learned that any time you have a graphic with one of those components, that’s the most dominant thing that affects the memorability”

13.4 Spatial Statistics and Visualization

The concept of putting data on a map to improve situational awareness and understanding may seem trite, but the first modern geospatial computer system was not proposed until 1968. While working for the Department of Forestry and Rural Development for the Government of Canada, Roger Tomlinson introduced the term “geographic information system” (now GIS) as a “computer-based system for the storage and manipulation of map-based land data.”

13.4.1 Spatial Data Aggregation

A popular form of descriptive analysis using spatial statistics is the use of subdivided maps based on aggregated data. Typical uses include visualization of census data by tract, county, state, or other geographic boundaries.
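A minimal sketch of this kind of spatial aggregation (tract identifiers invented): counting georeferenced events by region produces the per-area values a subdivided map displays:

```python
from collections import Counter

# Hypothetical events already georeferenced to census tracts during
# data conditioning; aggregating by tract yields the per-region
# values that a subdivided (choropleth) map colors.
events = [
    {"id": 1, "tract": "101.01"},
    {"id": 2, "tract": "101.01"},
    {"id": 3, "tract": "101.02"},
    {"id": 4, "tract": "102.00"},
    {"id": 5, "tract": "101.01"},
]
counts_by_tract = Counter(e["tract"] for e in events)
for tract, n in counts_by_tract.most_common():
    print(tract, n)
```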

Using a subset of data to make judgments about a larger population is called inferential analysis.

13.4.2 Tree Maps

Figure 13.10 shows a tree map of spatial data related to telephone call logs for a business traveler. A tree map is a technique for visualizing categorical, hierarchical data with nested rectangles.

In the Map of the Market, the boxes are categorized by industry, sized by market capitalization, and colored by the change in the stock price. A financial analyst can instantly see that consumer staples are down and basic materials are up. The entire map turns red during a broad sell-off. Variations on the Map of the Market segment the visualization by time so analysts can view data in daily, weekly, monthly, or 52-week increments.

The tree map is a useful visualization for patterns—in this case transactional patterns categorized by location and recipient. The eye is naturally drawn to anomalies in color, shape, and grouping. These form the starting point for further analysis of activities and transactions, postulates of relationships between data elements, and hypothesis generation about the nature of the activities and transactions as illustrated above. While tree maps are not inherently spatial, this application shows how spatial analysis can be incorporated and how the spatial component of transactional data generates new questions and leads to further analysis.

This type of analysis reveals a number of other interesting things about the entity’s (and related entities’) pattern-of-life elements. If all calls contain only two entities, then when entity A calls entity B, we know that both entities are (likely) not talking to someone else during that time.

13.4.3 Three-Dimensional Scatterplot Matrix

Three-dimensional colored dot plots are widely used in media and scientific visualization because they are complex and compelling. Although it seems reasonable to extend two-dimensional visualizations to three dimensions, these depictions are often visually overwhelming and seldom convey additional information that cannot be viewed using a combination of two-dimensional plots more easily synthesized by humans.

GeoTime is a spatial/temporal visualization tool that plays back spatially enabled data like a movie. It allows analysts to watch entities move from one location to another and interact through events and transactions. Patterns of life are also easily evident in this type of visualization.

Investigators and lawyers use GeoTime in criminal cases to show the data-driven story about an entity’s pattern of movements and activities

13.4.4 Spatial Storytelling

The latest technique incorporated into multi-INT tradecraft and visual analytics is spatial storytelling: using data about time and place to animate a series of events. Several statistical analysis tools have implemented storytelling or sequencing capabilities.

Online spatial storytelling communities have developed as collaborative groups of data scientists and geospatial analysts combine their tradecraft with increasingly proliferated, spatially enabled data. The MapStory Foundation, a 501(c)(3) educational organization founded in 2011 by social entrepreneur Chris Tucker, developed an open, online platform for sharing stories about our world and how it develops over time.

13.5 The Way Ahead
Visualizing relationships across large, multidimensional data sets quickly requires more screen real estate than the average desktop computer provides. NGA’s 24-hour operations center, with a “knowledge wall” composed of 56 eighty-inch monitors, was inspired by the television show “24.”

There are several key technology areas that provide potential for another paradigm shift in how analysts work with data. Some of the benefits of these technological advances were highlighted by former CIA chief technology officer Gus Hunt at a 2010 forum on big data analytics:

• Elegant, powerful, and easy-to-use tools and visualizations;
• Intelligent systems that learn from the user;
• Machines to do more of the heavy lifting;
• A move to correlation, not search;
• A “curiosity layer”—signifying machines that are curious on your behalf.

14

Correlation and Fusion

Correlation of multiple sources of data is central to the integration before exploitation pillar of ABI and was the first major analytic breakthrough in combating adversaries that lack signature and doctrine.

Fusion, whether accomplished by a computer or a trained analyst, is central to this task. The suggested readings for this chapter alone fill several bookshelves.

Data fusion has evolved over 40 years into a complete discipline in its own right. This chapter provides a high-level overview of several key concepts in the context of ABI processing and analysis while directing the reader to further detailed references on this ever evolving topic.

14.1 Correlation

Correlation is the tendency of two variables to be related to each other. ABI relies heavily on correlation between multiple sources of information to understand patterns of life and resolve entities. The terms “correlation” and “association” are closely related.

A central principle of ABI is the need to correlate data from multiple sources—data neutrality—without a priori regard for the significance of data. In ABI, correlation leads to discovery of significance.

Scottish philosopher David Hume, in his 1748 book An Enquiry Concerning Human Understanding, defined association in terms of resemblance, contiguity [in time and place], and causality. Hume says, “The thinking on any object readily transports the mind to what is contiguous”—an eighteenth-century statement of georeference to discover [1].

14.1.1 Correlation Versus Causality

One of the most oft-quoted maxims in data analysis is “correlation does not imply causality.”

Many doubters of data science and mathematics use this sentence to deny any analytic result, dismissing a statistically valid fact as “pish posh.” Correlation can be a powerful indicator of possible causality and a clue for analysts and researchers to continue an investigative hypothesis.

In Thinking, Fast and Slow, Kahneman notes that we “easily think associatively, we think metaphorically, we think causally, but statistics requires thinking about many things at once,” which is difficult for humans to do without great effort.

The only way to prove causality is through controlled experiments where all external influences are carefully controlled and their responses measured. The best example of controlled evaluation of causality is through pharmaceutical trials, where control groups, blind trials, and placebos are widely used.

In the discipline of intelligence, the ability to prove causality is effectively zero. Subjects of analysis are seldom participatory. Information is undersampled, incomplete, intermittent, erroneous, and cluttered. Knowledge lacks persistence. Sensors are biased. The most important subjects of analysis are intentionally trying to deceive you. Any medical trial conducted under these conditions would be laughably dismissed.

Remember: correlations are clues and indicators to dig deeper. Just as starts and stops are clues to begin transactional analysis at a location, the presence of a statistical correlation or a generic association between two factors is a hint to begin the process of deductive or abductive analysis there. Therefore, statistical analysis of data correlation is a powerful tool to combine information from multiple sources through valid, justifiable mathematical relationships, avoiding the human tendency to make subjective decisions based on known, unavoidable, irrational biases.

14.2 Fusion

The term “fusion” refers to “the process or result of joining two or more things together to form a single entity” [6]. Waltz and Llinas introduce the analogy of the human senses, which readily and automatically combine data from multiple receptors (each with different measurement characteristics) to interpret the environment.

Fusion is the process of disambiguating two or more objects, variables, measurements, or entities, asserting—with a defined confidence value—that the elements are the same. Simply put, the difference between correlation and fusion is that correlation says “these two elements are related,” while fusion says “these two objects are the same.”

Data fusion “combines data from multiple sensors and related information to achieve more specific inferences than could be achieved by using a single, independent sensor”

The evolution of data fusion methods since the 1980s recognizes that fusion of information to improve decision making is a central process in many human endeavors, especially intelligence. Data fusion has been recognized as a mathematical discipline in its own right, and numerous conferences and textbooks have been dedicated to the subject.

The mathematical techniques for data fusion can be applied to many problems in information theory, such as intelligence analysis and ABI. The literature highlights the often confusing terminology used by multiple communities (see Figure 14.1) that rely on similar mathematical techniques with related objectives. Target tracking, for example, is a critical enabler for ABI but is only a small subset of the techniques in data fusion and information fusion.

14.2.1 A Taxonomy for Fusion Techniques

Recognizing that “developing cost-effective multi-source information systems requires a standard method for specifying data fusion processing and control functions, interfaces, and associated databases,” the Joint Directors of Laboratories (JDL) proposed a general taxonomy for data fusion systems in the 1980s.
The fusion levels defined by the JDL are as follows:

• Source preprocessing, sometimes called level 0 processing, is data association and estimation below the object level. This step was added to the three-level model to reflect the need to combine elementary data (pixel level, signal level, character level) to determine an object’s characteristics. Detections are often categorized as level 0.
• Level 1 processing, object refinement, combines sensor data to estimate the attributes or characteristics of an object to determine position, velocity, trajectory, or identity. This data may also be used to estimate the future state of the object. Hall and Llinas include sensor alignment, association, correlation, correlation/tracking, and classification in level 1 processing [11].
• Level 2 processing, situation refinement, “attempts to develop a description of current relationships among entities and events in the context of their environment” [8, p. 9]. Contextual information about the object (e.g., class of object and kinematic properties), the environment (e.g., the object is present at zero altitude in a body of water), or other related objects (e.g., the object was observed coming from a destroyer) refines state estimation of the object. Behaviors, patterns, and normalcy are included in level 2 fusion.
• Level 3 processing, threat refinement or significance estimation, is a high-level fusion process that characterizes the object and draws inferences in the future based on models, scenarios, state information, and constraints. Most advanced fusion research focuses on reliable level 3 techniques. This level includes prediction of future events and states.
• Level 4 processing, process refinement, augmented the original model by recognizing that continued observations can feed back into fusion and estimation processes to improve overall system performance. This can include multiobjective optimization or new techniques to fuse data when sensors operate on vastly different timescales [12, p. 12].
• Level 5 processing, cognitive refinement or human/computer interface, recognizes the role of the user in the fusion process. Level 5 includes approaches for fusion visualization, cognitive computing, scenario analysis, information sharing, and collaborative decision making. Level 5 is where the analyst performs correlation and fusion for ABI.

The designation as “levels” may be confusing to apprentices in the field as there is no direct correlation to the “levels of knowledge” associated with knowledge management. The JDL fusion levels are more accurately termed categories; a single piece of information does not have to traverse all five “levels” to be considered fused.

According to Hall and Llinas, “the most mature area of data fusion process is level 1 processing,” and a majority of applications fall into or include this category. Level 1 processing relies on estimation techniques such as Kalman filters, multiple hypothesis tracking (MHT), or joint probabilistic data association.
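To give a flavor of the level 1 estimation step, here is a deliberately simplified one-dimensional Kalman filter in Python that fuses noisy position reports into a refined estimate (measurements and noise values are invented, and velocity is held fixed for brevity; operational trackers estimate a full multidimensional state):

```python
# Minimal 1-D Kalman filter: fuse noisy position measurements of an
# object moving at (assumed) constant velocity into a refined state
# estimate -- the essence of level 1 object refinement.
measurements = [1.2, 2.1, 2.9, 4.2, 5.0]  # hypothetical sensor reports
dt, q, r = 1.0, 0.01, 0.25  # timestep, process noise, measurement noise

x, v = 0.0, 1.0  # initial position estimate and fixed velocity
p = 1.0          # variance of the position estimate (scalar case)

for z in measurements:
    # Predict: propagate the state forward and grow its uncertainty.
    x = x + v * dt
    p = p + q
    # Update: weight measurement vs. prediction by the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p

print(f"fused position estimate: {x:.2f}, variance: {p:.3f}")
```

Note how the final variance is far smaller than either the initial uncertainty or the measurement noise: combining sources yields a more specific inference than any single report.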

Data fusion applications for detection, identification, characterization, extraction, location, and tracking of individual objects fall in level 1. Higher-level techniques that consider the behaviors of that object in the context of its surroundings and possible courses of action are associated with levels 2 and 3. These higher-level processing methods are more akin to the analytic “sensemaking” performed by humans, but computational architectures that perform mathematical fusion calculations may be capable of operating with greatly reduced decision timelines. A major concern, of course, is turning over what amounts to decision authority to silicon-based processors and mathematical algorithms, especially when those algorithms are difficult to calibrate.

14.2.2 Architectures for Data Fusion

The voluminous literature on data fusion includes several architectures for data fusion that follow the same pattern. Hall and Llinas propose three alternatives:

1. “Direct fusion of sensor data;

2. Representation of sensor data via feature vectors, with subsequent fusion of the feature vectors;

3. Processing of each sensor to achieve high-level inferences or decisions, which are subsequently combined [8].”

14.3 Mathematical Correlation and Fusion Techniques

Most architectures and applications for multi-INT fusion, at their cores, rely on various mathematical techniques for conditional probability assessment, hypothesis management, and uncertainty quantification/propagation. The most basic and widely used of these techniques, Bayes’s theorem, Dempster-Shafer theory, and belief networks, are discussed in this section.

14.3.1 Bayesian Probability and Application of Bayes’s Theorem

One of the most widely used techniques in information theory and data fusion is Bayes’s theorem. Named after English clergyman Thomas Bayes, who first documented it in 1763, the relation is a statement of conditional probability and its dependence on prior information. Bayes’s theorem calculates the probability of an event, A, given information about event B and information about the likelihood of one event given the other. The standard form of Bayes’s theorem is given as:

P(A|B) = P(B|A) P(A) / P(B)

where

P(A) is the prior probability, that is, the initial degree of belief in event A;

P(A|B) is the conditional probability of A given that event B occurred (also called the posterior probability in Bayes’s theorem);

P(B|A) is the conditional probability of B, given that event A occurred, also called the likelihood;

P(B) is the probability of event B.

This equation is sometimes generalized for a set of mutually exclusive and exhaustive hypotheses Ai as:

P(Ai|B) = P(B|Ai) P(Ai) / Σj P(B|Aj) P(Aj)

or, said as “the posterior is proportional to the likelihood times the prior,” as:

P(A|B) ∝ P(B|A) P(A)

Sometimes, Bayes’s theorem is used to compare two competing statements or hypotheses and is given in the form:

P(A|B) = P(B|A) P(A) / [P(B|A) P(A) + P(B|¬A) P(¬A)]

where P(¬A) is the prior probability against event A, or 1 – P(A), and P(B|¬A) is the conditional probability or likelihood of B given that event A is false.

Taleb explains that this type of statistical and inferential thinking is not intuitive to most humans due to evolution: “consider that in a primitive environment there is no consequential difference between the statements most killers are wild animals and most wild animals are killers.”

In the world of prehistoric man, those who treated these statements as equivalent probably increased their probability of staying alive. In the world of statistics, these are two different statements that can be represented probabilistically. Bayes’s theorem is useful in calculating quantitative probabilities of events based on observations of other events, using the property of transitivity and priors to calculate unknown knowledge from that which is known. In ABI, it is used to formulate a probability-based reasoning tree for observable intelligence events.
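The two-hypothesis form of the theorem is simple to compute. The Python sketch below works through a hypothetical vehicle-identification vignette invented for illustration (the priors and likelihoods are not from the chapter):

```python
# Bayes's theorem sketch: revise a prior belief with one observation.

def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|not-A)P(not-A)]"""
    p_not_a = 1.0 - p_a
    evidence = p_b_given_a * p_a + p_b_given_not_a * p_not_a
    return p_b_given_a * p_a / evidence

# Prior belief the observed vehicle is a mobile missile launcher: 10%.
# A radar signature match occurs for 90% of launchers but only 5% of
# other vehicles. One observation sharply revises the prior upward:
p = posterior(0.10, 0.90, 0.05)
print(round(p, 3))  # 0.667
```

Note how a single, fairly discriminating observation moves a 10% prior to roughly a two-thirds posterior; this is the quantitative version of the reasoning tree ABI analysts construct over observable intelligence events.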

Application of Bayes’s Theorem to Object Identification

The frequency or rarity of the objects in step 1 of Figure 14.7 is called the base rate. Numerous studies of probability theory and decision-making show that humans tend to overestimate the likelihood of events with low base rates. (This tends to explain why people gamble). Psychologists Amos Tversky and Daniel Kahneman refer to the tendency to overweight salient, descriptive, and vivid information at the expense of contradictory statistical information as the representativeness heuristic [15].

The CIA examined Bayesian statistics in the 1960s and 1970s as an estimative technique in a series of articles in Studies in Intelligence. An advantage of the method noted by CIA researcher, Jack Zlotnick is that the analyst makes “sequence of explicit judgments on discrete units” of evidence rather than “a blend of deduction, insight, and inference from the body of evidence as a whole” [16]. He notes, “The research findings of some Bayesian psychologists seem to show that people are generally better at appraising a single item of evidence than at drawing inferences from the body of evidence considered in the aggregate” [17].

The process for Bayesian combination of probability distributions from multiple sensors to produce a fused entity identification is shown in Figure 14.8. Each individual sensor produces a declaration matrix, which is that sensor’s declared view of the object’s identity based on its attributes—sensed characteristics, behaviors, or movement properties. The individual probabilities are combined jointly using the Bayesian formula. Decision logic is applied to select the maximum a posteriori (MAP) estimate that represents the highest probability of correct identity. Decision rules can also be applied to threshold the MAP based on constraints or to apply additional deductive logic from other fusion processes. The resolved entity is declared (with an associated probability). When used with a properly designed multisensor data management system, this declaration maintains provenance back to the original sensor data.
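A minimal sketch of this combination step follows. The candidate identities and sensor declarations are invented for illustration; a real declaration matrix would come from sensor models like those the chapter describes:

```python
# Sketch of the Figure 14.8 pipeline: each sensor declares a probability
# over candidate identities; declarations are combined with Bayes's rule
# (multiply and renormalize) and decision logic selects the MAP estimate.

identities = ["tank", "truck", "decoy"]
prior      = [1/3, 1/3, 1/3]          # uninformative prior
sensor_a   = [0.70, 0.20, 0.10]       # e.g., an imagery classifier
sensor_b   = [0.60, 0.10, 0.30]       # e.g., a signature match

joint = [p * a * b for p, a, b in zip(prior, sensor_a, sensor_b)]
total = sum(joint)
posterior = [j / total for j in joint]     # renormalize

# Decision logic: pick the maximum a posteriori identity.
map_index = max(range(len(identities)), key=lambda i: posterior[i])
print(identities[map_index])               # tank
print(round(posterior[map_index], 2))
```

Thresholding rules would go after the MAP selection, for example declaring “unresolved” if the winning posterior falls below a confidence floor.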

Bayes’s formula provides a straightforward, easily programmed mathematical formulation for probabilistic combination of multiple sources of information; however, it does not provide a straightforward representation for a lack of information. A modification of Bayesian probability called Dempster-Shafer theory introduces additional factors to address this concern.

14.3.2 Dempster-Shafer Theory

Dempster-Shafer theory is a generalization of Bayesian probability based on the integration of two principles. The first is belief functions, which allow degrees of belief for one question to be derived from subjective probabilities for a related question. The degree to which the belief is transferable depends on how related the two questions are and on the reliability of the source [18]. The second principle is Dempster’s composition rule, which allows independent beliefs to be combined into an overall belief about each hypothesis [19]. According to Shafer, “The theory came to the attention of artificial intelligence (AI) researchers in the early 1980s, when they were trying to adapt probability theory to expert systems” [20]. Dempster-Shafer theory differs from the Bayesian approach in that the belief in a fact and the opposite of that fact does not need to sum to 1; that is, the method accounts for the possibility of “I don’t know.” This is a useful property for multisource fusion, especially in the intelligence domain.

Multisensor fusion approaches use Dempster-Shafer theory to discriminate objects by treating observations from multiple sensors as belief functions based on the object and properties of the sensor. Instead of combining conditional probabilities for object identification as shown in Figure 14.8, the process for fusion proposed by Waltz and Llinas is modified for the Dempster-Shafer approach in Figure 14.9. Mass functions replace conditional probabilities, and Dempster’s combination rule accounts for the additional uncertainty when the sensor cannot resolve the target. The property of normalization by the null hypothesis is also important because it removes the incongruity associated with sensors that disagree.

Although this formulation adds more complexity, it is still easily programmed into a multisensor fusion system. The Dempster-Shafer technique can also be easily applied to quantify beliefs and uncertainty for multi-INT analysis including the beliefs of members of an integrated analytic working group.

In plain English, (14.10) says, “The joint belief in hypothesis H given evidence E1 and E2 is the sum of 1) the belief in H given confirmatory evidence from both sensors, 2) the belief in H given confirmatory evidence from sensor 1 [GEOINT] but with uncertainty about the result from sensor 2, and 3) the belief in H given confirmatory evidence from sensor 2 [SIGINT] but with uncertainty about the result from sensor 1.”

The final answer is normalized to remove dissonant values by dividing each belief by (1-d). The final beliefs are the following:

• Zazikistan is under a coup = 87.9%;
• Zazikistan is not under a coup = 7.8%;
• Unsure = 4.2%;
• Total = 100%.

Repeating the steps above, substituting E1*E2 for the first belief and E3 as the second belief, Dempster’s rule can again be used to combine the beliefs for the three sensors:

• Zazikistan is under a coup = 95.3%;
• Zazikistan is not under a coup = 4.2%;
• Unsure = 0.5%;
• Total = 100%.

In this case, because the HUMINT source contributes only 0.2 toward the belief in H, the probability that Zazikistan is under a coup actually decreases slightly. Also, because this source has a reasonably low value of u, the uncertainty was further reduced.

While the belief in the coup hypothesis is 92.7%, the plausibility is slightly higher because the analyst must consider the belief in hypothesis H as well as the uncertainty in the outcome. The plausibility of a coup is 93%. Similarly, the plausibility in Hc also requires addition of the uncertainty: 7.3%. These values sum to greater than 100% because the uncertainty between H and Hc makes either outcome equally likely in the rare case that all four sensors produce faulty evidence.

14.3.3 Belief Networks

A belief network2 is “a probabilistic graphical model (a type of statistical model) that represents a set of random variables and their conditional dependencies via a directed acyclic graph (DAG)” [22]. This technique allows chaining of conditional probabilities—calculated either using Bayesian theory or Dempster-Shafer theory—for a sequence of possible events. In one application, Paul Sticha and Dennis Buede of HumRRO and Richard Rees of the CIA developed APOLLO, a computational tool for reasoning through a decision-making process by evaluating probabilities in Bayesian networks. Bayes’s rule is used to multiply conditional probabilities across each edge of the graph to develop an overall probability for certain outcomes with the uncertainty for each explicitly quantified.
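The chaining of conditional probabilities across graph edges can be shown with a two-edge chain. The sketch below uses a hypothetical A → B → C network with made-up probability tables; it is not APOLLO or any system from the chapter:

```python
# Minimal belief-network sketch: a chain A -> B -> C with conditional
# probability tables, marginalized by summing over the parent's states.

p_a = 0.30                                # P(A): e.g., facility is active
p_b_given = {True: 0.90, False: 0.20}     # P(B|A): trucks observed
p_c_given = {True: 0.80, False: 0.10}     # P(C|B): shipment imminent

# Marginalize along each edge of the DAG:
p_b = p_b_given[True] * p_a + p_b_given[False] * (1 - p_a)
p_c = p_c_given[True] * p_b + p_c_given[False] * (1 - p_b)

print(round(p_b, 3))   # 0.41
print(round(p_c, 3))
```

Each intermediate probability carries its uncertainty forward explicitly, which is what makes the graph form useful for auditing how an overall outcome probability was built up.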

14.4 Multi-INT Fusion For ABI

Correlation and fusion are central to the analytic tradecraft of ABI. One application of multi-INT correlation is to use the strengths of one data source to compensate for the weaknesses of another. SIGINT, for example, is exceptionally accurate in verifying identity through proxies because many signals broadcast unique identifiers, like the Maritime Mobile Service Identity (MMSI) in the ship-based navigation system AIS. Signals may also include temporal information, and SIGINT is accurate in the temporal domain because radio waves propagate at the speed of light—if sensors are equipped with precise timing capabilities, the exact time of the signal emanation is easily calculated. Unfortunately, because direction-finding and triangulation are usually required to locate the point of origin, SIGINT has measurable but significant errors in position (depending on the properties of the collection system). GEOINT, on the other hand, is exceptionally accurate in both space and time. A GEOINT collection platform knows when and where it was when it passively collected photons coming off a target, and this error can be easily propagated to the ground using a sensor model.

The ability to correlate results of wide area collection with precise, entity resolving, narrow field-of-regard collection systems is an important use for ABI fusion and an area of ongoing research.

Hard/soft fusion is a promising area of research that enables validated correlation of information from structured remote sensing assets with human-focused data sources including the tacit knowledge of intelligence analysts. Gross et al. developed a framework for fusing hard and soft data under a university research initiative that included ground-based sensors, tips to law enforcement, and local news reports [28]. The University at Buffalo (UB) Center for Multisource Information Fusion (CMIF) is the leader of a multi-university research initiative (MURI) developing “a generalized framework, mathematical techniques, and test and evaluation methods to address the ingestion and harmonized fusion of hard and soft information in a distributed (networked) Level 1 and Level 2 data fusion environment.”

14.5 Examples of Multi-INT Fusion Programs

In addition to numerous university programs developing fusion techniques and frameworks, automated fusion of multiple sources is an area of ongoing research and technology development, especially at DARPA, federally funded research and development corporations (FFRDCs), and the national labs.

14.5.1 Example: A Multi-INT Fusion Architecture

Simultaneously, existing knowledge from other sources (in the form of node and link data) and tracking of related entities is combined through association analysis to produce network information. The network provides context to the fused multi-INT entity/object tracks to enhance entity resolution. Although entity resolution can be performed at level 2, this example highlights the role of human-computer interaction (level 5 fusion) in integration-before-exploitation to resolve entities. Finally, feedback from the fused entity/object tracks is used to retask GEOINT resources for collection and tracking in areas of interest.

14.5.2 Example: The DARPA Insight Program

In 2010, DARPA instituted the Insight program to address a key shortfall for ISR systems: “the lack of a capability for automatic exploitation and cross-cueing of multi-intelligence (multi-INT) sources.”

Methods like Bayesian fusion and Dempster-Shafer theory are used to combine new information inputs from steps 3, 4, 7, and 8. Steps 2 and 6 involve feedback to the collection system based on correlation and analysis to obtain additional sensor-derived information to update object states and uncertainties.

The ambitious program seeks to “automatically exploit and cross-cue multi-INT sources” to improve decision timelines and automatically collect the next most important piece of information to improve object tracks, reduce uncertainty, or anticipate likely courses of action based on models of the threat and network.

14.6 Summary

Analysts practice correlation and fusion in their workflows—the art of multi-INT. However, there are numerous mathematical techniques for combining information with quantifiable precision. Uncertainty can be propagated through multiple calculations, giving analysts a hard, mathematically rigorous measurement of multisource data. The art and science of correlation do not play well together, and the art often wins over the science. Most analysts prefer to correlate information they “feel” is related. Efforts to integrate structured mathematical techniques with the human-centric process of developing judgments are still needed. Hybrid techniques that quantify results with science but leave room for interpretation may advance the tradecraft but are not widely used in ABI today.

15
Knowledge Management

Knowledge is value-added information that is integrated, synthesized, and contextualized to make comparisons, assess outcomes, establish relationships, and engage decision-makers in discussion. Although some texts make a distinction between data, information, knowledge, wisdom, and intelligence, we define knowledge as the totality of understanding gained through repeated analysis and synthesis of multiple sources of information over time. Knowledge is the essence of an intelligence professional and is how he or she answers questions about key intelligence issues. This chapter introduces elements of the wide-ranging discipline of knowledge management in the context of ABI tradecraft and analytic methods. Concepts for capturing tacit knowledge, linking data using dynamic graphs, and sharing intelligence across a complex, interconnected enterprise are discussed.

15.1 The Need for Knowledge Management

Knowledge management is a term that first appeared in the early 1990s, recognizing that the intellectual capital of an organization provided competitive advantage and must be managed and protected. Knowledge management is a comprehensive strategy to get the right information to the right people at the right time so they can do something about it. So-called intelligence failures seldom stem from the inability to collect information, but rather from the inability to integrate intelligence with sufficient confidence to make decisions that matter.

Gartner’s Duhon defines knowledge management (KM) as:

a discipline that promotes an integrated approach to identifying, capturing, evaluating, retrieving, and sharing all of an enterprise’s information assets. These assets may include databases, documents, policies, procedures, and previously un-captured expertise and experience in individual workers [4].

This definition frames the discussion in this chapter. The ABI approach treats data and knowledge as an asset—and the principle of data neutrality says that all these assets should be considered as equally “important” in the analysis and discovery process. Some knowledge management approaches are concerned with knowledge capture, that is, the institutional retention of intellectual capital possessed by retiring employees. Others are concerned with knowledge transfer, the direct conveyance of such a body of knowledge from older to younger workers through observation, mentoring, comingling, or formal apprenticeships. Much of the documentation in the knowledge management field focuses on methods for interviewing subject matter experts or eliciting knowledge through interviews. While these are important issues in the intelligence community, “increasingly, the spawning of knowledge involves a partnership between human cognition and machine-based intelligence.”

15.1.1 Types of Knowledge

Knowledge is usually categorized into two types, explicit and tacit knowledge. Explicit knowledge is that which is formalized and codified. This is sometimes called “know what” and is much easier to document, store, retrieve, and manage. Knowledge management systems that focus only on the storage and retrieval of explicit knowledge are more accurately termed information management systems, as most explicit knowledge is stored in databases, memos, documents, reports, notes, and digital data files.

Tacit knowledge is intuitive knowledge based on experience, sometimes called “know-how.” Tacit knowledge is difficult to document, quantify, and communicate to another person. This type of knowledge is usually the most valuable in any organization. Lehaney notes that the only sustainable competitive advantage and “the locus of success in the new economy is not in the technology, but in the human mind and organizational memory” [6, p. 14]. Tacit knowledge is intensely contextual and personal. Most people are not aware of the tacit knowledge they inherently possess and have a difficult time quantifying what they “know” outside of explicit facts.

In the intelligence profession, explicit knowledge is easily documented in databases, but of greater concern is the ability to locate, integrate, and disseminate information held tacitly by experienced analysts. ABI requires analytic mastery of explicit knowledge about an adversary (and his or her activities and transactions) but also requires tacit knowledge of an analyst to understand why the activities and transactions are occurring.

While many of the methods in this textbook refer to the physical manipulation of explicit data, it is important to remember the need to focus on the “art” of multi-INT spatiotemporal analytics. Analysts exposed to repeated patterns of human activity develop a natural intuition to recognize anomalies and understand where to focus analytic effort. Sometimes, this knowledge is problem-, region-, or group-specific. More often than not, individual analysts have a particular knack or flair for understanding activities and transactions of certain types of entities. Often, tacit knowledge provides the turning point in a particularly difficult investigation or unravels the final clue in a particularly difficult intelligence mystery, but according to Meyer and Hutchinson, individuals tend to place more weight on concrete and vivid information over that which is intangible and ambiguous [7, p. 46]. Effectively translating ambiguous tacit knowledge like feelings and intuition into explicit information is critical in creating intelligence assessments. This is the primary paradox of tacit knowledge; it often has the greatest value but is the most difficult to elicit and apply.

Amassing facts rarely leads to innovative and creative breakthroughs. The most successful organizations are those that can leverage both types of knowledge for dissemination, reproduction, modification, access, and application throughout the organization.

15.2 Discovery of What We Know

As the amount of information available continues to grow, knowledge workers spend an increasing amount of their day messaging, tagging, creating documents, searching for information, and performing queries and other information-focused activities [9, p. 114]. New concepts are needed to enhance discovery and reduce the entropy associated with knowledge management.

15.2.1 Recommendation Engines

Content-based filtering identifies items based on an analysis of the item’s content as specified in metadata or description fields. Parsing algorithms extract common keywords to build a profile for each item (in our case, for each data element or knowledge object). Content-based filtering systems generally do not evaluate the quality, popularity, or utility of an item.

In collaborative filtering, “items are recommended based on the interests of a community of users without any analysis of item content” [10]. Collaborative filtering ties interest in items to particular users that have rated those items. This technique is used to identify similar users: the set of users with similar interests. In the intelligence case, these would be analysts with an interest in similar data.

A key to Amazon’s technology is the ability to calculate the related item table offline, storing this mapping structure, and then efficiently using this table in real time for each user based on current browsing history. This process is described in Figure 15.1. Items the customer has previously purchased, favorably reviewed, or items currently in the shopping cart are treated with greater affinity than items browsed and discarded. The “gift” flag is used to identify anomalous items purchased for another person with different interests so these purchases do not skew the personalized recommendation scheme.
In ABI knowledge management, knowledge about entities, locations, and objects is available through object metadata. Content-based filtering identifies similar items based on location, proximity, speed, or activities in space and time. Collaborative filtering can be used to discover analysts working on similar problems based on their queries, downloads, and exchanges of related content. This is an internal application of the “who-where” tradecraft, adding additional metadata into “what” the analysts are discovering and “why” they might need it.
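Both filtering styles can be sketched compactly. The sketch below uses hypothetical reports, keywords, and analyst histories invented for illustration: content-based filtering ranks items by metadata-keyword overlap, and collaborative filtering finds analysts with overlapping access histories:

```python
# Two filtering styles for ABI knowledge objects (hypothetical data).

items = {
    "report_1": {"port", "vessel", "AIS"},
    "report_2": {"vessel", "AIS", "track"},
    "report_3": {"airfield", "runway"},
}

def content_similar(item, k=1):
    """Content-based: rank other items by keyword overlap (Jaccard)."""
    base = items[item]
    scores = {o: len(base & kw) / len(base | kw)
              for o, kw in items.items() if o != item}
    return sorted(scores, key=scores.get, reverse=True)[:k]

histories = {"analyst_a": {"report_1", "report_2"},
             "analyst_b": {"report_1", "report_3"},
             "analyst_c": {"report_3"}}

def similar_users(user):
    """Collaborative: users whose histories share items with this user."""
    mine = histories[user]
    return [u for u, h in histories.items() if u != user and mine & h]

print(content_similar("report_1"))   # ['report_2']
print(similar_users("analyst_a"))    # ['analyst_b']
```

A production system would precompute the item-to-item table offline, as in the Amazon approach described above, and apply it per user at query time.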

15.2.2 Data Finds Data

An extension of the recommendation engine concept to next-generation knowledge management is an emergent concept introduced by Jeff Jonas and Lisa Sokol called “data finds data.” In contrast to traditional query-based information systems, Jonas and Sokol posit that if the system knew what the data meant, it would change the nature of data discovery by allowing systems to find related data and therefore interested users. They explain:

With interest in a soon-to-be-released book, a user searches Amazon.com for the title, but to no avail. The user decides to check every month until the book is released. Unfortunately for the user, the next time he checks, he finds that the book is not only sold out but now on back order, awaiting a second printing. When the data finds the data, the moment this book is available, this data point will discover the user’s original query and automatically email the user about the book’s availability.

Jonas, now chief scientist at IBM’s entity analytics group, joined the firm after Big Blue acquired his company, SRD, in 2005. SRD developed data accumulation and alerting systems for Las Vegas casinos, including non-obvious relationship analysis (NORA), famous for breaking the notorious MIT card-counting ring chronicled in the best-selling book Bringing Down the House [12]. He postulates that knowledge-rich but discovery-poor organizations derive increasing wealth from connecting information across previously disconnected information silos using real-time “perpetual analytics” [13]. Instead of processing data using large bulk algorithms, each piece of data is examined and correlated on ingest for its relationship to all other accumulated content and knowledge in the system. Such a context-aware data ingest system is a computational embodiment of integrate before exploit, as each new piece of data is contextualized, sequence-neutrally of course, with the existing knowledge corpus across silos. Jonas says, “If a system does not assemble and persist context as it comes to know it… the computational costs to reconstruct context after the facts are too high.”

Jonas elaborated on the implications of these concepts in a 2011 interview: “There aren’t enough human beings on Earth to think of every smart question every day… every piece of data is the question. When a piece of data arrives, you want to take that piece of data and see how it relates to other pieces of data. It’s like a puzzle piece finding a puzzle” [15, 16]. Treating every piece of data as the question means treating data as queries and queries as data.
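The “data finds data” pattern can be sketched as standing queries that persist as data, with every newly ingested record matched against them on arrival. The book-availability scenario below is hypothetical, paraphrasing the Amazon example quoted above:

```python
# "Data finds data" sketch: queries persist, and each ingested record
# is checked against them on arrival instead of waiting to be re-asked.

standing_queries = []        # (user, predicate) pairs waiting for data
notifications = []

def register_query(user, predicate):
    standing_queries.append((user, predicate))

def ingest(record):
    """Contextualize each record on arrival: let it find its queries."""
    for user, predicate in standing_queries:
        if predicate(record):
            notifications.append((user, record["title"]))

register_query("reader_1",
               lambda r: r["title"] == "Fusion Methods" and r["in_stock"])

ingest({"title": "Fusion Methods", "in_stock": False})   # no match yet
ingest({"title": "Fusion Methods", "in_stock": True})    # data finds user
print(notifications)   # [('reader_1', 'Fusion Methods')]
```

The inversion is the point: the query is stored like data, and the arriving record does the searching, which is what makes per-record “perpetual analytics” possible without re-running bulk queries.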

15.2.3 Queries as Data

Information requests and queries are themselves a powerful source of data that can be used to optimize knowledge management systems or assist the user in discovering content.

15.3 The Semantic Web

The semantic web is a proposed evolution of the World Wide Web from a document-based structure designed to be read by humans to a network of hyperlinked, machine-readable web pages that contain metadata about the content and how multiple pages are related to each other. The semantic web is about relationships.

Although the original concept was proposed in the 1960s, the term “semantic web” and its application to an evolution of the Internet was popularized by Tim Berners-Lee in a 2001 article in Scientific American. He posits that the semantic web “will usher in significant new functionality as machines become much better able to process and ‘understand’ the data that they merely display at present” [18].
The semantic web is based on several underlying technologies, but the two basic and powerful ones are the extensible markup language (XML) and the resource description framework (RDF).

15.3.1 XML

XML is a World Wide Web Consortium (W3C) standard for encoding documents that is both human-readable and machine-readable [19].

XML can also be used to create relational structures. Consider the example shown below where there are two data sets consisting of locations (locationDetails) and entities (entityDetails) (adapted from [20]):

<locationDetails>

<location ID="L1">
<cityName>Annandale</cityName>
<stateName>Virginia</stateName>
</location>
<location ID="L2">
<cityName>Los Angeles</cityName>
<stateName>California</stateName>
</location>
</locationDetails>
<entityDetails>
<entity locationRef="L1">
<entityName>Patrick Biltgen</entityName>
</entity>
<entity locationRef="L2">
<entityName>Stephen Ryan</entityName>
</entity>
</entityDetails>

Instead of including the location of each entity as an attribute within entityDetails, the structure above links each entity to a location using the attribute locationRef. This is similar to how a foreign key works in a relational database. One advantage to using this structure is that the two entities can be linked to multiple locations, especially when their location is a function of time and activity.
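The linkage can be resolved mechanically with any XML parser. The sketch below uses Python’s standard library and inlines the example (wrapped in a single root element, which XML requires) so it is self-contained:

```python
# Resolve each entity's locationRef, foreign-key style, with the
# standard-library XML parser.

import xml.etree.ElementTree as ET

doc = """<data>
  <locationDetails>
    <location ID="L1"><cityName>Annandale</cityName>
      <stateName>Virginia</stateName></location>
    <location ID="L2"><cityName>Los Angeles</cityName>
      <stateName>California</stateName></location>
  </locationDetails>
  <entityDetails>
    <entity locationRef="L1"><entityName>Patrick Biltgen</entityName></entity>
    <entity locationRef="L2"><entityName>Stephen Ryan</entityName></entity>
  </entityDetails>
</data>"""

root = ET.fromstring(doc)
# Build a lookup table from location IDs to city names...
locations = {loc.get("ID"): loc.findtext("cityName")
             for loc in root.iter("location")}
# ...then join each entity to its location through locationRef.
for entity in root.iter("entity"):
    name = entity.findtext("entityName")
    print(name, "->", locations[entity.get("locationRef")])
```

Because the join happens at read time, the same entity could carry several time-stamped locationRef attributes without restructuring the location records.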

XML is a flexible, adaptable resource for creating documents that are context-aware and can be machine parsed using discovery and analytic algorithms.

15.4 Graphs for Knowledge and Discovery

Graphs are a mathematical construct consisting of nodes (vertices) and edges that connect them.

Problems can be represented by graphs and analyzed using the discipline of graph theory. In information systems, graphs represent communications, information flows, library holdings, data models, or the relationships in a semantic web. In intelligence, graph models are used to represent processes, information flows, transactions, communications networks, order-of-battle, terrorist organizations, financial transactions, geospatial networks, and the pattern of movement of entities. Because of their widespread applicability and mathematical simplicity, graphs provide a powerful construct for ABI analytics.
Graphs are drawn with dots or circles representing each node and an arc or line between nodes to represent edges as shown in Figure 15.3. Directional graphs use arrows to depict the flow of information from one node to another. When graphs are used to represent semantic triplestores, the direction of the arrow indicates the direction of the relationship or how to read the simple sentence. Figure 5.8 introduced a three-column framework for documenting facts, assessments, and gaps. This information is depicted as a knowledge graph in Figure 15.3. Black nodes highlight known information, and gray nodes depict knowledge gaps. Arrow shading differentiates facts from assessments and gaps. Shaded lines show information with a temporal dependence like the fact that Jim used to live somewhere else (a knowledge gap because we don’t know where). Implicit relationships can also be documented using the knowledge graph: Figure 5.8 contains the fact “the coffee shop is two blocks away from Jim’s office.”
The knowledge graph readily depicts knowns and unknowns. Because this construct can also be depicted using XML tags or RDF triples, it also serves as a machine-readable construct that can be passed to algorithms for processing.

Graphs are useful as a construct for information discovery when the user doesn’t necessarily know the starting point for the query. By starting at any node (a tip-off), an analyst can traverse the graph to find related information. This workflow is called “know-something-find-something.” A number of heuristics for graph-based search assist in the navigation and exploration of large, multidimensional graphs that are difficult to visualize and navigate manually.

Deductive reasoning techniques integrate with graph analytics through manual and algorithmic filtering to quickly answer questions and convey knowledge from the graph to the human analyst. Analysts filter by relationship type, time, or complex queries about the intersection between edges and nodes to rapidly identify known information and highlight knowledge gaps.
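A knowledge graph of this kind can be represented as typed triples and filtered by edge type. The sketch below paraphrases the chapter’s Jim scenario with invented triples, tagging each as a fact, assessment, or gap to mirror the three-column framework:

```python
# Knowledge-graph sketch: triples tagged as fact, assessment, or gap,
# with a filter to traverse edges and surface knowledge gaps.

triples = [
    ("Jim", "works_at", "office", "fact"),
    ("Jim", "visits", "coffee_shop", "assessment"),
    ("Jim", "used_to_live_at", "UNKNOWN", "gap"),
    ("coffee_shop", "two_blocks_from", "office", "fact"),
]

def edges_from(node, kind=None):
    """Outgoing edges from a node, optionally filtered by edge type."""
    return [(p, o) for s, p, o, k in triples
            if s == node and (kind is None or k == kind)]

# Know-something-find-something: start at "Jim" and filter for gaps.
print(edges_from("Jim", kind="gap"))   # [('used_to_live_at', 'UNKNOWN')]
```

Because the same structure is expressible as RDF triples or XML tags, the filter that highlights gaps for a human analyst can equally feed an algorithmic collection-planning process.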

15.4.1 Graphs and Linked Data

Chapter 10 introduced graph databases as a NoSQL construct for storing data that requires a flexible, adaptable schema. Graphs—and graph databases—are a useful construct for indexing intelligence data that is often held across multiple databases without requiring complex table joins and tightly coupled databases.

Using linked data, an analyst working issue “C” can quickly discover the map and report directly connected to C, as well as the additional reports linked to related objects. C can also be packaged as a “super object” that contains an instance of all linked data within some number of degrees of separation—calculated by the number of graph edges—away from the starting object. The super object is essentially a stack of relationships to the universal resource identifiers (URIs) for each of the related objects, documented using RDF triples or XML tags.
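Building such a super object amounts to a breadth-first search bounded by degrees of separation. The object names below are hypothetical stand-ins for URIs:

```python
# "Super object" sketch: collect everything within N graph edges of a
# starting object via breadth-first search over relationship links.

from collections import deque

edges = [("C", "map_1"), ("C", "report_1"), ("map_1", "report_2"),
         ("report_2", "report_3"), ("X", "report_4")]

def super_object(start, degrees):
    """Return all objects within `degrees` edges of `start`."""
    adjacency = {}
    for a, b in edges:                       # treat links as undirected
        adjacency.setdefault(a, set()).add(b)
        adjacency.setdefault(b, set()).add(a)
    found, queue = {start}, deque([(start, 0)])
    while queue:
        node, depth = queue.popleft()
        if depth == degrees:
            continue
        for nxt in adjacency.get(node, ()):
            if nxt not in found:
                found.add(nxt)
                queue.append((nxt, depth + 1))
    return found - {start}

print(sorted(super_object("C", 2)))   # ['map_1', 'report_1', 'report_2']
```

Note that report_3 (three edges away) and the disconnected report_4 are excluded, which is the bounding behavior that keeps a super object from swallowing the whole graph.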

15.4.2 Provenance

Provenance is the chronology of the creation, ownership, custody, change, location, and status of a data object. The term was originally used in relation to works of art to “provide contextual and circumstantial evidence for its original production or discovery, by establishing, as far as practicable, its later history, especially the sequences of its formal ownership, custody, and places of storage” [22]. In law, the concept of provenance refers to the “chain of custody” or the paper trail of evidence. This concept logically extends to the documentation of the history of change of data in a knowledge system.
The W3C implemented a standard for provenance in 2013, documenting it as “information about entities, activities, and people involved in producing a piece of data or thing, which can be used to form assessments about its quality, reliability or trustworthiness” [23]. The PROV-O standard is a web ontology language 2.0 (OWL2) ontology that maps the PROV logical data model to RDF [24]. The ontology describes hundreds of classes, properties, and attributes.

Maintaining provenance across a knowledge graph is critical to assembling evidence against hypotheses. Each analytic conclusion must be traced back to each piece of data that contributed to the conclusion. Although ABI methods can be enhanced with automated analytic tools, analysts at the end of the decision chain need to understand how data was correlated, combined, and manipulated through the analysis and synthesis process. Ongoing efforts across the community seek to document a common standard for data interchange and provenance tracking.

15.4.3 Using Graphs for Multianalyst Collaboration

In the legacy, linear TCPED model, when two agencies wrote conflicting reports about the same object, both reports promulgated to the desk of the all-source analyst. He or she adjudicated the discrepancies based on past experience with both sources. Unfortunately, the incorrect report often persisted—to be discovered in the future by someone else—unless it was recalled. Using the graph construct to organize around objects makes it easier to discover discrepancies that can be more quickly and reliably resolved. When analyzing data spatially, these discrepancies are instantly visible to the all-source analyst because the same object simultaneously appears in two places or states. Everything happens somewhere, and everything happens in exactly one place.
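The “exactly one place” tenet makes discrepancy detection mechanical: if two sources place the same object in different locations at the same time, the conflict surfaces immediately. A minimal sketch, with invented report fields and agency names:

```python
# Hypothetical object-indexed reports. If two agencies place the same object
# in different locations at the same time, the graph view exposes the
# conflict at once: everything happens in exactly one place.
reports = [
    {"object": "vessel-9", "time": "2015-03-01T12:00Z",
     "location": (36.8, -76.3), "source": "AGENCY_A"},
    {"object": "vessel-9", "time": "2015-03-01T12:00Z",
     "location": (25.8, -80.2), "source": "AGENCY_B"},
]

def find_discrepancies(reports):
    """Flag pairs of reports that disagree on where an object was."""
    seen, conflicts = {}, []
    for r in reports:
        key = (r["object"], r["time"])
        if key in seen and seen[key]["location"] != r["location"]:
            conflicts.append((seen[key], r))
        seen.setdefault(key, r)
    return conflicts
```

In the legacy workflow both reports would sit on the all-source analyst's desk until adjudicated; indexed by object, the contradiction is computable the moment the second report arrives.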

15.5 Information and Knowledge Sharing

Intelligence Community Directive Number 501, Discovery and Dissemination or Retrieval of Information Within the Intelligence Community, was signed by the DNI on January 21, 2009. Designed to “foster an enduring culture of responsible sharing within an integrated IC,” the document introduced the term “responsibility to provide” and created a complex relationship with the traditional mantra of “need to know”.

It directed that all authorized information be made available and discoverable “by automated means” and encouraged data tagging of mission-specific information. ICD 501 also defined “stewards” for collection and analytic production as:

An appropriately cleared employee of an IC element, who is a senior official, designated by the head of that IC element to represent the [collection/analytic] activity that the IC element is authorized by law or executive order to conduct, and to make determinations regarding the dissemination to or the retrieval by authorized IC personnel of [information collected/analysis produced] by that activity [25].

With a focus on improving discovery and dissemination of information, rather than protecting or hoarding information from authorized users, data stewards gradually replace data owners in this new construct. The data doesn’t belong to a person or agency. It belongs to the intelligence community. When applied, this change in perspective dramatically alters how information is valued, treated, and shared.

Prusak notes that knowledge “clumps” in groups, and that connecting individuals into groups and networks wins out over attempts to capture knowledge directly.

Organizations that promote the development of social networks and the free exchange of information witness the establishment of self-organizing knowledge groups. Bahra says that there are three main conditions to assist in knowledge sharing [29, p. 56]:

• Reciprocity: One helps a colleague, thinking that he or she will receive valuable knowledge in return (even in the future).
• Reputation: Reputation, or respect for one’s work and expertise, is power, especially in the intelligence community.
• Altruism: Self-gratification and a passion or interest about a topic.

These three factors contribute to the simplest yet most powerful transformative sharing concepts in the intelligence community.

15.6 Wikis, Blogs, Chat, and Sharing

The historically compartmented nature of the intelligence community and its “need to know” policy is often cited as an impediment to information sharing.

Andrus’s essay, “The Wiki and the Blog: Toward a Complex Adaptive Intelligence Community,” postulated that “the intelligence community must be able to dynamically reinvent itself by continuously learning and adapting as the national security environment changes.” It won the intelligence community’s Galileo Award and was partially responsible for the start-up of Intellipedia, a classified wiki based on the platform and structure of Wikipedia [31]. Shortly after its launch, the tool was used to write a high-level intelligence assessment on Nigeria. Thomas Fingar, the former deputy director of national intelligence for analysis (DDNI/A), cited Intellipedia’s success in rapidly characterizing Iraqi insurgents’ use of chlorine in improvised explosive devices, highlighting the lack of bureaucracy inherent in the self-organized model.

While Intellipedia is the primary source for collaborative, semiformalized information sharing on standing and emergent intelligence topics, most analysts collaborate informally using a combination of chat rooms, Microsoft SharePoint sites, and person-to-person chat messages.

Because the ABI tradecraft reduces the focus on producing static intelligence products to fill a queue, ABI analysts tend to collaborate and share around in-work intelligence products. These include knowledge graphs on adversary patterns of life, shapefile databases, and other in-work depictions that are often not suitable as finished intelligence products. In fact, the notion of “ABI products” is a source of continued consternation as standards bodies attempt to define what is new and different about ABI products, as well as how to depict the dynamics of human patterns of life on what is often a static PowerPoint chart.
Managers like reports and charts as a metric of analytic output because the total number of reports is easy to measure; however, management begins to question the utility of “snapping a chalk line” on an unfinished pattern-of-life analysis just to document a “product.” Increasingly, interactive products that use dynamic maps and charts are used for spatial storytelling. Despite all the resources allocated to glitzy multimedia products and animated movies, these products are rarely used because they are time-consuming, expensive, and usually late to need.

15.8 Summary

Knowledge management is a crucial enabler for ABI because tacit and explicit knowledge about activities, patterns, and entities must be discovered and correlated across multiple disparate holdings to enable the principle of data neutrality. Increasingly, new technologies like graph data stores, recommendation engines, provenance tracing, wikis, and blogs contribute to the advancement of ABI because they enhance knowledge discovery and understanding. Chapter 16 describes approaches that leverage these types of knowledge to formulate models to test alternative hypotheses and explore what might happen.

16

Anticipatory Intelligence

After reading chapters on persistent surveillance, big data processing, automated extraction of activities, analysis, and knowledge management, you might be thinking that if we could just automate the steps of the workflow, intelligence problems would solve themselves. Nothing could be further from the truth. In some circles, ABI has been conflated with predictive analytics and automated sensor cross-cueing, but the real power of the ABI method is in producing anticipatory intelligence. Anticipation is about considering alternative futures and what might happen…not what will happen. This chapter describes technologies and methods for capturing knowledge to facilitate exploratory modeling, “what-if” trades, and evaluation of alternative hypotheses.

16.1 Introduction to Anticipatory Intelligence

Anticipatory intelligence is a systemic way of thinking about the future that focuses our long-range foveal and peripheral vision on emerging conditions, trends, and threats to national security. Anticipation is not about prediction or clairvoyance. It is about considering a space of potential alternatives and informing decision-makers on their likelihood and consequence. Modeling and simulation approaches aggregate knowledge on topics, indicators, trends, drivers, and outcomes into a theoretically sound, analytically valid framework for exploring alternatives and driving decision advantage. This chapter provides a survey of the many approaches for translating data and knowledge into models, as well as various approaches for executing those models in a data-driven environment to produce traceable, valid, supportable assessments based on analytic relationships, validated models, and real-world data.

16.1.1 Prediction, Forecasting, and Anticipation

In a quote usually attributed to physicist Niels Bohr or baseball player Yogi Berra, “Prediction is hard, especially about the future.” The terms “prediction,” “forecasting,” and “anticipation” are often used interchangeably but represent significantly different perspectives, especially when applied to the domain of intelligence analysis.
A prediction is a statement of what will or is likely to happen in the future. Usually, predictions are given as a statement of fact: “in the future, we will all have flying cars.” This statement lacks any estimate of likelihood, timing, confidence, or other factors that would justify the prediction.

Forecasts, though usually used synonymously with predictions, are often accompanied by quantification and justification. Meteorologists generate forecasts: “There is an 80% chance of rain in your area tomorrow.”

Forecasts of the distant future are usually inaccurate because underlying models fail to account for disruptive and nonlinear effects.

While predictions are generated by pundits and crystal ball-waving fortune tellers, forecasts are generated analytically based on models, assumptions, observations, and other data.

Anticipation is the act of expecting or foreseeing something, usually with presentiment or foreknowledge. While predictions postulate the outcome with stated or implied certainty and forecasts provide a mathematical estimate of a given outcome, anticipation refers to the broad ability to consider alternative outcomes. Anticipatory analysis combines forecasts, institutional knowledge (see Chapter 15), and other modeling approaches to generate a series of “what if” scenarios. The important distinction between prediction/forecasting and anticipation is that anticipation identifies what may happen. Anticipatory analysis sometimes allows analysis and quantification of possible causes. Sections 16.2–16.6 in this chapter will describe modeling approaches and their advantages and disadvantages for anticipatory intelligence analysts.

16.2 Modeling for Anticipatory Intelligence

Anticipatory intelligence is based on models. Models, sometimes called “analytic models,” provide a simplified explanation of how some aspect of the real world works to yield insight. Models may be tacit or explicit. Tacit models are based on knowledge and experience.
They exist in the analyst’s head and are executed routinely during decision processes whether the analyst is aware of it or not. Explicit models are documented using a modeling language, diagram, description, or other relationship.

16.2.1 Models and Modeling

The most basic approach is to construct a model based on relevant context and use the model to understand or visualize a result. Another approach, comparative modeling, uses multiple models with the same contextual input data to provide a common output. This approach is useful for exploring alternative hypotheses or examining multiple perspectives to anticipate what may happen and why. A third approach called model aggregation combines multiple models to allow for complex interactions. The third approach has been applied to human socio-cultural behavior (HSCB) modeling and human domain analytics on multiple programs over the past 20 years with mixed results (see Section 16.6). Human activities and behaviors and their ensuing complexity, nonlinearity, and unpredictability, represent the most significant modeling challenge facing the community today.

16.2.2 Descriptive Versus Anticipatory/Predictive Models

Descriptive models present the salient features of data, relationships, or processes. They may be as simple as a diagram on a white board or as complex as a wiring diagram for a distributed computer network. Analysts often use descriptive models to identify the key attributes of a process (or a series of activities).

16.3 Machine Learning, Data Mining, and Statistical Models

Machine learning traces its origins to the 17th century, when German mathematician Leibniz began postulating formal mathematical relationships to represent human logic. In the 19th century, George Boole developed a series of relations for deductive processes (now called Boolean logic). By the mid-20th century, English mathematician Alan Turing and John McCarthy of MIT began experimenting with “intelligent machines,” and the term “artificial intelligence” was coined. Machine learning is a subfield of artificial intelligence concerned with the development of algorithms, models, and techniques that allow machines to “learn.”

Natural intelligence, often thought of as the ability to reason, is a representation of logic, rules, and models. Humans are adept pattern-matchers. Memory is a type of historical cognitive recall. Although the exact mechanisms for “learning” in the human brain are not completely understood, in many cases it is possible to develop algorithms that mimic human thought and reasoning processes. Many machine-learning techniques including rule-based learning, case-based learning, and unsupervised learning are based on our understanding of these cognitive processes.

16.3.1 Rule-Based Learning

In rule-based learning, a series of known logical rules are encoded directly as an algorithm. This technique is best suited for directly translating a descriptive model into executable code.

Rule-based learning is the most straightforward way to encode knowledge into an executable model, but it is also the most brittle: the model can only represent the phenomena for which rules have been encoded. This approach strongly reinforces traditional inductive analytic approaches and is highly susceptible to surprise.
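The directness, and the brittleness, of rule-based learning are both visible in a small sketch. Each rule is a named predicate encoded as code; the rule names and thresholds below are invented for illustration, and any activity the rules do not mention is simply invisible to the model.

```python
# A minimal rule-based model: known logical rules encoded directly as
# predicates. Rule names and thresholds are hypothetical.
RULES = [
    ("night_activity", lambda e: e["hour"] < 5 or e["hour"] > 22),
    ("convoy", lambda e: e["vehicle_count"] >= 3),
]

def apply_rules(event, rules=RULES):
    """Return the names of all rules the event activates."""
    return [name for name, predicate in rules if predicate(event)]
```

An event at 0200 with four vehicles fires both rules; a midday single-vehicle event fires none, even if it is the genuinely anomalous one. That silence on anything outside the rule set is exactly the susceptibility to surprise the text describes.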

16.3.2 Case-Based Learning

Another popular approach is called case-based learning. This technique presents positive and negative situations for which a model is learned. The learning process is called “training”; the model and the data used are referred to as the “training set.”

This learning approach is useful when the cases—and their corresponding observables, signatures, and proxies —can be identified a priori.

In the case of counterterrorism, many terrorists participate in normal activities and look like any other normal individual in that culture. The distinguishing characteristics that describe “a terrorist” are few, making it very difficult to train automatic detection and classification algorithms. Furthermore, when adversaries practice denial and deception, a common technique is to mimic the distinguishing characteristics of the negative examples so as to hide in the noise. This approach is also brittle because the model can only interpret cases for which it has positive and negative examples.
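A nearest-neighbor classifier is one of the simplest case-based learners: a new case takes the label of the closest stored example. The two-dimensional feature vectors and labels below are invented for illustration; real signatures are far higher-dimensional and noisier, which is why the few distinguishing characteristics the text mentions make training so hard.

```python
import math

# Case-based learning sketch: 1-nearest-neighbor over labeled training
# cases. Feature vectors and labels are hypothetical.
TRAINING_SET = [
    ((0.9, 0.1), "positive"),
    ((0.8, 0.2), "positive"),
    ((0.1, 0.9), "negative"),
    ((0.2, 0.8), "negative"),
]

def classify(case, training_set=TRAINING_SET):
    """Label a new case by its closest stored example (Euclidean distance)."""
    _, label = min(training_set, key=lambda t: math.dist(case, t[0]))
    return label
```

The brittleness noted above follows directly: a deceptive adversary who mimics the feature values of the negative examples lands nearest a negative case and is labeled accordingly.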

16.3.3 Unsupervised Learning

Another popular and widely employed approach is that of unsupervised learning where a model is generated based upon a data set with little or no “tuning” from a human operator. This technique is also sometimes called data mining because the algorithm literally identifies “nuggets of gold” in an otherwise unseemly heap of slag.
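A bare-bones k-means clusterer shows the unsupervised idea: the algorithm groups unlabeled points with no human tuning beyond the choice of k. This sketch initializes from the first k points for determinism; the data are invented for illustration.

```python
def kmeans(points, k, iterations=10):
    """Group unlabeled 2-D points into k clusters (Lloyd's algorithm)."""
    centroids = list(points[:k])  # deterministic init: first k points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        # Assign each point to its nearest centroid.
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2
                                + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        # Move each centroid to the mean of its assigned points.
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = (sum(m[0] for m in members) / len(members),
                                sum(m[1] for m in members) / len(members))
    return centroids, clusters

points = [(0.0, 0.0), (10.0, 10.0), (0.2, 0.1),
          (0.1, 0.3), (10.2, 9.9), (9.8, 10.1)]
centroids, clusters = kmeans(points, 2)
```

Given two well-separated blobs, the algorithm recovers them without ever being told what the groups mean — the “nuggets” surface from the data alone, though interpreting them remains the analyst's job.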

This approach is based on the premise that the computational elements themselves are very simple, like the neurons in the human brain. Complex behaviors arise from connections between the neurons that are modeled as an entangled web of relationships that represent signals and patterns.

While many of the formal cognitive processes for human sensemaking are not easily documented, sensemaking is the process by which humans constantly weigh evidence, match patterns, postulate outcomes, and infer between missing information. Although the term analysis is widely used to refer to the job of intelligence analysts, many sensemaking tasks are a form of synthesis: the process of integrating information together to enhance understanding.

In 2014, demonstrating “cognitive cooking” technology, a specially trained version of Watson created “Bengali Butternut BBQ Sauce,” a delicious combination of butternut squash, white wine, dates, Thai chilies, tamarind, and more.

Artificial intelligence, data mining, and statistically created models are generally good for describing known phenomena and forecasting outcomes (calculated responses) within a trained model space but are unsuitable for extrapolating outside the training set. Models must be used where appropriate, and while computational techniques for automated sensemaking have been proposed, many contemporary methods are limited to the processing, evaluation, and subsequent reaction to increasingly complex rule sets.

16.4 Rule Sets and Event-Driven Architectures

Event-driven architectures, an emerging software design paradigm, are “a methodology for designing and implementing applications and systems in which events transmit between loosely coupled software components and services”.

Events are defined as a change in state, which could represent a change in the state of an object, a data element, or an entire system. An event-driven architecture applies to distributed, loosely coupled systems that require asynchronous processing (the data arrives at different times and is needed at different times). Three types of event processing are typically considered:

• Simple event processing (SEP): The system responds to a change in condition, and a downstream action is initiated (e.g., when new data arrives in the database, process it to extract coordinates).

• Event stream processing (ESP): A stream of events is filtered to recognize notable events that match a filter and initiate an action (e.g., when this type of signature is detected, alert the commander).

• Complex event processing (CEP): Predefined rule sets recognize a combination of simple events, occurring in different ways and at different times, and cause a downstream action to occur.
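The three tiers can be contrasted in a short sketch. The event fields, the 120-unit time window, and the arrival/signature/departure pattern are all invented for illustration; the point is the escalation from single-event reaction, to stream filtering, to recognition of a composite pattern.

```python
# Hypothetical event stream: each event is a change of state.
events = [
    {"type": "arrival", "object": "truck-1", "place": "depot", "t": 100},
    {"type": "signature", "object": "truck-1", "place": "depot", "t": 130},
    {"type": "departure", "object": "truck-1", "place": "depot", "t": 160},
]

def simple(event):
    # SEP: every new event triggers a downstream action (here, extraction).
    return ("extract", event["object"])

def stream_filter(events, wanted="signature"):
    # ESP: filter the stream for notable events and alert on each match.
    return [("alert", e["object"]) for e in events if e["type"] == wanted]

def complex_rule(events, window=120):
    # CEP: recognize a composite pattern -- arrival, signature, and
    # departure by the same object at the same place within the window.
    arrivals = {(e["object"], e["place"]): e["t"]
                for e in events if e["type"] == "arrival"}
    departures = {(e["object"], e["place"]): e["t"]
                  for e in events if e["type"] == "departure"}
    actions = []
    for e in events:
        if e["type"] != "signature":
            continue
        key = (e["object"], e["place"])
        if key in arrivals and key in departures and \
                departures[key] - arrivals[key] <= window:
            actions.append(("cue_collection", e["object"]))
    return actions
```

The CEP rule fires only when all three simple events line up in space and time, which is precisely the kind of combination no single-event handler can see.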

16.4.1 Event Processing Engines

According to developer KEYW, JEMA is widely accepted across the intelligence community as “a visual analytic model authoring technology, which provides drag-and-drop authoring of multi-INT, multi-discipline analytics in an online collaborative space” [11]. Because JEMA automates data gathering, filtering, and processing, analysts shift the focus of their time from search to analysis.

Many companies use simple rule processing for anomaly detection, notably credit card companies, whose fraud detection combines simple event processing and event stream processing. Alerts are triggered on anomalous behaviors.

16.4.2 Simple Event Processing: Geofencing, Watchboxes, and Tripwires

Another type of “simple” event processing highly relevant to spatiotemporal analysis is a technique known as geofencing.

A dynamic area of interest is a watchbox that moves with an object. In the Ebola tracking example, the dynamic areas of interest are centered on each tagged ship with a user-defined radius. This allows the user to identify when two ships come within close proximity or when the ship passes near a geographic feature like a shoreline or port, providing warning of potential docking activities.
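The dynamic watchbox can be sketched as a circular area of interest carried by each tracked object, with a rule that activates when another object enters it. The ship names, positions, and radius below are invented for illustration, and plain Euclidean distance on coordinates stands in for the great-circle distance a real system would use.

```python
import math

# Dynamic watchbox sketch: each tracked ship carries a circular area of
# interest; a rule activates when two ships fall within the radius.
# Positions (degrees) and the radius are hypothetical.
def within_watchbox(pos_a, pos_b, radius):
    """True if pos_b lies inside the circular watchbox centered at pos_a."""
    return math.dist(pos_a, pos_b) <= radius

ships = {
    "ship-A": (6.30, 3.40),
    "ship-B": (6.31, 3.41),
    "ship-C": (14.60, 9.10),
}

def proximity_pairs(ships, radius=0.05):
    """Return every pair of ships inside each other's watchbox."""
    names = sorted(ships)
    return [(a, b) for i, a in enumerate(names) for b in names[i + 1:]
            if within_watchbox(ships[a], ships[b], radius)]
```

Run over the whole track database each update cycle, this check surfaces only the close-approach pairs — the rule activations that would light up a watchboard — instead of forcing an analyst to watch thousands of tracks.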

To facilitate the monitoring of thousands of objects, rules can be visualized on a watchboard that uses colors, shapes, and other indicators to highlight rule activation and other triggers. A unique feature of LUX is the timeline view, which provides an interactive visualization of patterns across individual rules or sets of rules as shown in Figure 16.6 and how rules and triggers change over time.

16.4.4 Tipping and Cueing

The original USD(I) definition for ABI referred to “analysis and subsequent collection” and many models for ABI describe the need for “nonlinear TCPED” where the intelligence cycle is dynamic to respond to changing intelligence needs. This desire has often been restated as the need for automated collection in response to detected activities, or “automated tipping and cueing.”

Although the terms are usually used synonymously, a tip is the generation of an actionable report or notification of an event of interest. When tips are sent to human operators or analysts, they are usually called alerts. A cue is a related, more specific message sent to a collection system as the result of a tip. Automated tipping and cueing systems rely on tip/cue rules that map generated tips to the subsequent collection that requires cueing.
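A tip/cue rule table can be sketched as a mapping from tip types to the follow-on collection they should trigger, with unmatched tips falling back to an analyst alert. The tip types and tasking names below are invented for illustration.

```python
# Hypothetical tip/cue rule table: each tip type maps to the follow-on
# collection it should cue. Tip types and task names are invented.
TIP_CUE_RULES = {
    "vessel_departure": ["task:wide-area-imaging"],
    "anomalous_rf": ["task:sigint-dwell", "task:spot-imaging"],
}

def route(tip, rules=TIP_CUE_RULES):
    """Turn a tip into cue messages; unknown tips become analyst alerts."""
    cues = rules.get(tip["type"])
    if cues is None:
        return [("alert_analyst", tip["type"])]
    return [("cue", c) for c in cues]
```

The fallback branch matters: anything outside the rule table reaches a human rather than being silently dropped, which is one small hedge against the known-signature bias discussed below.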

Numerous community leaders have highlighted the importance of tipping and cueing to reduce operational timelines and optimize multi-INT collection.

Although many intelligence community programs conflate “ABI” with tipping and cueing, the latter is an inductive process that is more appropriately paired with monitoring and warning for known signatures after the ABI methods have been used to identify new behaviors from an otherwise innocuous set of data. In the case of modeling, remember that models only respond to the rules for which they are programmed; therefore, tipping and cueing solutions may improve efficiency but may inhibit discovery by reinforcing the need to monitor known places for known signatures instead of seeking the unknown unknowns.

16.5 Exploratory Models

Data mining and statistical learning approaches create models of behaviors and phenomena, but how are these models executed to gain insight? Exploratory modeling is a modeling technique used to gain a broad understanding of a problem domain, key drivers, and uncertainties before going into details.

16.5.1 Basic Exploratory Modeling Techniques

There are many techniques for exploratory modeling. Some of the most popular include Bayes nets, Markov chains, Petri nets, and discrete event simulation.

Discrete event simulation (DES) is another state transition and process modeling technique that models a system as a series of discrete events in time. In contrast to continuously executing simulations (see agent-based modeling and system dynamics in Sections 16.5.3 and 16.5.4), the system state is determined by activities that happen over user-defined time slices. Because events can cross multiple time slices, every time slice does not have to be simulated.
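A minimal DES keeps future events on a priority queue and jumps the clock from event to event, never simulating the empty time slices in between. The arrival/departure scenario and the 30-unit service offset are invented for illustration.

```python
import heapq

# Minimal discrete event simulation: events are (time, name) pairs on a
# priority queue; the clock advances event to event, not tick by tick.
def simulate(initial_events, handlers, horizon=100):
    """Run events in time order; handlers may schedule follow-on events."""
    queue = list(initial_events)
    heapq.heapify(queue)
    log = []
    while queue:
        t, name = heapq.heappop(queue)
        if t > horizon:
            break
        log.append((t, name))
        # Each handler entry schedules a follow-on event at t + offset.
        for offset, follow_on in handlers.get(name, []):
            heapq.heappush(queue, (t + offset, follow_on))
    return log

# Hypothetical process model: each "arrive" spawns a "depart" 30 units later.
handlers = {"arrive": [(30, "depart")]}
log = simulate([(10, "arrive"), (50, "arrive")], handlers)
```

Note that the departure spawned at t=40 executes before the second arrival at t=50: the queue, not the order of scheduling, determines execution order, which is what lets events span time slices without simulating each one.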

16.5.2 Advanced Exploratory Modeling Techniques

A class of modeling techniques for studying emergent behaviors and complex systems, with a focus on discovery, emerged due to shortfalls in the other modeling techniques.

16.5.3 Agent-Based Modeling (ABM)

ABM is an approach that develops complex behaviors by aggregating the actions and interactions of relatively simple “agents.” According to ABM pioneer Andrew Ilachinski, “agent-based simulations of complex adaptive systems are predicated on the idea that the global behavior of a complex system derives entirely from the low-level interactions among its constituent agents” [23]. Human operators define the goals of agents. In simulation, agents make decisions to optimize their goals based on perceptions of the environment. The dynamics of multiple, interacting agents often lead to interesting and complicated emergent behaviors.
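Emergence from local rules can be shown in a few lines. In this invented one-dimensional sketch, each agent's only rule is to step toward the average position of the other agents it perceives; no agent is told to form a group, yet the population clumps.

```python
# Minimal agent-based model: agents on a number line, each following one
# local rule -- step toward the mean position of the other agents.
# Starting positions and step count are invented for illustration.
def step(positions):
    """Advance every agent one synchronous step toward the others."""
    new_positions = []
    for i, x in enumerate(positions):
        others = [p for j, p in enumerate(positions) if j != i]
        mean = sum(others) / len(others)  # each agent senses the crowd
        new_positions.append(x + (1 if mean > x else -1 if mean < x else 0))
    return new_positions

agents = [0, 2, 50, 98, 100]
for _ in range(60):
    agents = step(agents)
spread = max(agents) - min(agents)  # global clumping emerges
```

The global behavior — the whole population collapsing toward a single cluster — appears nowhere in the rule; it derives entirely from the low-level interactions, which is Ilachinski's point in miniature.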

16.5.4 System Dynamics Model

System dynamics is another popular approach to complex systems modeling that defines relationships between variables in terms of stocks and flows. Developed by MIT professor Jay Forrester in the 1950s, system dynamics was concerned with studying complexities in industrial and business processes.

By the early 2000s, system dynamics emerged as a popular technique to model the human domain and its related complexities. Between 2007 and 2009, researchers from MIT and other firms worked with IARPA on the Pro-Active Intelligence (PAINT) program “to develop computational social science models to study and understand the dynamics of complex intelligence targets for nefarious activity” [26]. Researchers used system dynamics to examine possible drivers of nefarious technology development (e.g., weapons of mass destruction) and critical pathways and flows including natural resources, precursor processes, and intellectual talent.

Another aspect of the PAINT program was the design of probes. Since many of the indicators of complex processes are not directly observable, PAINT examined input activities like sanctions that may prompt the adversary to do something that is observable. This application of the system dynamics modeling technique is appropriate for anticipatory analytics because it allows analysts to test multiple hypotheses rapidly in a surrogate environment. In one of the examples cited by MIT researchers, analysts examined a probe targeted at human resources where the simulators examined potential impacts of hiring away key personnel resources with specialized skills. This type of interactive, anticipatory analysis lets teams of analysts examine potential impacts of different courses of action.
System dynamics models have the additional property that the descriptive model of the system also serves as the executable model when time constants and influence factors are added to the representation. The technique suffers from several shortcomings including the difficulty in establishing transition coefficients, the impossibility of model validation, and the inability to reliably account for known and unknown external influences on each factor.
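A stock-and-flow model in the spirit of the PAINT personnel probe can be sketched as a single "specialist talent" stock fed by hiring and drained by attrition, integrated over time. All rates, the initial stock, and the probe's effect are invented for illustration.

```python
# Stock-and-flow sketch: a "specialist talent" stock with a constant hiring
# inflow and a proportional attrition outflow, stepped forward in time.
# The probe raises attrition (key personnel hired away). Rates are invented.
def run(years, hire_rate, attrition_rate, stock=100.0, dt=0.25):
    """Euler-integrate the stock; returns its trajectory over time."""
    history = [stock]
    for _ in range(int(years / dt)):
        inflow = hire_rate * dt
        outflow = attrition_rate * stock * dt
        stock += inflow - outflow
        history.append(stock)
    return history

baseline = run(10, hire_rate=8, attrition_rate=0.05)
probe = run(10, hire_rate=8, attrition_rate=0.15)  # probe: poach specialists
```

Comparing the two trajectories is the surrogate-environment experiment the text describes: under the baseline the talent stock grows toward its equilibrium, while the probe drives it into decline — a testable "what if" without touching the real system. The caveats above still apply: the coefficients here are asserted, not validated.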

16.6 Model Aggregation

Analysts can improve the fidelity of anticipatory modeling by combining the results from multiple models. One framework for composing multiple models is the multiple information model synthesis architecture (MIMOSA), developed by Information Innovators. MIMOSA “aided one intelligence center to increase their target detection rate by 500% using just 30% of the resources previously tasked with detection, freeing up personnel to focus more on analysis” [29]. MIMOSA uses target sets (positive examples of target geospatial regions) to calibrate models for geospatial search criteria like proximity to geographic features, foundation GEOINT, and other spatial relationships. Merging multiple models, the software aggregates the salient features of each model to reduce false alarm rate and improve the predictive power of the combined model.

An approach for multiresolution modeling of sociocultural dynamics was developed by DARPA for the COMPOEX program in 2007. COMPOEX provided multiple types of agent-based, system dynamics, and other models in a variable resolution framework that allowed military planners to swap different models to test multiple courses of action across a range of problems. A summary of the complex modeling environment is shown in Figure 16.10. COMPOEX includes modeling paradigms such as concept maps, social networks, influence diagrams, differential equations, causal models, Bayes networks, Petri nets, dynamic system models, event-based simulation, and agent-based models [31]. Another feature of COMPOEX was a graphical scenario planning tool that allowed analysts to postulate possible courses of action, as shown in Figure 16.11.

Each of the courses of action in Figure 16.11 was linked to one or more of the models across the sociocultural behavior analysis hierarchy, abstracting the complexity of models and their interactions away from analysts, planners, and decision makers. The tool forced models at various resolutions to interact (Figure 16.10) to stimulate emergent dynamics so planners could explore plausible alternatives and resultant courses of action.

Objects can usually be modeled using physics-based or process models. However, an important tenet of ABI is that these objects are operated by someone (who). Knowing something about the “who” provides important insights into the anticipated behavior of those objects.

16.7 The Wisdom of Crowds

Most of the anticipatory analytic techniques in this chapter refer to analytic, algorithmic, or simulation-based models that exist as computational processes; however, it is important to mention a final and increasingly popular type of modeling approach based on human input and subjective judgment.

James Surowiecki, author of The Wisdom of Crowds, popularized the idea that aggregating information across a group surprisingly leads to better decisions than those made by any single member of the group. The book offers anecdotes to illustrate the argument, which essentially acts as a counterpoint to the maligned concept of “groupthink.” Surowiecki differentiates crowd wisdom from groupthink by identifying four criteria for a “wise crowd” [33]:

• Diversity of opinion: Each person should have private information even if it’s just an eccentric interpretation of known facts.
• Independence: People’s opinions aren’t determined by the opinions of those around them.
• Decentralization: People are able to specialize and draw on local knowledge.
• Aggregation: Some mechanism exists for turning private judgments into a collective decision.
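The aggregation criterion can be demonstrated numerically: combine independent private estimates with a simple mechanism (here the median) and compare the crowd's error to each individual's. The "truth" value and the estimates are invented for illustration.

```python
import statistics

# Wisdom-of-crowds sketch: diverse, independent estimates of an unknown
# quantity, aggregated by the median. All numbers are hypothetical.
truth = 1100
estimates = [700, 950, 1000, 1150, 1300, 1400, 250, 2100]

crowd = statistics.median(estimates)
crowd_error = abs(crowd - truth)
individual_errors = [abs(e - truth) for e in estimates]
beats = sum(err > crowd_error for err in individual_errors)
```

In this toy case the crowd estimate outperforms every individual, including the few who were close: the errors on either side of the truth cancel in aggregation. Violating independence — letting estimators see each other's answers — would correlate the errors and erase the effect, which is Surowiecki's distinction from groupthink.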

A related DARPA program called FutureMAP was canceled in 2003 amidst congressional criticism regarding “terrorism betting parlors”; however, the innovative idea was reviewed in depth by Yeh in Studies in Intelligence in 2006 [36]. Yeh found that prediction markets could be used to quantify uncertainty and eliminate ambiguity around certain types of judgments. George Mason University launched IARPA-funded SciCast, which forecasts scientific and technical advancements.

16.8 Shortcomings of Model-Based Anticipatory Analytics

By now, you may be experiencing frustration that none of the modeling techniques in this chapter are the silver bullet for all anticipatory analytic problems. The challenges and shortcomings for anticipatory modeling are voluminous.

The major shortcoming of all models is that they can’t do what they aren’t told. Rule-based models are limited to user-defined rules, and statistically generated models are limited to the provided data. As we have noted on multiple occasions, intelligence data is undersampled, incomplete, intermittent, error-prone, cluttered, and deceptive. All of these characteristics make it ill-suited for turnkey modeling.

A combination of many types of modeling approaches is needed to perform accurate, justifiable, broad-based anticipatory analysis. Each of these needs validation, but model validation, especially in the field of intelligence, is a major challenge. We seldom have “truth” data. The intelligence problem and its underlying assumptions are constantly evolving, as are attempts to solve it, a primary criterion for what Rittel and Webber call “wicked problems” [39].

Handcrafting models is slow, and a high level of skill is required to use many modeling tools. Furthermore, most of these tools do not allow easy sharing across other tools or across modeling approaches, complicating the ability to share and compare models. This challenge is exacerbated by the distributed nature of knowledge in the intelligence community.

When models exist, analysts depend heavily on “the model.” Sometimes it has been right in the past. Perhaps it was created by a legendary peer. Maybe there’s no suitable alternative. Overdependence on models and extrapolation of models into regions where they have not been validated leads to improper conclusions.

A final note: weather forecasting relies on physics-based models with thousands of real-time data feeds, decades of forensic data, ground truth, validated physics-based models, one-of-a-kind supercomputers, and a highly trained cadre of scientists, networked to share information and collaborate. It is perhaps the most modeled problem in the world. Yet weather “predictions” are often wrong, or at minimum imprecise. What hope is there for predicting human behaviors based on a few spurious observations?

16.9 Modeling in ABI

In the early days of ABI, analysts in Iraq and Afghanistan lacked the tools to formally model activities. As analysts assembled data in an area, they developed a tacit mental model of what was normal. Their geodatabases representing a pattern of life constituted a type of model of what was known. The gaps in those databases represented the unknown. Their internal rules for how to correlate data, separating the possibly relevant from the certainly irrelevant, composed part of a workflow model, as did their specific method for data conditioning and georeferencing.

However, relying entirely on human analysts to understand increasingly complex problem sets also presents challenges. Studies have shown that experts (including intelligence analysts) are subject to biases arising from perception, evaluation, omission, availability, anchoring, groupthink, and other factors.

Analytic models that treat facts and relationships explicitly provide a counterbalance to inherent biases in decision-making. Models can also quickly process large amounts of data and multiple scenarios without getting tired or bored and without discounting information.

Current efforts to scale ABI across the community focus heavily on activity, process, and object modeling as this standardization is believed to enhance information sharing and collaboration. Algorithmic approaches like JEMA, MIMOSA, PAINT, and LUX have been introduced to worldwide users.

16.10 Summary

Models provide a mechanism for integrating information and exploring alternatives, improving an analyst’s ability to discover the unknown. However, if models can’t be validated, can’t execute on sparse data, and struggle with the ambiguity of intelligence problems, can any of them be trusted? If “all models are wrong,” in the high-stakes business of intelligence analysis, are any of them useful?

Model creation requires a multidisciplinary, multifaceted, multi-intelligence approach to data management, analysis, visualization, statistics, correlation, and knowledge management. The best model builders and analysts discover that it’s not the model itself that enables anticipation. The exercise of data gathering, hypothesis testing, relationship construction, code generation, assumption definition, and exploration trains the analyst. To build a good model, the analyst has to consider multiple ways something might happen and to weigh the probability and consequence of different outcomes. The data landscape, adversary courses of action, complex relationships, and possible causes are all discovered in the act of developing a valid model. Surprisingly, many analysts who set out to create a model end by realizing that they have become one.

17

ABI in Policing

Patrick Biltgen and Sarah Hank

Law enforcement and policing share many techniques with intelligence analysis. Since 9/11, police departments have implemented a number of tools and methods from the discipline of intelligence to enhance the depth and breadth of analysis.

17.1 The Future of Policing

Although precise prediction of future events is impossible, there is a growing movement among police departments worldwide to leverage the power of spatiotemporal analytics and persistent surveillance to resolve entities committing crimes, understand patterns and trends, adapt to changing criminal tactics, and better allocate resources to the areas of greatest need. This chapter describes the integration of intelligence and policing— popularly termed “intelligence-led policing”—and its evolution over the past 35 years.

17.2 Intelligence-Led Policing: An Introduction

The term “intelligence-led policing” traces its origins to the 1980s at the Kent Constabulary in Great Britain. Faced with a sharp increase in property crimes and vehicle thefts, the department struggled with how to allocate officers amidst declining budgets [2, p. 144]. The department developed a two-pronged approach to address this constraint. First, it freed up resources so detectives had more time for analysis by prioritizing service calls to the most serious offenses and referring lower priority calls to other agencies. Second, through data analysis it discovered that “small numbers of chronic offenders were responsible for many incidents and that patterns also include repeat victims and target locations.”

The focus of analysis and problem solving is to analyze and understand the influencers of crime using techniques like statistical analysis, crime mapping, and network analysis. Police presence is optimized to deter and control these influencers while simultaneously collecting additional information to enhance analysis and problem solving. A technique for optimizing police presence is described in Section 17.5.

Intelligence-led policing applies analysis and problem solving techniques to optimize resource allocation in the form of focused presence and patrols. Accurate dissemination of intelligence, continuous improvement, and focused synchronized deployment against crime are other key elements of the method.

17.2.1 Statistical Analysis and CompStat

The concept of ILP was implemented in the New York City Police Department by police commissioner William Bratton and his deputy Jack Maple, building on methods Maple pioneered with the Transit Police in the 1980s. Using a computational statistics approach called CompStat, “crime statistics are collected, computerized, mapped and disseminated quickly” [5]. Wall-sized “charts of the future” mapped every element of the New York transit system. Crimes were mapped against the spatial nodes, and trends were examined.

Though its methods are controversial, CompStat is widely credited with a significant reduction in crime in New York. The method has since been implemented in other major cities in the United States with similar results, and the methods and techniques for statistical analysis of crime are now standard in criminology curricula.

17.2.2 Routine Activities Theory

A central tenet of ILP is based on Cohen and Felson’s routine activities theory, the general principle that human activities tend to follow predictable patterns in time and space. In the case of crime, the location of these events is defined by the influencers of crime (Figure 17.2). Koper provides exposition of these influencers: “crime does not occur randomly but rather is produced by the convergence in time and space of motivated offenders, suitable targets, and the absence of capable guardians.”

17.3 Crime Mapping

Crime mapping is a geospatial analysis technique that geolocates and categorizes crimes to detect hot spots, understand the underlying trends and patterns, and develop courses of action. Crime hot spots are a type of spatial anomaly that may be characterized at the address, block, block cluster, ward, county, geographic region, or state level—the precision of geolocation and the aggregation depends on the area of interest and the question being asked.

17.3.1 Standardized Reporting Enables Crime Mapping

In 1930, Congress enacted Title 28, Section 534 of the U.S. Code, authorizing the Attorney General, and subsequently the FBI, to standardize and gather crime information [6]. The FBI implemented the Uniform Crime Reporting Handbook, standardizing and normalizing the methods, procedures, and data formats for documenting criminal activity. This type of data conditioning enables information sharing and pattern analysis by ensuring consistent reporting standards across jurisdictions.

17.3.2 Spatial and Temporal Analysis of Patterns

Visualizing each observation as a dot at the city or regional level is rarely informative. For example, in the map in Figure 17.3, discerning a meaningful trend requires extensive data filtering by time of day, type of crime, and other criteria. One technique that is useful to understand trends and patterns is the aggregation of individual crimes into spatial regions.
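The aggregation step described above can be sketched in a few lines. This is an illustrative sketch only, not from the text: the field layout, cell size, and hot-spot threshold are assumptions.

```python
# Sketch: bin point crime data into square grid cells, then flag cells whose
# count meets a threshold as hot spots. Cell size and threshold are assumed.
from collections import Counter

def aggregate_to_grid(crimes, cell_size=0.01):
    """Bin (lat, lon) crime points into square cells and count per cell."""
    counts = Counter()
    for lat, lon in crimes:
        cell = (int(lat // cell_size), int(lon // cell_size))
        counts[cell] += 1
    return counts

def hot_spots(counts, threshold=3):
    """Cells whose count meets the threshold are flagged as hot spots."""
    return {cell for cell, n in counts.items() if n >= threshold}

# Toy data: three incidents cluster in one cell; one incident is isolated.
incidents = [(38.901, -77.041), (38.902, -77.042), (38.903, -77.043),
             (38.951, -77.101)]
counts = aggregate_to_grid(incidents)
```

The same counting logic applies whether the aggregation unit is a grid cell, block, ward, or census tract; only the binning function changes.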

Mapping aggregated crime data by census tract reveals that the rate of violent crime does not necessarily relate to quadrants, but rather to natural geographic barriers such as parks and rivers. Other geographic markers like landmarks, streets, and historical places may also act as historical anchors for citizens’ perspectives on crime.

Another advantage of aggregating data by area using a GIS is the ability to visualize change over time.

17.4 Unraveling the Network

Understanding hot spots and localizing the places where crimes tend to occur is only part of the story, and reducing crimes around hot spots treats only the symptoms rather than the cause of the problem. Crime mapping and intelligence-led policing focus on the ABI principles of collecting, characterizing, and locating activities and transactions. Unfortunately, these techniques alone are insufficient to provide entity resolution, identify and locate the actors and entities conducting activities and transactions, and identify and locate networks of actors. These techniques are generally a reactive, sustaining approach to managing crime. The next level of analysis addresses the root cause of crime: it goes after the heart of the network to resolve entities, understand their relationships, and proactively attack the seams of the network.

The Los Angeles Police Department’s Real-Time Analysis and Critical Response (RACR) division is a state-of-the-art, network enabled analysis cell that uses big data to solve crimes. Police vehicles equipped with roof-mounted license plate readers provide roving wide-area persistent surveillance by vacuuming up geotagged vehicle location data as they patrol the streets.

One of the tools used by analysts in the RACR is made by Palo Alto-based Palantir Technologies. Named after the all-seeing stones in J. R. R. Tolkien’s Lord of the Rings, Palantir is a data fusion platform that provides “a clean, coherent abstraction on top of different types of data that all describe the same real world problem.” Palantir enables “data integration, search and discovery, knowledge management, secure collaboration, and algorithmic analysis across a wide variety of data sources.” Using advanced artificial intelligence algorithms, coupled with an easy-to-use graphical interface, Palantir helps trained investigators identify connections between disparate databases to rapidly discover links between people.
Before Palantir was implemented, analysts missed these connections because field interview (FI) data, department of motor vehicles data, and automated license plate reader data were all held in separate databases. The department also lacked situational awareness about where its patrol cars were and how they were responding to requests for help. Palantir integrated analytic capabilities like “geospatial search, trend analysis, link charts, timelines, and histograms” to help officers find, visualize, and share data in near-real time.

17.5 Predictive Policing

Techniques like crime mapping, intelligence-led policing, and network analysis, when used together, enable all five principles of ABI and move toward the Minority Report nirvana described at the beginning of the chapter. This approach has been popularized as “predictive policing.”

Although some critics have questioned the validity of PredPol’s predictions, “during a four-month trial in Kent [UK], 8.5% of all street crime occurred within PredPol’s pink boxes…predictions from police analysts scored only 5%.”

18

ABI and the D.C. Beltway Sniper

18.5 Data Neutrality

Any piece of evidence may solve a crime. This is a well-known maxim in criminal investigation and is another way of stating the ABI pillar of data neutrality. Investigators rarely judge that one piece of evidence is more important to a case than another with equal pedigree. Evidence is evidence. Coupled with the concept of data neutrality, crime scene processing is essentially a process of incidental collection. When a crime scene is processed, investigators might know what they are looking for (a spent casing from a rifle) but may discover objects they were not looking for (an extortion note from a killer). Crime scene specialists enter a crime scene with an open mind and collect everything available. They generally make no value judgment on the findings during collection, nor do they discard any evidence, for who knows what piece of detritus might be fundamental to building a case.

The lesson learned here, which is identical to the lesson learned within the ABI community, is to collect and keep everything; one never knows if and when it will be important.

18.6 Summary

The horrific events that comprised the D.C. snipers’ serial killing spree make an illustrative case study for the application of the ABI pillars. By examining the sequence of events and the analysis that was performed, the following conclusions can be drawn. First, georeferencing all data would have improved understanding of the data and provided context. Unfortunately, the means to do so did not exist at the time. Second, integrating before exploitation might have prevented law enforcement from erroneously tracking and stopping white cargo vans. Again, the tools to do this integration do not appear to have existed in 2002.

Interestingly, sequence neutrality and data neutrality were applied to great effect. Once a caller tied two separate crimes together, law enforcement was able to use all the information collected in the past to solve the current crime.

19

Analyzing Transactions in a Network

William Raetz

One of the key differences in the shift from target-based intelligence to ABI is that targets of interest become the output of deductive, geospatial, and relational analysis of activities and transactions. As RAND’s Gregory Treverton noted in 2011, imagery analysts “used to look for things and know what we were looking for. If we saw a Soviet T-72 tank, we knew we’d find a number of its brethren nearby. Now…we’re looking for activities or transactions. And we don’t know what we’re looking for” [1, p. ix]. This chapter demonstrates deductive and relational analysis using simulated activities and transactions, providing a real-world application for entity resolution and the discovery of unknowns.

19.1 Analyzing Transactions with Graph Analytics

Graph analytics, derived from the discrete mathematical discipline of graph theory, is a technique for examining pairwise relationships within data. Numerous algorithms and visualization tools for graph analytics have proliferated over the past 15 years. This example demonstrates how simple geospatial and relational analysis tools can be used to understand complex patterns of movement—the activities and transactions conducted by entities—over a city-sized area. This scenario involves an ABI analyst looking for a small “red network” of terrorists hiding among a civilian population.
Hidden within the normal patterns of the 4,623 entities is a malicious network. The purpose of this exercise is to analyze the data using ABI principles to unravel this network: to discover the signal hidden in the noise of everyday life.

The concepts of “signal” and “noise,” which have their origin in signal processing and electrical engineering, are central to the analysis of nefarious actors that operate in the open but blend into the background. Signal is the information relevant to an analyst contained in the data; noise is everything else. For instance, a “red,” or target, network’s signal might consist of activity unique to achieving their aims; unusual purchases, a break in routine, or gatherings at unusual times of day are all possible examples of signal.

Criminal and terrorist networks have become adept at masking their signal—the “abnormal” activity necessary to achieve their aims—in the noise of the general population’s activity. To increase the signal-to-noise ratio (SNR), an analyst must determine inductively or deductively what types of activities constitute the signal. In a dynamic, noisy, densely populated environment, this is difficult unless the analyst can narrow the search space by choosing a relevant area of interest, choosing a time period when enemy activity is likely to be greater, or beginning with known watch listed entities as the seeds for geochaining or geospatial network analysis.

19.2 Discerning the Anomalous

Separating out the signal from the background noise is as much art as science. As an analyst becomes more familiar with a population or area, “normal,” or background, behavior becomes inherent through tacit model building and hypothesis testing.

The goals and structure of the target group define abnormal activity. For example, the activity required to build and deploy an improvised explosive device (IED), present in the example data set, will be very different from that required for money laundering. A network whose aim is to build and deploy an IED may consist of bomb makers, procurers, security, and leadership within a small geographic area. Knowing the general goals and structure of the target group helps identify the types of activities that constitute signal.

Nondiscrete locations where many people meet will have a more significant activity signature. The analyst will also have to consider how entities move between these locations and discrete locations that have a weaker signal but contribute to a greater probability of resolving a unique entity of interest. An abnormal pattern of activity around these discrete locations is the initial signal the analyst is looking for.
At this point, the analyst has a hypothesis: a general plan based on his knowledge of the key types of locations a terrorist network requires. He will search for locations that look like safe houses and warehouses based on events and transactions. When the field has been narrowed to a reasonable set of possible discrete locations, he will initiate forensic backtracking of transactions to identify additional locations and compile a rough list of red network members from the participating entities. This is an implementation of the “where-who” concept from Chapter 5.

19.3 Becoming Familiar with the Data Set

After receiving the data and the intelligence goal, the analyst’s first step is to familiarize himself with the data. This will help inform what processing and analytic tasks are possible; a sparse data set might require more sophistication, while a very large one may require additional processing power. In this case, because the available data is synthetic, the data is presented in three clean comma-separated value (.csv) files (Table 19.1). Analysts typically receive multiple files that may come from different sources or may be collected/created at different times.

It is important to note that the activity patterns for a location represent a pattern-of-life element for the entities in that location and for participating entities. The pattern-of-life element provides some hint to the norms in the city. It may allow the analyst to classify a building based on the times and types of activities and transactions (Section 19.4.1) and to identify locations that deviate from these cultural norms. Deducing why locations deviate from the norm—and whether these deviations are significant—is part of the analytic art of separating signal from background noise.

19.4.1 Method: Location Classification

One of the most technically complex methods of finding suspicious locations is to interpret these activity patterns through a series of rules to determine which are “typical” of a certain location type. For instance, if a location exhibits a very typical workplace pattern, as evidenced by its distinctive double peak, it can be eliminated from consideration, based on the assumption that the terrorist network prefers to avoid conducting activities at the busiest times and locations.
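A rule of this kind might be sketched as follows. The peak windows, the 24-bin hourly layout, and the comparison against the daily mean are all assumptions for illustration; they are not the book’s actual rules.

```python
# Hypothetical rule-based detector for the "distinctive double peak" of a
# workplace activity pattern. Hour windows are assumptions, not from the text.
def is_workplace(hourly_counts):
    """True if activity shows morning and evening peaks with a midday lull."""
    assert len(hourly_counts) == 24
    mean = sum(hourly_counts) / 24
    morning = max(hourly_counts[6:10])   # 06:00-09:00 arrival window
    evening = max(hourly_counts[16:20])  # 16:00-19:00 departure window
    midday = max(hourly_counts[11:15])   # lull between the two peaks
    return morning > mean and evening > mean and midday < min(morning, evening)

# Toy activity histograms: an office with commute peaks, and a home with
# activity concentrated in the evening.
office = [0]*6 + [2, 9, 12, 5, 3, 2, 2, 2, 2, 3, 8, 11, 6, 2] + [0]*4
home = [2, 2, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 3, 4, 5, 5, 4, 3]
```

Locations matching the workplace pattern are eliminated from consideration, narrowing the pool of candidate safe houses and warehouses.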

Because there is a distinctive and statistically significant difference between discrete and nondiscrete locations using the average time distance technique (Section 19.4.2), the analyst can use the average time between activities to identify probable home locations. He calculates the average time between activities for every available uncategorized location and treats all the locations with an average greater than three as single-family home locations.
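The threshold-of-three rule above can be expressed compactly. This is a sketch: the data layout (per-location timestamp lists) and the time units are assumptions, though the threshold value comes from the text.

```python
# Sketch: classify locations as probable homes when the average time between
# consecutive activities exceeds a threshold (3, per the text; units assumed).
def average_gap(timestamps):
    """Mean time between consecutive activities at one location."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    return sum(gaps) / len(gaps) if gaps else float("inf")

def classify_homes(activity_by_location, threshold=3.0):
    """Locations whose average gap exceeds the threshold are treated as homes."""
    return {loc for loc, ts in activity_by_location.items()
            if len(ts) > 1 and average_gap(ts) > threshold}

activity = {
    "loc_a": [0, 8, 16, 24],      # sparse: average gap of 8
    "loc_b": [0, 1, 2, 3, 4, 5],  # busy: average gap of 1
}
```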

19.4.2 Method: Average Time Distance

The method outlined in Section 19.4.1 is an accurate but cautious way of using activity patterns to classify location types. In order to get a different perspective on these locations, instead of looking at the peaks of activity patterns, the analyst will next look at the average time between activities.

19.4.3 Method: Activity Volume

The first steps of the analysis process filtered out busy workplaces (nondiscrete locations) and single-family homes (discrete locations), leaving the analyst with a subset of locations that represent unconventional workplaces and other locations that may function as safe houses or warehouses.

The analyst uses an activity volume filter to remove all of the remaining locations that have many more activities than expected. He also removes all locations with no activities, assuming the red network used a location shortly before its attack.
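The volume filter reduces to a single pass over per-location counts. The cutoff value and the sample data below are illustrative assumptions.

```python
# Sketch of the activity-volume filter: drop locations with many more
# activities than expected, and drop locations with no activities at all
# (the red network is assumed to have used each location before the attack).
def volume_filter(activity_counts, max_expected=20):
    """Keep locations with at least one activity but no more than max_expected."""
    return {loc for loc, n in activity_counts.items() if 0 < n <= max_expected}

counts = {"warehouse": 4, "stadium": 250, "vacant_lot": 0, "safe_house": 7}
```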

19.4.4 Activity Tracing

The analyst’s next step is to choose a few of the best candidates for additional collection. If 109 is too many locations to examine in the time required by the customer, he can create a rough prioritization by making a final assumption about the behavior of the red network by assuming they have traveled directly between at least two of their locations.
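The direct-travel assumption gives a simple prioritization rule: keep only the candidates that share a direct track with another candidate. The track representation and location identifiers below are illustrative, not from the data set.

```python
# Sketch: prioritize candidate locations linked to another candidate by a
# direct track. Tracks are assumed to be (origin, destination) pairs.
def prioritize(candidates, tracks):
    """Return candidates that share a direct track with another candidate."""
    cand = set(candidates)
    linked = set()
    for origin, destination in tracks:
        if origin in cand and destination in cand:
            linked.update((origin, destination))
    return linked

candidates = ["L12", "L47", "L88", "L109"]
tracks = [("L12", "L47"), ("L47", "Lxx"), ("L88", "L03")]
```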

19.5 Analyzing High-Priority Locations with a Graph

To get a better understanding of how these locations are related, and who may be involved, the analyst creates a network graph of locations, using tracks to infer relationships between locations. The network graph for these locations is presented in Figure 19.7.
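A minimal version of this graph construction needs only an adjacency map; a dedicated graph tool would render a figure like Figure 19.7 from the same structure. The track data and location names are assumptions for illustration.

```python
# Sketch: build an undirected location graph from tracks and use degree
# (number of distinct linked locations) to suggest likely network hubs.
from collections import defaultdict

def build_graph(tracks):
    """Undirected adjacency map: an edge links each track's two endpoints."""
    adj = defaultdict(set)
    for a, b in tracks:
        adj[a].add(b)
        adj[b].add(a)
    return adj

def degree(adj):
    """Locations touched by the most distinct tracks stand out as hubs."""
    return {node: len(neighbors) for node, neighbors in adj.items()}

tracks = [("safe_house", "warehouse"), ("safe_house", "market"),
          ("warehouse", "market"), ("safe_house", "overlook")]
adj = build_graph(tracks)
```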

19.6 Validation

At this point, the analyst has taken a blank slate and turned a hypothesis into a short list of names and locations.

19.7 Summary

This example demonstrates deductive methods for activity and transaction analysis that reduce the number of possible locations to a much smaller subset using scripting, hypotheses, analyst-derived rules, and graph analysis. To get started, the analyst had to wrestle with the data set to become acquainted with its contents and the patterns of life for the area of interest. He formed a series of assumptions about the behavior of the population and tested these by analyzing graphs of activity sliced different ways. Then the analyst implemented a series of filters to reduce the pool of possible locations. Focusing on locations and then resolving entities that participated in activities and transactions—georeferencing to discover—was the only way to triage a very large data set with millions of track points. Because locations have a larger activity signature than individuals in the data set, it is easier to develop and test hypotheses on the activities and transactions around a location and then use this information as a tip for entity-focused graph analytics.

Through a combination of these filters the analyst removed 5,403 out of 5,445 locations. This allowed for highly targeted analysis (and in the real world, subsequent collection). In the finale of the example, two interesting entities were identified based on their relationship to the suspicious locations. In addition to surveilling these locations, these entities and their proxies could be targeted for collection and analysis.

21

Visual Analytics for Pattern-of-Life Analysis

This chapter integrates concepts for visual analytics with the basic principles of georeference to discover to analyze the pattern-of-life of entities based on check-in records from a social network.

It presents several examples of complex visualizations used to graphically understand entity motion and relationships across named locations in Washington, D.C., and the surrounding metro area. The purpose of the exercise is to discover entities with similar patterns of life and cotraveling motion patterns—possibly related entities. The chapter also examines scripting to identify intersecting entities using the R statistical language.

21.1 Applying Visual Analytics to Pattern-of-Life Analysis

Visual analytic techniques provide a mechanism for correlating data and discovering patterns.

21.1.3 Identification of Cotravelers/Pairs in Social Network Data

Visual analytics can be used to identify cotravelers, albeit with great difficulty.

Further drill down (Figure 21.5) reveals 11 simultaneous check-ins, including one three-party simultaneous check-in at a single popular location.

The next logical question—an application of the where-who-where concept discussed in Chapter 5—is “do these three individuals regularly interact?”

21.2 Discovering Paired Entities in a Large Data Set

Visual analytics is a powerful but often laborious and serendipitous approach to exploring data sets. An alternative approach is to write code that seeks mathematical relations in the data. Often, the best approach is to combine the techniques.

It is very difficult to analyze data with statistical programming languages if the analysts/data scientists do not know what they are looking for. Visual analytic exploration of the data is a good first step to establish hypotheses, rules, and relations that can then be coded and processed in bulk for the full dataset.
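Once visual exploration suggests a hypothesis (here, that co-located simultaneous check-ins indicate cotravelers), the bulk processing is straightforward to code. The chapter describes scripting this in R; the sketch below is an equivalent in Python, with record layout and thresholds assumed.

```python
# Sketch: count, for each pair of users, how often they check in at the same
# place and time; pairs above a threshold are candidate cotravelers.
from collections import defaultdict
from itertools import combinations

def cotraveler_pairs(checkins, min_shared=2):
    """checkins: (user, location, time). Pairs co-located min_shared+ times."""
    by_slot = defaultdict(set)
    for user, loc, t in checkins:
        by_slot[(loc, t)].add(user)
    pair_counts = defaultdict(int)
    for users in by_slot.values():
        for pair in combinations(sorted(users), 2):
            pair_counts[pair] += 1
    return {p: n for p, n in pair_counts.items() if n >= min_shared}

# Toy records echoing the users discussed below (IDs from the text; the
# events themselves are invented for illustration).
checkins = [("129395", "zoo", 10), ("37398", "zoo", 10),
            ("129395", "museum", 14), ("37398", "museum", 14),
            ("190", "zoo", 10)]
```

A single shared check-in (like user 190 at the zoo) falls below the threshold, which is exactly the ambiguity discussed later in this section.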

Integrating open-source data, the geolocations can be correlated with named locations like the National Zoo and the Verizon Center. Open-source data also tells us that the event at the Verizon Center was a basketball game between the Utah Jazz and Washington Wizards. The pair cotravel only for a single day over the entire data set. We might conclude that this entity is an out-of-town guest. That hypothesis can be tested by returning to the 6.4-million-point worldwide dataset.

User 129395 checked in 122 times and only in Stafford and Alexandria, Virginia, and the District of Columbia. During the day, his or her check-ins are in Alexandria, near Duke St. and Telegraph Rd. (work). In the evenings, he or she can be found in Stafford (home). This is an example of identifying geospatial locations based on the time of day and the pattern-of-life elements present in this self-reported data set.
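The time-of-day labeling used above (daytime check-ins suggest work, evening check-ins suggest home) can be sketched as follows. The hour boundaries and record format are assumptions.

```python
# Sketch: label the modal daytime location "work" and the modal evening
# location "home" from self-reported check-ins. Hour windows are assumed.
from collections import Counter

def label_locations(checkins, work_hours=range(9, 18)):
    """checkins: (location, hour-of-day). Returns inferred work/home labels."""
    day, evening = Counter(), Counter()
    for loc, hour in checkins:
        (day if hour in work_hours else evening)[loc] += 1
    work = day.most_common(1)[0][0] if day else None
    home = evening.most_common(1)[0][0] if evening else None
    return {"work": work, "home": home}

# Toy pattern echoing the example: daytime in Alexandria, evenings in Stafford.
checkins = [("Alexandria", 10), ("Alexandria", 14), ("Stafford", 20),
            ("Stafford", 22), ("Stafford", 21)]
```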
Note that another user, 190, also checks in at the National Zoo at the same time as the cotraveling pair. We do not know if this entity was cotraveling the entire time and decided to check in at only a single location or if this is an unrelated entity that happened to check in near the cotraveling pair while all three of them were standing next to the lions and tigers exhibit. The full data set finds user 190 all over the world, but his or her pattern of life places him or her most frequently in Denver, Colorado.

And what about the other frequent cotraveler, 37398? The pair coincidentally checked in 10 times over a four-month period, between the hours of 14:00 and 18:00 and 21:00 and 23:59 at the Natural History Museum, Metro Center, the National Gallery of Art, and various shopping centers and restaurants around Stafford, Virginia. We might conclude that this is a family member, child, friend, or significant other.

21.3 Summary

This example demonstrates how a combination of spatial analysis, visual analytics, statistical filtering, and scripting can be combined to understand patterns of life in real “big data” sets; however, conditioning, ingesting, and filtering this data to create a single example took more than 12 hours.

Because this data requires voluntary check-ins at registered locations, it is an example of the sparse data typical of intelligence problems. If the data consisted of beaconed location data from GPS-enabled smartphones, it would be possible to identify multiple overlapping locations.

22

Multi-INT Spatiotemporal Analysis

A 2010 study by OUSD(I) identified “an information domain to combine persistent surveillance data with other INTs with a ubiquitous layer of GEOINT” as one of 16 technology gaps for ABI and human domain analytics [1]. This chapter describes a generic multi-INT spatial, temporal, and relational analysis framework widely adopted by commercial tool vendors to provide interactive, dynamic data integration and analysis to support ABI techniques.

22.1 Overview

ABI analysis tools are increasingly instantiated using web-based, thin client interfaces. Open-source web mapping and advanced analytic code libraries have proliferated.

22.2 Human Interface Basics

A key feature for spatiotemporal-relational analysis tools is the interlinking of multiple views, allowing analysts to quickly understand how data elements are located in time and space, and in relation to other data.

22.2.1 Map View

An “information domain for combining persistent surveillance data on a ubiquitous foundation of GEOINT” makes the map the central feature of the analysis environment. Spatial searches are performed using a bounding box (1). Events are represented as geolocated dots or symbols (2), annotated with short text descriptions. Tracks, a type of transaction, are represented as lines with a green dot for starts and a red dot or X for stops (3). Depending on the nature of the key intelligence question (KIQ) or request for information (RFI), the analyst can choose to discover and display full tracks or only starts and stops. Clicking on any event or track point in the map brings up metadata describing the data element. Information like speed and heading accompanies track points. Other metadata related to the collecting sensor may be appended to events and transactions collected from unique sensors. Uncertainty around event position may be represented by a 95% confidence ellipse at the time of collection (4).
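The 95% confidence ellipse in item (4) can be derived from a 2-D position covariance: the ellipse axes follow the covariance eigenvectors, with semi-axis lengths scaled by the chi-square value for 2 degrees of freedom at 95% (5.991). The sketch below assumes a simple covariance representation; it is not a description of any particular tool.

```python
# Sketch: 95% confidence ellipse parameters from a 2x2 position covariance
# [[sxx, sxy], [sxy, syy]], using the closed-form 2x2 eigendecomposition.
import math

def confidence_ellipse(sxx, syy, sxy, k=5.991):
    """Return (semi-major, semi-minor, orientation in radians)."""
    tr, det = sxx + syy, sxx * syy - sxy * sxy
    disc = math.sqrt(max(tr * tr / 4 - det, 0.0))
    lam1, lam2 = tr / 2 + disc, tr / 2 - disc   # eigenvalues, lam1 >= lam2
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)  # major-axis orientation
    return math.sqrt(k * lam1), math.sqrt(k * max(lam2, 0.0)), angle

# Axis-aligned example: variance 4 in x, 1 in y, no correlation.
a, b, theta = confidence_ellipse(4.0, 1.0, 0.0)
```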

22.2.2 Timeline View

Temporal analysis requires a timeline that depicts spatial events and transactions as they occur in time. Many geospatial tools—originally designed to make a static map that integrates layered data at a point in time—have added timelines to allow animation of data or the layering of temporal data upon foundational GEOINT. Most tools instantiate the timeline below the spatial view (Google Earth uses a timeline slider in the upper left corner of the window).

22.2.3 Relational View

Relational views are popular in counterfraud and social network analysis tools like Detica NetReveal and Palantir. By integrating a relational view or a graph with the spatiotemporal analysis environment, it is possible to link different spatial locations, events, and transactions by relational properties.

A grouping of multisource events and transactions is an activity set (Figure 22.2). The activity set acts as a “shoebox” for sequence-neutral analysis. In the course of examining data in time and space, an analyst identifies data that appear to be related without yet knowing the nature of the relationship. By drawing a box around the data elements, he or she can group them into an activity set and save them for later analysis, sharing, or linking with other activity sets.
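A minimal data structure captures the shoebox idea: events of any source type go in together, and sets can be linked before the relationship is understood. Everything below (class shape, field names, sample events) is an illustrative assumption.

```python
# Sketch of the activity-set "shoebox": group multisource events whose
# relationship is suspected but not yet known, and link sets for sharing.
class ActivitySet:
    def __init__(self, name):
        self.name = name
        self.events = []    # multisource events and transactions, data-neutral
        self.links = set()  # names of related activity sets

    def add(self, event):
        self.events.append(event)

    def link(self, other):
        """Record a two-way association between activity sets."""
        self.links.add(other.name)
        other.links.add(self.name)

meetings = ActivitySet("market_meetings")
meetings.add({"type": "track_stop", "loc": "market", "t": "08:10"})
visits = ActivitySet("warehouse_visits")
visits.add({"type": "event", "loc": "warehouse", "t": "21:40"})
meetings.link(visits)
```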

By linking activity sets, the analyst can describe a filtered set of spatial and temporal events as a series of related activities. Typically, linked activity sets form the canvas for information sharing across multiple analysts working the same problem set. The relational view leverages graphs and may also instantiate semantic technologies like RDF (the Resource Description Framework) to provide context to relationships.

22.3 Analytic Concepts of Operations

This section describes some of the basic analysis principles widely used in spatiotemporal analysis tools.

22.3.1 Discovery and Filtering

In the traditional, target-based intelligence cycle, analysts would enter a target identifier to pull back all information about the target, exploit that information, report on the target, and then go on to the next target. In ABI analysis, the targets are unknown at the onset of analysis and must be discovered through deductive analytics, reasoning, pattern analysis, and information correlation.

Searching for data may result in querying many distributed databases. Results are presented to the user as map/timeline renderings. Analysts typically select a smaller time slice and animate through the data to exploit transactions or attempt to recognize patterns. This process is called data triage. Instead of requesting information through a precisely phrased query, ABI analytics prefers to bring all available data to the analyst’s desktop so the analyst can determine whether the information has value. This process implements the ABI principles of data neutrality and integration before exploitation simultaneously. It also places a large burden on query and visualization systems—most of the data returned by a query will be discarded as irrelevant. However, filtering out data a priori risks losing valuable correlatable information in the area of interest.
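The time-slice step of data triage might look like the following sketch: all query results are retained (data neutrality), and the analyst animates through narrow windows. The timestamps and event kinds are invented for illustration.

```python
from datetime import datetime, timedelta

# All query results are kept; nothing is filtered a priori.
results = [
    (datetime(2014, 5, 1, 8, 5),  "track start"),
    (datetime(2014, 5, 1, 8, 40), "meeting"),
    (datetime(2014, 5, 1, 11, 20), "track stop"),
]

def time_slice(events, start: datetime, width: timedelta):
    """Return only the events inside one animation window."""
    return [e for e in events if start <= e[0] < start + width]

# Animating hour by hour: the 08:00 window holds two events.
window_8 = time_slice(results, datetime(2014, 5, 1, 8, 0), timedelta(hours=1))
```

Stepping `start` forward window by window and re-rendering the map is the essence of the animation loop analysts use to spot patterns.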

22.3.2 Forensic Backtracking

Analysts use the framework for forensic backtracking, an embodiment of the sequence neutral paradigm of ABI. PV Labs describes a system that “indexes data in real time, permitting the data to be used in various exploitation solutions… for backtracking and identifying nodes of other multi-INT sources”.

Exelis also offers a solution for “activity-based intelligence with forensic capabilities establishing trends and interconnected patterns of life including social interactions, origins of travel and destinations” [4].
Key events act as tips to analysts or the start point for forward or forensic analysis of related data.

22.3.3 Watchboxes and Alerts

A geofence is a virtual perimeter used to trigger actions based on geospatial events [6]. Metzger describes how this concept was used by GMTI analysts to provide real-time indication and warning of vehicle motion.

Top analysts continually practice discovery and deductive filtering to update watchboxes with new hypotheses, triggers, and thresholds.

Alerts may result in subsequent analysis or collection. For example, alerts may be sent to collection management authorities with instructions to collect on the area of interest with particular capabilities when events and transactions matching certain filters are detected. When alerts go to collection systems, they are typically referred to as “tips” or “cues.”
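A geofence trigger reduces to a point-in-region test. The sketch below assumes a circular watchbox defined by a center and radius; the haversine great-circle formula is standard, while the alert wiring and thresholds are hypothetical.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def watchbox_alert(event_lat, event_lon, center, radius_km):
    """Trigger an alert when a detected event falls inside the watchbox."""
    return haversine_km(event_lat, event_lon, *center) <= radius_km

# A vehicle-motion event roughly 5 km from the center trips a 10-km watchbox.
alert = watchbox_alert(33.355, 44.40, (33.31, 44.40), 10.0)
```

In practice the `True` result would be routed as a tip or cue to collection systems, or as an alert to an analyst, rather than returned to a script.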

22.3.4 Track Linking

As described in Chapter 12, automated track extraction algorithms seldom produce complete tracks from an object’s origin to its destination. Various confounding factors like shadows and obstructions cause track breaks. A common feature in analytic environments is the ability to manually link tracklets based on metadata.
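A plausibility check for manually linking tracklets might combine the space-time gap with kinematic consistency, as in this sketch. The tuple layout, slack factor, and thresholds are invented for illustration, not taken from any fielded tracker.

```python
import math

def plausible_link(end, start, max_gap_s=120.0, max_heading_diff=30.0):
    """Decide whether tracklet `end` could continue as tracklet `start`
    after a break (e.g., a shadow or obstruction).
    Each point is (t_seconds, x_km, y_km, heading_deg, speed_kmh)."""
    t1, x1, y1, h1, v1 = end
    t2, x2, y2, h2, v2 = start
    gap = t2 - t1
    if not (0 < gap <= max_gap_s):
        return False
    # Distance covered during the gap must be reachable at ~the last speed.
    dist_km = math.hypot(x2 - x1, y2 - y1)
    reachable_km = v1 * gap / 3600.0 * 1.5   # 50% slack for acceleration
    # Heading must be roughly consistent, allowing for 0/360 wraparound.
    d = abs(h1 - h2) % 360.0
    heading_ok = min(d, 360.0 - d) <= max_heading_diff
    return dist_km <= reachable_km and heading_ok
```

A candidate pair passing this check would still be presented to the analyst for confirmation; the metadata only narrows the choices.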

22.4 Advanced Analytics

Another key feature of many ABI analysis tools is the implementation of “advanced analytics”—algorithmic processes that automate routine functions or synthesize large data sets into enriched visualizations.

Density maps allow pattern analysis across large spatial areas. Also called “heat maps,” these visualizations sum event and transaction data to create a raster layer with hot spots in areas with large numbers of activities. Data aggregation is defined over a certain time interval. For example, by setting a weekly time threshold and creating multiple density maps, analysts can quickly understand how patterns of activity change from week to week.
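Building a density map amounts to counting events per grid cell; adding a weekly key shows how the pattern shifts over time. The cell size and sample points below are illustrative only.

```python
from collections import Counter
from datetime import date

def density(points, cell_deg=0.01):
    """Sum events into grid cells, keyed (iso_week, row, col) -> count."""
    grid = Counter()
    for d, lat, lon in points:
        week = d.isocalendar()[1]
        grid[(week, int(lat // cell_deg), int(lon // cell_deg))] += 1
    return grid

points = [
    (date(2014, 5, 1), 33.312, 44.401),
    (date(2014, 5, 1), 33.312, 44.402),
    (date(2014, 5, 8), 33.312, 44.401),   # following week, same hot spot
]
grid = density(points)
```

Rendering each week's counter as a raster layer, with high counts drawn "hot," yields the familiar heat-map view and lets analysts compare week-to-week change at a glance.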

Density maps allow analysts to quickly get a sense for where (and when) activities tend to occur. This information is used in different ways depending on the analysis needs. If events are very rare like missile launches or explosions, density maps focus the analyst’s attention to these key events.

In the case of vehicle movement (tracks), density maps identify where most traffic tends to occur. This essentially identifies nondiscrete locations and may serve as a contraindicator for interesting nodes at which to exploit patterns of life. For example, in an urban environment, density maps highlight major shopping centers and crowded intersections. In a remote environment, density maps of movement data may tip analysts to interesting locations.

Other algorithms process track data to find intersections and overlaps. For example, movers with similar speed and heading in close proximity appear as cotravelers. When they are in a line, they may be considered a convoy. When two movers come within a certain proximity for a certain time, this can be characterized as a “meeting.” Mathematical relations with different time and space thresholds identify particular behaviors or compound events.
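The "meeting" relation described above reduces to a proximity test sustained over time. In this sketch the tracks are assumed to be time-aligned samples, and the distance and duration thresholds are invented.

```python
import math

def is_meeting(track_a, track_b, max_dist_km=0.05, min_duration_s=300):
    """Tracks are time-aligned samples (t_seconds, x_km, y_km).
    Two movers 'meet' when they stay within max_dist_km
    for at least min_duration_s."""
    close_since = None
    for (t, ax, ay), (_, bx, by) in zip(track_a, track_b):
        if math.hypot(ax - bx, ay - by) <= max_dist_km:
            if close_since is None:
                close_since = t
            if t - close_since >= min_duration_s:
                return True
        else:
            close_since = None   # proximity broken; restart the clock
    return False
```

Cotraveler and convoy detection follow the same pattern with added speed, heading, and formation constraints on the compared samples.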

22.5 Information Sharing and Data Export

Many frameworks feature geoannotations to enhance spatial storytelling. These geospatially and temporally referenced “callout boxes” highlight key events and contain analyst-entered metadata describing a complex series of events and transactions.

Not all analysts operate within an ABI analysis tool but could benefit from the output of ABI analysis. Tracks, image chips, event markers, annotations, and other data in activity sets can be exported in KML, the standard format for Google Earth and many spatial visualization tools. KML files with temporal metadata enable the time slider within Google Earth, allowing animation and playback of the spatial story.
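A minimal KML export with temporal metadata, enough to drive the Google Earth time slider, can be sketched as below. The event list is invented, and real exports typically add styling, descriptions, and track geometry; note that KML coordinates are ordered longitude, latitude.

```python
def to_kml(events):
    """events: list of (iso_time, lat, lon, name) tuples."""
    placemarks = "".join(
        "<Placemark>"
        f"<name>{name}</name>"
        f"<TimeStamp><when>{when}</when></TimeStamp>"
        f"<Point><coordinates>{lon},{lat},0</coordinates></Point>"
        "</Placemark>"
        for when, lat, lon, name in events
    )
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2">'
            f"<Document>{placemarks}</Document></kml>")

doc = to_kml([("2014-05-01T08:00:00Z", 33.312, 44.401, "track start")])
```

Loading the resulting file in Google Earth (or any KML-aware viewer) enables the time slider, allowing playback of the spatial story for analysts outside the ABI tool.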

22.6 Summary

Over the past 10 years, several tools have emerged that use the same common core features to aid analysts in understanding large amounts of spatial and temporal data. At the 2014 USGIF GEOINT Symposium, tool vendors including BAE Systems [13, 14], Northrop Grumman [15], General Dynamics [16], Analytical Graphics [17, 18], DigitalGlobe, and Leidos [19] showcased advanced analytics tools similar to the above [20]. Georeferenced events and transactions, temporally explored and correlated with other INT sources allow analysts to exploit pattern-of-life elements to uncover new locations and relationships. These tools continue to develop as analysts find new uses for data sources and develop tradecraft for combining data in unforeseen ways.

23

Pattern Analysis of Ubiquitous Sensors

The “Internet of Things” is an emergent paradigm where sensor-enabled digital devices record and stream increasing volumes of information about the patterns of life of their wearer, operator, holder—the so-called user. We, as the users, leave a tremendous amount of “digital detritus” behind in our everyday activities. Data mining reveals patterns of life, georeferences activities, and resolves entities based on their activities and transactions. This chapter demonstrates how the principles of ABI apply to the analysis of humans, their activities, and their networks…and how these practices are employed by commercial companies against ordinary citizens for marketing and business purposes every day.

23.3 Integrating Multiple Data Sources from Ubiquitous Sensors

Most of the diverse sensor data collected by increasingly proliferated commercial sensors is never “exploited.” It is gathered and indexed “just in case” or “because it’s interesting.” When these data are combined, they illustrate the ABI principle of integration before exploitation and show how much understanding can be extracted from several data sets registered in time and space, or simply related to one another.

Emerging research in semantic trajectories describes a pattern of life as a sequence of semantic movements (e.g., “he went to the store”) as a natural language representation of large volumes of spatial data [2]. Some research seeks to cluster similar individuals based on their semantic trajectories rather than trying to correlate individual data points mathematically using correlation coefficients and spatial proximities [3].
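Converting raw position fixes into a semantic trajectory might look like the following sketch. The place table, radius, and sample fixes are invented; research systems use richer geofences and natural-language generation rather than a simple lookup.

```python
def semantic_trajectory(points, places, radius_deg=0.01):
    """Map raw (lat, lon) fixes to named places, collapsing repeats,
    to yield a sequence like ['home', 'store', 'home']."""
    labels = []
    for lat, lon in points:
        for name, (plat, plon) in places.items():
            if abs(lat - plat) <= radius_deg and abs(lon - plon) <= radius_deg:
                if not labels or labels[-1] != name:
                    labels.append(name)   # new semantic movement
                break
    return labels

places = {"home": (33.310, 44.400), "store": (33.330, 44.420)}
fixes = [(33.3101, 44.4001), (33.3102, 44.4002),
         (33.3299, 44.4199), (33.3100, 44.4000)]
trajectory = semantic_trajectory(fixes, places)  # ['home', 'store', 'home']
```

Clustering individuals by such label sequences (rather than by raw coordinates) is the comparison the cited research proposes.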

23.4 Summary

ABI data from digital devices, including self-reported activities and transactions, are increasingly becoming a part of analysis for homeland security, law enforcement, and intelligence activities. The proliferation of such digital data will only continue. Methods and techniques to integrate large volumes of this data in real time and analyze it quickly and cogently enough to make decisions are needed to realize the benefit these data provide. This chapter illustrated visual analytic techniques for discovering patterns in this data, but emergent techniques in “big data analytics” are being used by commercial companies to automatically mine and analyze this ubiquitous sensor data at network speed and massive scale.

24

ABI Now and Into the Future

Patrick Biltgen and David Gauthier

The creation of ABI was the proverbial “canary in the coal mine” for the intelligence community. Data is coming, and it will suffocate your analysts. Compounding the problem, newer asymmetric threats can afford to operate with little to no discernable signature, and traditional nation-based threats can afford to hide their signatures from our intelligence capabilities by employing expensive countermeasures. Since its introduction in the mid-2000s, ABI has grown from its roots as a method for geospatial multi-INT fusion for counterterrorism into a catch-all term for automation, advanced analytics, anticipatory analysis, pattern analysis, correlation, and intelligence integration. Each of the major intelligence agencies has adopted its own spin on the technique and is pursuing tradecraft and technology programs to implement the principles of ABI.

The core tenets of ABI become increasingly important in the integrated cyber/geospace domain and against the consequent threats emerging in the not-too-distant future.

24.1 An Era of Increasing Change

At the 2014 IATA AVSEC World conference, DNI Clapper said, “Every year, I’ve told Congress that we’re facing the most diverse array of threats I’ve seen in all my years in the intelligence business.”

On September 17, 2014, Clapper unveiled the 2014 National Intelligence Strategy (NIS)—for the first time, unclassified in its entirety—as the blueprint for IC priorities over the next four years. The NIS describes three overarching mission areas, strategic intelligence, current operations, and anticipatory intelligence, as well as four mission focus areas, cyberintelligence, counterterrorism, counterproliferation, and counterintelligence [3, p. 6]. For the first time, the cyberintelligence mission is recognized as co-equal to the traditional intelligence missions of counterproliferation and counterintelligence (as shown in Figure 24.1). The proliferation of state and non-state cyber actors and the exploitation of information technology is a dominant threat also recognized by the NIC in Global Trends 2030 [2].
Incoming NGA director Robert Cardillo said, “The nature of the adversary today is agile. It adapts. It moves and communicates in a way it didn’t before. So we must change the way we do business” [4]. ABI represents such a change. It is a fundamental shift in tradecraft and technology for intelligence integration and decision advantage that can be evolved from its counterterrorism roots to address a wider range of threats.

24.2 ABI and a Revolution in Geospatial Intelligence

The ABI revolution at NGA began with grassroots efforts in the early 2000s and evolved as increasing numbers of analysts moved from literal exploitation of images and video to nonliteral, deductive analysis of georeferenced metadata.

The importance of GEOINT to the fourth age of intelligence was underscored by the NGA’s next director, Robert Cardillo, who said “Every modern, local, regional and global challenge—climate change, future energy landscape and more—has geography at its heart”

NGA also released a vision for its analytic environment of 2020, noting that analysts in the future will need to “spend less time exploiting GEOINT primary sources and more time analyzing and understanding the activities, relationships, and patterns discovered from these sources”—implementation of the ABI tradecraft on worldwide intelligence issues.

Figure 24.4 shows the principle of data neutrality in the form of “normalized data services” and highlights the role for “normalcy baselines, activity models, and pattern-of-life analysis” as described in Chapters 14 and 15. OBP, as shown in the center of Figure 24.4, depicts a hierarchical model, perhaps using the graph analytic concepts of Chapter 15 and a nonlinear analytic process that captures knowledge to form judgments and answer intelligence questions. Chapter 16’s concept of models is shown in Figure 24.4 as “normalcy baselines, activity models, and pattern-of-life analysis.” As opposed to the traditional intelligence process that focuses on the delivery of serialized products, the output of the combined ABI/OBP process is an improved understanding of activities and networks.

Sapp also described the operational success story of the Fusion Analysis & Development Effort (FADE) and the Multi-Intelligence Spatial Temporal Tool-suite (MIST), which became operational in 2007 when NRO users recognized that they got more information out of spatiotemporal data when it was animated. NRO designed a “set of tools that help analysts find patterns in large quantities of data” [15]. MIST allows users to temporally and geospatially render millions of data elements, animate them, correlate multiple sources, and share linkages between data using web-based tools. “FADE is used by the Intelligence Community, Department of Defense, and the Department of Homeland Security as an integral part of intelligence cells” [17]. An integrated ABI/multi-INT framework is a core component of the NRO’s future ground architecture [18].

24.5 The Future of ABI in the Intelligence Community

In 1987, the television show Star Trek: The Next Generation, set in the 24th century, introduced the concept of the “communicator badge,” a multifunctional device worn on the right breast of the uniform. The badge represented an organizational identifier, geolocator, health monitoring system, environment sensor, tracker, universal translator, and communications device combined into a 4 cm by 5 cm package.

In megacities, tens of thousands of entities may occupy a single building, and thousands may move in and out of a single city block in a single day. The flow of objects and information in and out of the control volume of these buildings may be the only way to collect meaningful intelligence on humans and their networks because traditional remote sensing modalities will have insufficient resolution to disambiguate entities and their activities. Entity resolution will require thorough analysis of multiple proxies and their interaction with other entity proxies, especially in cases where significant operational security is employed. Absence of a signature of any kind in the digital storm will itself highlight entities of interest. Everything happens somewhere, but if nothing happens somewhere that is a tip to a discrete location of interest.

The methods described in this textbook will become increasingly core to the art of analysis. The customer service industry is already adopting these techniques to provide extreme personalization based upon personal identity and location. Connected data from everyday items networked via the Internet will enable hyperefficient flow of physical materials such as food, energy, and people inside complex geographic distribution systems. Business systems that are created to enable this hyperefficiency, often described as “smart grids” and the “Internet of Things”, will generate massive quantities of transaction data. This data, considered nontraditional by the intelligence community, will become a resource for analytic methods such as ABI to disambiguate serious threats from benign activities.

24.6 Conclusion

The Intelligence Community of 2030 will be composed entirely of digital natives born after 9/11 who seamlessly and comfortably navigate a complex data landscape that blurs the distinctions between geospace and cyberspace. The topics in this book will be taught in elementary school.

Our adversaries will have attended the same schools, and counter-ABI methods will be needed to deter, deny, and deceive adversaries who will use our digital dependence against us. Devices—those Internet-enabled self-aware transportation and communications technologies—will increasingly behave like humans. Even your washing machine will betray your pattern of life. LAUNDRY-INT will reveal your activities and transactions…where you’ve been and what you’ve done and when you’ve done it because each molecule of dirt is a proxy for someone or somewhere. Your clothes will know what they are doing, and they’ll even know when they are about to be put on.

In the not too distant future, the boundaries between CYBERINT, SIGINT, and HUMINT will blur, but the rich spatiotemporal canvas of GEOINT will still form the ubiquitous foundation upon which all sources of data are integrated.

25

Conclusion

In many disciplines in the early 21st century, a battle rages between traditionalists and revolutionaries. The former is often composed of artists with an intuitive feel for the business; the latter, of the data scientists and analysts who seek to reduce all of human existence to facts, figures, equations, and algorithms.

Activity-Based Intelligence: Principles and Applications introduces methods and technologies for an emergent field but also introduces a similar dichotomy between analysts and engineers. The authors, one of each, learned to appreciate that the story of ABI is not one of victory for either side. In The Signal and the Noise, statistician and analyst Nate Silver notes that in the case of Moneyball, the story of scouts versus statisticians was about learning how to blend two approaches to a difficult problem. Cultural differences between the groups are a great challenge to collaboration and forward progress, but the differing perspectives are also a great strength. In ABI, there is room for both the art and the science; in fact, both are required to solve the hardest problems in a new age of intelligence.

Intelligence analysts in some ways resemble Silver’s scouts. “We can’t explain how we know, but we know” is a phrase that would easily cross the lips of many an intelligence analyst. At times, analysts even have difficulty articulating post hoc the complete reasoning that led to a particular conclusion. This, undeniably, is a very human part of nature. In an incredibly difficult profession, fraught with deliberate attempts to deceive and confuse, analysts are trained from their first day on the job to trust their judgment. It is judgment that is oftentimes unscientific, despite attempts to apply structured analytic techniques (Heuer) or introduce Bayesian thinking (Silver). Complicating this picture is the fissure in the GEOINT analysis profession itself, between traditionalists often focused purely on overhead satellite imagery and revolutionaries, analysts concerned with all spatially referenced data. In both camps, however, intelligence analysis is about making judgments. Despite all the automated tools and algorithms used to process increasingly grotesque amounts of data, at the end of the day a single question falls to a single analyst: “What is your judgment?”

The ABI framework introduces three key principles of the artist frequently criticized by the engineer. First, it seems too simple to look at data in a spatial environment and learn something, but the analysts learned through experience that often the only common metadata is time and location—a great place to start. The second is the preference of correlation over causality. Stories of intelligence are not complete stories with a defined beginning, middle, and end. A causal chain is not needed if correlation focuses analysis and subsequent collection on a key area of interest or the missing clue of a great mystery. The third oft-debated point is the near-obsessive focus on the entity. Concepts like entity resolution, proxies, and incidental collection focus analysts on “getting to who.” This is familiar to leadership analysts, who have for many years focused on high-level personality profiles and psychological analyses. But unlike the focus of leadership analysis—understanding mindset and intent—ABI focuses instead on the most granular level of people problems: people’s behavior, whether those people are tank drivers, terrorists, or ordinary citizens. Through a detailed understanding of people’s movement in space-time, abductive reasoning unlocked possibilities as to the identity and intent of those same people. Ultimately, getting to who gets to the next step—sometimes “why,” sometimes “what’s next.”

Techniques like automated activity extraction, tracking, and data fusion help analysts wade through large, unwieldy data sets. While these techniques are sometimes synonymized with “ABI” or called “ABI enablers,” they are more appropriately termed “ABI enhancers.” There are no examples of such technologies solving intelligence problems entirely absent the analyst’s touch.

The engineer’s world is filled with gold-plated automated analytics and masterfully articulated rule sets for tipping and cueing, but it also comes with a caution. In Silver’s “sabermetrics,” baseball presents possibly the world’s richest data set, a wonderfully refined, well-documented, and above all complete set of data from which to draw conclusions. In baseball, the subjects of data collection do not attempt to deliberately hide their actions or prevent data from being collected on them. The world of intelligence, however, is very different. Intelligence services attempt to gather information on near-peer state adversaries, terrorist organizations, hacker collectives, and many others, all of whom make deliberate, concerted attempts to minimize their data footprint. In the world of state-focused intelligence this is referred to as D&D; in entity-focused intelligence this is called OPSEC. The data is dirty, deceptive, and incomplete. Algorithms alone cannot make sense of this data, crippled by unbounded uncertainty; they need human judgment to achieve their full potential.

NGA director Robert Cardillo, speaking to the Intelligence and National Security Alliance (INSA) in January 2015, stated, “TCPED is dead.” He went on to say that he was not sure whether there would be a single acronym to replace it. “ABI, SOM, and [OBP]—what we call the new way of thinking isn’t important. Changing the mindset is,” Cardillo stated. This acknowledgement properly placed ABI as one of a handful of new approaches in intelligence, with a specific methodology, specific technological needs, and a specific domain for application. Other methodologies will undoubtedly emerge as modern intelligence services adapt to a continually changing and ever more complicated world; these will complement and perhaps one day supplant ABI.
This book provides a deep exposition of the core methods of ABI and a broad survey of ABI enhancers that extend far beyond ABI methods alone. Understanding these principles will ultimately serve to make intelligence analysts more effective at their single goal: delivering information to aid policymakers and warfighters in making complex decisions in an uncertain world.

Notes from Big Data and the Innovation Cycle

Big Data and the Innovation Cycle

by Hau L. Lee

Production and Operations Management
Vol. 27, No. 9, September 2018, pp. 1642–1646

DOI 10.1111/poms.12845

Big data and the related methodologies could have great impacts on operations and supply chain management. Such impacts could be realized through innovations leveraging big data. The innovations can be described as: first, improving existing processes in operations through better tools and methods; second, expanding the value propositions through expansive usage or by incorporating data not available in the past; and third, allowing companies to create new processes or business models to serve customers in new ways. This study describes this framework of the innovation cycle.

Key words: big data; innovation cycle; supply chain re-engineering; business model innovations

  1. Introduction

Big data and the related methodologies to make use of it (data analytics and machine learning) have been viewed as digital technologies that could revolutionize operations and supply chain management in business and society at large. In a 2016 survey of over 1,000 chief supply chain officers and similar senior executives, SCM World found big data analytics at the top of the list of what these executives viewed as most disruptive to their supply chains.

We have to be realistic and recognize that the use of big data and the associated development of tools to make use of it is a journey. This journey is a cycle that technological innovations often have to go through, and at every stage of the cycle there are values and benefits, as well as investments that we have to make in order to unleash the power and values.

  2. The 3-S Model of the Innovation Cycle

New technologies often evolve in three stages.

The first one, which I called “Substitution,” is when the new technology is used in place of an existing one to conduct a business activity.

The second one, which I called “Scale,” is when the technology is applied to more items and more activities, more frequently and extensively.

The third is the “Structural Transformation” stage, when a new set of re-engineered activities can emerge with the new technology.

  3. The Substitution Stage of Big Data

The availability of big data can immediately allow new methods or processes to be developed to substitute existing ones for specific business activities. An obvious one is forecasting. Much deeper data analytics can now be used to replace previous forecasting methods, making full use of the availability of data. Such data were previously not easily accessible.

  4. The Scale Stage of Big Data

Back in 2011, Gartner identified the three Vs of big data: Volume, Velocity, and Variety (Sicular 2013). Rozados and Tjahjono (2014) gave a detailed account of the types of data that constituted the 3Vs. There, they described that most of the current usage of big data had been centered on core transactional data such as simple transactions, demand forecasts, and logistics activities.

Manenti (2017) gave the example of Transvoyant, which made use of one trillion events each day from sensors, satellites, radar, video cameras, and smart phones, coupled with machine learning, to produce highly accurate estimates of shipment arrival times. Such accurate estimates can help both shippers and shipping companies be proactive with their operations, instead of being caught by surprise by either early or late arrivals of shipments. Similarly, Manenti (2017) reported that IBM Watson Supply Chain used external data such as social media, newsfeeds, weather forecasts, and historical data to track and predict disruption and supplier evaluations.

  5. The Structural Transformation Stage of Big Data

Ultimately, companies can make use of big data to re-engineer their business processes, leading to different paths of creating new products and serving customers, and eventually, potentially, creating new business models.

Product designers will leverage data on fabric types, plastics, sensors, and most importantly, connectivity with customers. Real and direct customer needs are used to generate new products, identify winners, and then work with partners to produce the winners at scale.

Making use of data on items on web pages browsed by e-commerce shoppers, Sentient Technologies also created machine learning algorithms to do visual correlations of items and delivered purchasing recommendations (Manenti 2017). Again, a new product-generation process has been developed.

I believe there will be many more opportunities for big data to make similar disruptions to the

  6. Concluding Remarks

It is often the case that a new innovation requires many small-scale pilots to allow early users to gain familiarity as well as confidence, ascertaining the values that one can gain from the innovation. Such early usage has often been based on one particular business activity or one process of the supply chain.

 

Notes from Design Thinking and the Experience of Innovation

Design Thinking and the Experience of Innovation

by Barry Wylant

Design Issues: Volume 24, Number 2 Spring 2008

Due to geographic proximity and a linked focus, clusters are useful in enhancing the microeconomic capability of a given region. This occurs through improvements in the productivity of cluster members, which enable them to compete effectively in both regional and global markets. The geographic concentration allows for access to capabilities, information, expertise, and ideas. Clusters allow members to quickly perceive new buyer needs, and new technological, delivery, or operating possibilities, and thus to recognize and identify new opportunities far more readily than those residing outside the cluster. Pressure also exists within clusters: competition and peer pressure can drive an inherent need for participants to distinguish themselves, and proactively force the pursuit of innovation. Cluster participants also tend to contribute to local research institutes and universities, and may work together to develop local resources collectively and privately in a manner beyond the mandate of local governments and other organizations.

Categories of Innovation

An early writer on innovation, Joseph Schumpeter, distinguished it from invention, and saw it as a far more potent contributor to prosperity. In Schumpeter’s estimation, inventors only generated ideas, while innovation occurs as the entrepreneur is able to implement and introduce the new idea into a form of widespread use. He referred to this as the entrepreneur’s ability to “get things done,” and saw it as a definitive aspect of the innovation process.

Innovation Triggers

At the scale of the individual, certain conditions can be seen to enhance the pursuit of innovation and creativity. The psychologist Teresa Amabile proposes a componential framework for creativity. She identifies three main psychological components: domain-relevant skills, creativity-relevant skills, and task motivation.

Domain- relevance refers to areas of knowledge and skill embodied by an individual, such as factual knowledge and expertise in a given topic.

Creativity-relevant skills include the typical cognitive styles, work styles, and personality traits that influence how one approaches a particular problem-solving task. Creativity-relevant skills inform the way an individual may perceive, comprehend, navigate, manipulate, and otherwise consider issues and problems in novel and useful ways. Such skills are further influenced by personality traits such as self-discipline, the ability to entertain ambiguity and complexity, the capacity to delay gratification, an autonomous outlook on the world, and a willingness to take risks.

Task motivation addresses the motivational state in which the creative act is pursued. Intrinsic motivation is understood as those factors which exist within the individual’s own personal view. One can be seen as intrinsically motivated in a given task when engagement in that task is perceived as a meritorious end in itself. Extrinsic motivation, or external factors such as deadlines, payment, and aspects of supervision, is understood as mitigating factors external to the task itself, imposed externally on the person completing the task.

Towards the Idea in Innovation

New things can take on a variety of forms, such as a product, behavior, system, process, organization, or business model. At the heart of all these “new things” is an idea which is deemed meritorious and which, when acted upon, ultimately effects the innovation. To describe an idea as “innovative” suggests that it should be acted upon.

The Idea Experience

Imagination allows us to entertain the notion of the shape of a face evident in the outline of clouds, just as one might see a pattern in the arrangement of bricks on the façade of a building. The viewer cognitively matches the shape of the cloud or the arrangement of bricks to a previously understood concept, that of a particular animal or geometric form such as a circle.

Imaginative perception, as evident in the aesthetic experience of architecture, represents the genesis of an idea.

An idea’s constituent elements can be noted. These include a stimulus of some sort, that is, something that could arrest or hold the attention of a potential viewer. The examples above suggest something seen or physical; however, it could be otherwise, such as a musical note or a spoken word. Such stimuli exist in settings or contexts, such as a cloud in the sky, a brick in a wall, or a musical chord in a song. And, of course, there must be a viewer, someone who can then perceive and consider the stimulus. It is in the consideration of such stimuli that one can cognitively nest perception within a body of experience and learning, which then informs the comprehension of a particular stimulus and makes sense of it in an imaginative way.

The key to the interplay of these idea elements is the capacity of the stimulus to hold one’s attention and engender its consideration.

This ability to flexibly generate different imaginative responses to stimuli is open to influence from a variety of sources, anything that could then prompt one’s reconsideration of the stimulus.

Idea Elements

The idea elements described above can be seen to act within a cognitive mechanism that engenders an idea. Certain historical instances are useful in illustrating how these idea elements work in different ways.

The Considered Idea

The examples noted above echo Krippendorf’s discussion regarding product semantics. Krippendorf postulates that in viewing a given product, one imaginatively contextualizes the perception of that object as a means of comprehending its significance.24 In this, the viewer formulates ideas about the object, cognitively placing it into contexts that allow her to formulate an understanding of it. For instance, she might consider how a chair could look in her living room while seeing it in a store. Krippendorf notes that “Meaning is a cognitively constructed relationship. It selectively connects features of an object and features of its (real environment or imagined) context into a coherent unity.”25 The totality of an object’s meaning is thus seen in the summation of all the contexts an individual can potentially imagine for it.

One can arrive at scores of such ideas in the course of a day. Other ideas require more work. Often, the genesis of a useful idea requires that one work through the generation of sequential or chained ideas.

Nesting stimuli within contexts is informed to some degree by the conceptual space where that contextualization takes place. Psychologist Margaret Boden states: “The dimensions of a conceptual space are the organizing principles that unify and give structure to a given domain of thinking.”

The extensive knowledge base of a given profession or discipline (as evident in Amabile’s notion of domain-relevant skills) provides an example of such conceptual space, where there are accepted normative concepts, standards, and language that underlie the conduct of the discipline. Indeed, even language forms a type of conceptual space where the rules of spelling and grammar allow one to make sense of individual letters and words. As Krippendorf notes, the act of naming something immediately places it within a linguistic context, subsequently making it subject to the rules of language as part of the sense-making process.

The Idea in Innovation

The expression “thinking outside the box” is commonly used in reference to new ideas and innovation. This colloquialism reflects an intuitive understanding of the idea generation process: cognitive contextualization can be seen as a space (or box) for the consideration of a stimulus. Given the intent of the expression, thinking “inside the box” refers to a more pedestrian form of sense-making. Making sense of things via fresh contexts and/or stimuli is necessary to break out of the “box.”

Insights into the idea mechanism and the need to think outside of the box can inform the discussion on innovation. For instance, clusters allow individuals to work closely with others in contextually matched endeavors. In this, clusters play to chance and serve, through proximity and convenient connectivity, to increase the likelihood that one might consider a given stimulus within a related, yet new and useful, context. This, in turn, can engender a new idea, increasing the likelihood of any follow-through innovation.

To move beyond imitative and continuous innovations, greater originality is required in the generation of new ideas.

In brainstorming, the type of people included, the inherent structuring of the session, the suspension of judgment, and the use of various media to capture ideas, comments, and notions can all be seen as significant in the generation of new ideas. Brainstorming members who come from different backgrounds (sociologists, psychologists, designers, engineers, etc.) are able to draw upon differing creativity-relevant and domain-relevant skill sets.

Within this dynamic, the deferment of judgment is useful because it allows members to continue nesting new ideas as stimuli to subsequent ideas, a process which judgment might interrupt or divert. Further, requiring contributions to the discussion in a prescribed order can also stifle the free association between stimuli and useful contexts. According to Kelley, in an effective brainstorming session, ideas are not only verbally expressed but also captured via notes, sketches, quick models, etc.29 These media are useful because they play to people’s different capacities in their individual domain- or creativity-relevant skill sets. People will respond to sketches or notes, as stimuli, in differing and original ways, leading again to more unique ideas.

Introducing the New Idea

Amabile proposes a creative process in which components of creativity influence activities in different phases. One can see how the execution of domain- or creativity-relevant skills might occur in this, and how motivation can influence the creative result.

Amabile’s notion of creative outcome corresponds to the resulting design itself, which takes form through specification documents and, ultimately, in the launch of a product.

The application of Amabile’s theory is scalable to the type of tasks undertaken, whether they are small interim steps or the entire process. Even within the completion of a single sketch there are aspects of preparation, validation, and outcome, and so the completion of any interim step can be seen as an execution of the larger creative process in miniature. In turn, aspects of all the noted creative activities are apparent in each of the larger phases of Amabile’s overall process: responses will be generated and validated within the preparation phase, and there will be aspects of preparation in the subsequent phases.

In evaluating the sketch using placements, the designer can learn more about the extent of the design problem, his or her design intent, and the necessity for further exploration.

The continued drive to use one idea as a stimulus to a subsequent one is indicative of curiosity. A significant lesson that can be drawn from design thinking and the consideration of placements is that it is more a process of raising (several) good questions than one of finding the right answers. That one does not make an a priori commitment in the initial entertainment of a given placement means that it is used to learn more about the issues under consideration. Indeed, that one entertains a placement is indicative of the playful quality inherent in the design pursuit. Given the curiosity that drives such play, and the skill with which it is executed, an effectively broad range of issues can be raised and duly considered in the development and introduction of innovative new things.

Notes from Policy in the Data Age: Data Enablement for the Common Good

Policy in the Data Age: Data Enablement for the Common Good


By Karim Tadjeddine and Martin Lundqvist

Digital McKinsey August 2016

By virtue of their sheer size, visibility, and economic clout, national, state or provincial, and local governments are central to any societal transformation effort, in particular a digital transformation. Governments at all levels, which account for 30 to 50 percent of most countries’ GDP, exert profound influence not only by executing their own digital transformations but also by catalyzing digital transformations in other societal sectors.

The data revolution enables governments to radically improve quality of service

Government data initiatives are fueling a movement toward evidence-based policy making. Data enablement gives governments the tools they need to be more efficient, effective, and transparent, while enabling a significant change in public-policy performance management across the entire spectrum of government activities.

Governments need to launch data initiatives focused on:

  • better understanding public attitudes toward specific policies and identifying needed changes
  • developing and using undisputed KPIs that reveal the drivers of policy performance and allow the assignment of targets to policies during the design phase
  • measuring what is happening in the field by enabling civil servants, citizens, and business operators to provide fact-based information and feedback
  • evaluating policy performance, reconciling quantitative and qualitative data, and allowing the implementation of a continuous-improvement approach to policy making and execution
  • opening data in raw, crunched, and reusable formats.

The continuing and thoroughgoing evolution taking place in public service is supported by a true data revolution, fueled by two powerful trends.

First, the already low cost of computing power continues to plummet, as does the cost of data transportation, storage, and analysis. At the same time, software providers have rolled out analytics innovations such as machine learning, artificial intelligence, automated research, and visualization tools. These developments have made it possible for nearly every business and government to derive insights from large datasets.

Second, data volumes have increased exponentially. Every two years the volume of digitally generated data doubles, thanks to new sources of data and the adoption of digital tools.
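The doubling claim implies compound growth that can be checked with one line of arithmetic: doubling every two years is a factor of 2^(t/2) after t years, so roughly a 32-fold increase over a decade. A minimal sketch (the function name and parameters are illustrative, not from the report):

```python
def data_growth_factor(years: float, doubling_period: float = 2.0) -> float:
    """Growth factor when volume doubles every `doubling_period` years."""
    return 2 ** (years / doubling_period)

# Five doublings in ten years, ten doublings in twenty years.
print(data_growth_factor(10))  # 32.0
print(data_growth_factor(20))  # 1024.0
```

This compounding is why the report pairs the volume trend with plummeting storage and analysis costs: only exponentially cheaper infrastructure keeps exponentially growing data tractable.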

To capture the full benefit of data, states need to deliver on four key imperatives:

  • Gain the confidence and buy-in of citizens and public leaders
  • Conduct a skills-and-competencies revolution
  • Fully redesign the way states operate
  • Deploy enabling technologies that ensure interoperability and the ability to handle massive data flows

Because data-specific skills are scarce, governments need to draw on their internal capabilities to advance this revolution. Civil servants are intimately familiar with their department’s or agency’s challenges and idiosyncrasies, and they are ideally positioned to drive improvements—provided they are equipped with the necessary digital and analytical skills. These can be developed through rotational, training, and coaching programs, with content targeted to different populations. The US is building the capabilities of its employees through its DigitalGov University, which every year trains 10,000 federal civil servants from across the government in digital and data skills.

More generally, governments should train and incentivize civil servants to embed data discovery and analytics processes in their workplaces. That means that all civil servants’ end-user computing platforms must feature data discovery and analytics tools.

Governments must carry out a major cultural shift in order to break down silos and barriers. Such a transformed culture is characterized by a “test and learn” mind-set that believes “good enough is good to go.”

Cultures that facilitate governments’ data transformations are also characterized by open, collaborative, and inclusive operating models for data generation and data usage. They facilitate the participation of public agencies, private-sector companies, start-ups, and society as a whole.