What We Don’t Know Can Hurt Us
By Heather Mac Donald
City Journal | Tuesday, April 20, 2004


Immediately after 9/11, politicians and pundits slammed the Bush administration for failing to “connect the dots” foreshadowing the attack. What a difference a little amnesia makes. For two years now, left- and right-wing advocates have shot down nearly every proposal to use intelligence more effectively—to connect the dots—as an assault on “privacy.” Though their facts are often wrong and their arguments specious, they have come to dominate the national security debate virtually without challenge. The consequence has been devastating: just when the country should be unleashing its technological ingenuity to defend against future attacks, scientists stand irresolute, cowed into inaction.

“No one in the research and development community is putting together tools to make us safer,” says Lee Zeichner of Zeichner Risk Analytics, a risk consultancy firm, “because they’re afraid” of getting caught up in a privacy scandal. The chilling effect has been even stronger in government. “Many perfectly legal things that could be done with data aren’t being done, because people don’t want to lose their jobs,” says a computer security entrepreneur who, like many interviewed for this article, was too fearful of the advocates to let his name appear.

The privacy lobby ranges from leftish groups focused on electronic privacy, such as Silicon Valley’s Electronic Frontier Foundation and Washington, D.C.’s Electronic Privacy Information Center, to traditional right-wing libertarian organizations, such as Americans for Tax Reform, the Free Congress Foundation, and the Eagle Forum. Both sides see government as one step from tyranny. They equate privacy with absolute secrecy, and absolute secrecy with liberty, as technology analyst K. A. Taipale has observed. Quixotically, they seek such secrecy for electronic data, even though it is probably the least private thing about us, routinely traded to consumer marketers.

The privocrats only grudgingly acknowledge that terrorism exists, and they never concede that a gain in the public good may justify a concession in “privacy.” Their operating principle can only be formulated as: no use of computer data or technology anywhere at any time for national defense, if there’s the slightest possibility that a rogue use of that technology will offend someone’s sense of privacy. Consequently, they are pushing intelligence agencies back to a pre-9/11 mentality, when the mere potential for a privacy or civil liberties controversy trumped security concerns.

The right- and left-wing libertarians’ triumph began on November 14, 2002, with a New York Times column entitled YOU ARE A SUSPECT, by William Safire. Safire announced that the Defense Department was about to create “computer dossiers on 300 million Americans” that would contain “every purchase you make with a credit card, every magazine subscription you buy and medical prescription you fill, every Web site you visit and e-mail you send or receive, every academic grade you receive, every bank deposit you make, every trip you book and every event you attend. . . . ”

This “Orwellian scenario,” explained Safire, represented the “20-year dream” of former Reagan national security advisor John Poindexter. Poindexter’s conviction for misleading Congress about his role in the Iran-Contra scandal was a mere prelude, Safire said, to his present project to “snoop on every public and private act of every American.” That project, called Total Information Awareness (TIA) and run out of the Pentagon’s prestigious Defense Advanced Research Projects Agency (DARPA), was mere “weeks” from completion. The Senate must act now, Safire warned, to shut it down and stop Poindexter’s “sweeping theft of privacy rights.”

Bingo. Within hours of publication, the column set off a frenzy of editorializing about the Bush administration’s imminent police state. Within two months, the Senate had voted to ban deployment of TIA, though it provisionally allowed research to continue. By August, Poindexter had resigned; in September, Congress shut down DARPA’s research effort entirely.

Not bad for a tissue of fabrication. Safire’s depiction of TIA research as the megalomaniacal agenda of one controversial man completely distorted the project, which would have been run by intelligence analysts, not Poindexter. Many of its components predated Poindexter’s arrival at DARPA. Safire’s invocation of 300 million “dossiers” was equally fanciful. TIA’s realization was years, not “weeks,” away. But most egregiously, by not mentioning one word about terrorism, Safire omitted entirely TIA’s purpose, presenting it simply as a gratuitous effort to spy on Americans.

The goal of TIA was this: to prevent another attack on American soil by uncovering the electronic footprints terrorists leave as they plan and rehearse their assaults. Before they strike, terrorists must enter the country, receive funds, case their targets, buy supplies, get training, and send phone and e-mail messages. As the event nears, the pace of activity will quicken: cell members call to synchronize their schedules (the pre-attack “chatter” that surveillance agencies nearly always pick up); they make last-minute purchases; they confirm that the coast is clear. Many of those activities will leave a trail in electronic databases, which will register a spike in transactions right before an assault. TIA researchers hoped that cutting-edge computer analysis could find that trail in government intelligence files, whose exponential growth overwhelms the ability of analysts to understand what they contain. TIA developers would also test whether enriching that intelligence with certain commercial transaction records would increase the chances of detecting terror planning.
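
To make the idea concrete, here is a minimal sketch of the kind of transaction-spike detection this paragraph describes; the daily counts, window, and threshold are invented for the example and imply nothing about TIA’s actual algorithms.

```python
# A minimal sketch of transaction-spike detection, assuming hypothetical
# daily counts; TIA's actual methods were far more sophisticated.
import statistics

def find_spikes(daily_counts, window=7, threshold=3.0):
    """Flag days whose activity jumps well above the trailing average."""
    spikes = []
    for i in range(window, len(daily_counts)):
        recent = daily_counts[i - window:i]
        mean = statistics.mean(recent)
        spread = statistics.pstdev(recent) or 1.0  # guard against zero variance
        if (daily_counts[i] - mean) / spread > threshold:
            spikes.append(i)
    return spikes

# Hypothetical transaction counts for a watched cell, quiet until the final day:
counts = [2, 3, 2, 1, 3, 2, 2, 2, 3, 2, 14]
print(find_spikes(counts))  # -> [10]
```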

TIA would have been the most advanced application yet of a young technology called “data mining,” which attempts to make sense of the explosion of data in government, scientific, and commercial databases. Through complex algorithms, the technique can extract patterns or anomalies in data collections that a human analyst could not possibly discern. For example, Usama Fayyad, a pioneer in the field, used the method to solve a problem that had bedeviled astronomers for decades. He coaxed computers to sift through 30 years’ worth of sky images—encompassing 2 billion sky objects—and classify them as stars or galaxies on the basis of 40 variables, such as shape, size, and brightness. Since then, public health authorities have mined medical data to spot the outbreak of infectious disease (and are preparing to do the same for bioterror attacks), banks have detected money laundering, credit-card companies have found fraudulent credit-card purchases with the technique, and consumer-products firms target their advertising by analyzing what type of customer buys their products.
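
For readers unfamiliar with the technique, the sketch below shows the shape of a classification task like the one Fayyad tackled, using a decision tree; the features, training points, and labels are made up, and the real astronomy work used far richer data and models.

```python
# A toy version of the star/galaxy classification described above.
# All features and training examples are invented for illustration.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features per sky object: [elongation, angular size, brightness]
X_train = [[0.1, 1.2, 9.5], [0.2, 1.0, 9.8],   # stars: compact and bright
           [0.8, 4.5, 6.1], [0.7, 5.0, 5.8]]   # galaxies: extended and dim
y_train = ["star", "star", "galaxy", "galaxy"]

clf = DecisionTreeClassifier().fit(X_train, y_train)
print(clf.predict([[0.15, 1.1, 9.6]]))  # -> ['star']
```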

Fayyad, who founded Microsoft’s Data Mining Exploration Group, says that data mining is the key for navigating the new digital universe. What still needs to be done to hone the technique? I asked. “Everything,” he replied. “It’s like at the time of Columbus. There’s a great big ocean in front of us, and no one knows what’s on the other side. Right now, we don’t know what data is, or what it means. A few companies have waded into the ocean, but they really don’t know what the potential is.”

So hysterical were the attacks on TIA that followed Safire’s column that it was often hard to grasp the exact basis of the objections. This much was clear: data mining was a dangerous, unconstitutional technology, and the Bush administration had to be stopped from using it for any national security or law enforcement purpose. Senators Russ Feingold, Jon Corzine, and Ron Wyden introduced wildly technophobic bills banning data mining for national defense.

Without question, TIA represented a radical leap ahead in both data-mining technology and intelligence analysis, not surprising for a visionary group like DARPA, which created the Internet. The project would comb data from highly disparate activities, people, and associations, to predict exceedingly rare events. Had it used commercial data, it would have given intelligence agencies instantaneous access to a volume of information about the public that they had never before had. As with any public or private power, TIA’s capabilities could have been abused—which is why DARPA planned to build safeguards throughout the system. But it differed from existing law enforcement and intelligence techniques only in degree, not kind. Though the scale of data it would have made immediately available to government was unprecedented, the type of evidence was identical to what government had had legal access to for decades.

The swirl of rhetoric against TIA acknowledged none of these facts. What it appeared to assert—and it takes a real effort to discern coherent themes—is this:

•Don’t touch commercial data!

In addition to allowing the government to mine its own intelligence databases, TIA also proposed to see if rapid government access to commercial data banks would improve the chance of spotting terror planning. Consumer data has become such a hot commodity that outfits known as data aggregators buy entire data banks from companies like MasterCard and Marriott, mix them with publicly available data from phone books or title companies, say, and then sell access to their mega-database to marketers seeking a comprehensive view of the American consumer. Anyone with enough cash can find out what someone’s mortgage payments are, what restaurants he frequents, what debts he owes and where he banks, whether he subscribes to American Rifleman or Martha Stewart Living, and whether he’s more likely to visit Graceland or Greenland, among a thousand other features of his life.
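
Mechanically, what an aggregator does is join separately held records on a shared identifier, as in this deliberately simplified sketch; every name and field below is invented.

```python
# A simplified sketch of data aggregation: merge records from different
# sources keyed on the same consumer. All data is invented.
purchases = {"jdoe": {"magazine": "American Rifleman"}}
property_records = {"jdoe": {"mortgage_payment": 1450}}
phone_book = {"jdoe": {"city": "San Diego"}}

profiles = {}
for source in (purchases, property_records, phone_book):
    for person, fields in source.items():
        profiles.setdefault(person, {}).update(fields)

print(profiles["jdoe"])
# -> {'magazine': 'American Rifleman', 'mortgage_payment': 1450, 'city': 'San Diego'}
```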

Why DARPA’s interest in commercial repositories? Because that is where the terror tracks are. Even if members of sleeper cells are not in government intelligence databases, they are almost certainly in commercial databases. Acxiom, for example, the country’s largest data aggregator, has 20 billion customer records covering 96 percent of U.S. households. After 9/11, it discovered 11 of the 19 hijackers in its databases, Fortune magazine reports. The remaining eight were undoubtedly in other commercial banks: data aggregator Seisint, for example, found five of the terrorists in its repository.

Had a system been in place in 2001 for rapidly accessing commercial and government data, the FBI’s intelligence investigators could have located every single one of the 9/11 team once the bureau learned in August 2001 that al-Qaida operatives Khalid al-Midhar and Nawaq al-Hazmi, two of the 9/11 hijackers, were in the country. By using a process known as link analysis (simpler than data mining), investigators would have come up with the following picture: al-Midhar’s and al-Hazmi’s San Diego addresses were listed in the phone book under their own names, and they had shared those addresses with Mohamed Atta and Marwan al-Shehi (who flew United 175 into the South Tower of the World Trade Center). A fifth hijacker, Majed Moqed, shared a frequent-flier number with al-Midhar. Five other hijackers used the same phone number Atta had used to book his flight reservations to book theirs. The rest of the hijackers (who crashed in Pennsylvania) could have been tracked down from addresses and phones shared with hijacker Ahmed Alghamdi, a visa violator—had the INS bothered to locate him before the flight by running his name on its overstayer watch list.
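
Link analysis of this kind is computationally simple: starting from known names, repeatedly expand the set through shared attributes until it stops growing. The sketch below shows the mechanism; the records are invented placeholders, not the actual investigative data.

```python
# A minimal sketch of link analysis: expand from known suspects through
# shared addresses, phone numbers, and frequent-flier numbers.
# All records below are invented placeholders.
from collections import defaultdict

records = [
    ("al-Midhar", "addr:san-diego-1"), ("al-Hazmi", "addr:san-diego-1"),
    ("Atta", "addr:san-diego-1"), ("Atta", "phone:555-0100"),
    ("Moqed", "ffn:12345"), ("al-Midhar", "ffn:12345"),
    ("hijacker-X", "phone:555-0100"),
]

by_attr = defaultdict(set)
by_person = defaultdict(set)
for person, attr in records:
    by_attr[attr].add(person)
    by_person[person].add(attr)

def linked_to(seeds):
    """Expand a set of names through shared attributes until it stops growing."""
    found = set(seeds)
    frontier = set(seeds)
    while frontier:
        attrs = set().union(*(by_person[p] for p in frontier))
        new = set().union(*(by_attr[a] for a in attrs)) - found
        found |= new
        frontier = new
    return found

print(sorted(linked_to({"al-Hazmi", "al-Midhar"})))
# -> ['Atta', 'Moqed', 'al-Hazmi', 'al-Midhar', 'hijacker-X']
```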

Privacy advocates say that giving the government access to data held by commercial third parties violates constitutional privacy rights. They are wrong. The Supreme Court has repeatedly said that the government may obtain business and other records held by third parties without warrant or probable cause, because those records are no longer private. Law enforcement officials may subpoena records, or request that they be provided voluntarily, or may simply purchase data repositories on the market like any other player in the digital economy.

Nevertheless, despite the decidedly un-private state of commercial data, DARPA scientists were building privacy protections into TIA. They were testing whether the identity of individuals picked out by a terror search of a database could be concealed until sufficient evidence justified their revelation. Filters would automatically remove information irrelevant to the investigation. Moreover, only authorized users could access the commercial data, with their searches recorded for subsequent audits. Anyone abusing the system to look up the credit history of his ex’s new husband, for example, would be punished. Safire’s incendiary column said nothing about any of this, of course.
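
The flavor of those safeguards can be sketched in a few lines; the hashing scheme, access list, and audit log below are assumptions for illustration, not DARPA’s actual design.

```python
# A minimal sketch of audited, pseudonymized querying: searches return
# pseudonyms rather than names, only authorized users may run them, and
# every lookup is logged. The scheme is an assumption for illustration.
import hashlib, time

AUTHORIZED = {"analyst-7"}
audit_log = []

def pseudonym(name):
    return hashlib.sha256(name.encode()).hexdigest()[:10]

def query(user, test, database):
    if user not in AUTHORIZED:
        raise PermissionError("unauthorized user")
    audit_log.append((time.time(), user))  # every search is recorded for audit
    # Identities stay sealed until sufficient evidence justifies unsealing.
    return [pseudonym(name) for name, record in database.items() if test(record)]

db = {"John Q. Public": {"fertilizer_lbs": 5},
      "Person A": {"fertilizer_lbs": 2000}}

print(query("analyst-7", lambda r: r["fertilizer_lbs"] > 500, db))
print(len(audit_log))  # -> 1
```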

•No pattern analysis allowed.

Going beyond link analysis from known suspects, TIA inventors hoped to spot suspicious patterns in data even before they could identify any particular suspect. For example, on 9/11, the airline-passenger profiling system flagged as suspicious nine of the 19 hijackers as they attempted to board, including all five terrorists holding seats on American Airlines 77, which flew into the Pentagon; three of the hijackers on American Flight 11; and one hijacker on United Flight 93, which crashed in Pennsylvania. Security procedures at the time prohibited airport personnel from interviewing flagged passengers or hand-searching their carry-on luggage—a mad capitulation to the civil liberties and Arab lobbies.

Instead, a machine would have scanned the checked luggage of the nine flagged hijackers for explosives, and an airport agent would have confirmed that they actually boarded with their bags. But had a pattern-recognition system been in place—and assuming that five flagged passengers on one flight was an abnormal pattern—authorities might have investigated further and noticed that the five flagged passengers were all Middle Eastern men. Link analysis would then have shown extensive connections among them. Had security agents overcome their fear of a racial profiling charge, they might have interviewed the five and found troubling inconsistencies in their stories, meriting further inquiries.

The “privacy community,” as they like to call themselves, will have none of this. They claim that looking for patterns of suspicious behavior before having any particular suspect in mind is unconstitutional. In conventional police work, they say, the government starts from a known suspect or actual crime. But in data mining, the government may begin from a suspicious pattern spotted in data—the purchase of large amounts of bomb-making chemicals, say, together with a visa overstay, extended tours in Afghanistan in 1999 and 2000, and the rental of a Ryder truck—before having any suspects or actual crimes in view. This latter technique, the ACLU and other civil libertarians say, makes government investigations potentially limitless. “Pattern-matching investigates everyone,” complains Priscilla Regan, a government professor at George Mason University, “and most people who are investigated are innocent.”

But pattern analysis, as distinct from “particularized suspicion,” has always been integral to crime fighting. Experienced FBI agents and police officers often try to predict future events—the site of the next bank robbery, say—by analyzing previous crimes and figuring out the crooks’ modus operandi. New York engineered the greatest crime drop in its history by using its Compstat computer system to spot crime patterns. What a cop on the beat may observe as suspicious—furtively walking back and forth in front of a jewelry store—is based on generalizations from previous heists. Whether a cop observing a suspicious pattern of behavior can accost the suspect or search him depends on the strength of the evidence suggesting criminal intent. But the situation is no different when the behavior in question is “observed” in a database: whether to investigate the individual further rests on the usual standard-of-proof question that all law enforcement officers face, as K. A. Taipale has argued.
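
Compstat-style pattern spotting is, at bottom, counting incidents and flagging concentrations, as in this toy sketch with invented data:

```python
# A toy sketch of spotting crime concentrations, Compstat-style.
# Incident data is invented.
from collections import Counter

incidents = [("robbery", "precinct-14"), ("robbery", "precinct-14"),
             ("assault", "precinct-3"), ("robbery", "precinct-14")]

counts = Counter(place for _, place in incidents)
hotspots = [place for place, n in counts.items() if n >= 3]
print(hotspots)  # -> ['precinct-14']: deploy extra officers there
```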

Opponents of data mining demand that evidence in databases be granted a degree of anonymity and inaccessibility far beyond other types of evidence, but they offer no justification for why an FBI agent should treat a sales record in a database differently from a sales receipt in a merchant’s drawer or from his own observation of the sales transaction. By their reasoning, in fact, police departments should not send extra officers to high-crime neighborhoods, for that would be to deploy them on the basis of patterns and predictions, not “individualized suspicion,” and most of the people they would observe would be innocent. Finally, having your data instantaneously scanned by a computer is not tantamount to being “surveilled.” Assuming that the entries were not flagged for further human investigation, no one has “investigated” you. The computer has no idea what those zeros and ones represent. In fact, computer searching of data protects privacy more than a Sam Spade–style hand search through title deeds, say, or hotel registries, which sentiently peruses records of the innocent as well as the guilty.

•It will work too well—or not at all!

It’s okay for Home Depot to buy my digitized credit-card receipts, says the privacy “community,” to see whether I would be a soft touch for a riding mower. But if government agents want to see who has purchased explosive-level quantities of fertilizer, they should go store to store, checking credit-card receipts. Data-mining opponents would deny terror investigators a technology in common use in the commercial sector, simply because they think government should be kept inefficient to limit its power, a Luddite’s approach to public policy. Remember: data mining would only speed government access to records to which it is already legally entitled. When a technology offers possibly huge public benefits, the rational answer to the fear of its abuse is to use technology to build in safeguards.

Consistency being no constraint, privacy advocates were simultaneously advancing the equal and opposite argument that TIA was just a pipe dream, unlikely to accomplish its stated goals. But that is just what the research was trying to find out; to cut off an experiment likely to yield at the very least critical computing breakthroughs is benighted.

Throughout 2003, the drum roll against TIA continued in congressional hearings and press conferences. Then, in August 2003, an unsettling DARPA anti-terror notion came to light—a projected futures market in predicting destabilizing geopolitical events, such as wars, assassinations, and terror attacks. Since markets are highly efficient at aggregating information, went the idea—still just blue-sky theorizing—a bad-news predictions market would give intelligence analysts access to knowledge about the world that they might otherwise miss. After all, political elections markets—really little more than highly formal betting operations—have proven far better at predicting vote outcomes than pundits. The goal of DARPA’s FutureMap was to avert human destruction, but understandably it was almost universally condemned as incentivizing death, should terrorists infiltrate the market and use it as an insurance policy. Poindexter did not lead the project, but within 48 hours, he was forced to resign. Weeks later, Congress shut down his entire DARPA office and, with it, TIA research. Ecstatic privocrats danced on TIA’s grave.

Poindexter’s resignation reveals how little the country’s priorities have changed since the al-Qaida attacks: he is the only government employee to be fired for national security reasons since 9/11. The message could not be clearer: no one need fear for his job for failing to protect the nation. But embark on a lifesaving endeavor that may, years in the future, if abused, push the envelope of privacy protections, and you’re gone, as Stewart Baker, former general counsel to the National Security Agency, recently testified. The FBI bureaucrats who decided in May 2001 that investigating al-Qaida affiliates in flight schools would constitute racial profiling still draw their government checks; the Justice Department functionaries who kept FBI agents in New York from hunting for ringleaders al-Midhar and al-Hazmi still enjoy their supervisory perks; the FBI paper pushers who refused on bogus legal grounds to let agents in Minnesota search Zacarias Moussaoui’s possessions still opine on the law—and, of course, CIA director George Tenet and FBI director Robert Mueller, who presided over the worst intelligence failure in American history, are still in place.

Watching Poindexter’s and TIA’s demolition, the computing world rationally concluded: let’s not go there. “People and companies won’t enter into technology research [involving national security computing] because of the privacy debates,” says a former FBI agent and chief privacy officer for a major electronic information firm. Many scientists shake their heads at the overreaction. Usama Fayyad says: “If I were worrying about defense, data mining is such an obvious target for research. It’s an area where we must maintain an advantage.”

The national security carnage was just beginning. In the wake of Safire’s success in mortally wounding TIA, his New York Times colleague Maureen Dowd decided to join the privacy crusade, objecting, with characteristically sneering know-nothingism, to another DARPA national defense project: Human Identity at a Distance. The project’s goal was a video device that could recognize human beings at 500 feet from their gait and other biometric features. Intended users: U.S. embassies and other critical government installations. Let’s say someone had circled the U.S. embassy in Morocco three days in a row, particularly examining the gate. If a different guard were on duty each time, no one would recognize the repeat visits. But the camera would connect the dots and would alert authorities. Moreover, if the possible terrorist had been recorded previously in the Sudan, say, the device might also identify him from his walk or other features.

In May 2003, Dowd derided the project as something out of Monty Python. She portrayed it as the personal project of John Poindexter, with Poindexter appearing this time as a leering Peeping Tom. “I don’t want John Poindexter tracking my body part contours,” she sniffed, with true baby-boomer self-obsession, incapable of imagining any public issues outside herself. But even if such a camera did photograph her, it would record only what any passerby could see. There would be no privacy violation whatsoever.

Nevertheless, Human Identity at a Distance is no more, terminated by DARPA, still reeling from the public-relations disaster of TIA.

The following month, Safire himself turned his guns on yet another DARPA project—with equally lethal success. His column, DEAR DARPA DIARY, was arguably even more fanciful than YOU ARE A SUSPECT. It targeted LifeLog, a highly ambitious project to teach computers to record, analyze, and learn from a user’s experience. A camera hooked up to a small computer would record the user’s activities; the user could also input documents or dictate memos. This cyber-diary would then analyze these materials for subsequent recall.

The ultimate user LifeLog scientists envisioned was a battlefield commander. Let’s say a special-forces unit, having fought Taliban holdouts in the caves of Afghanistan, has returned to base to be debriefed so that the next platoon will have a complete sense of what happened and what awaits them in the area. Worn by soldiers, LifeLog would supplement their fallible memory and perceptions with a well-organized record of the battle.

Out of this reasonable, if speculative, idea—not so dissimilar from the use of artificial intelligence in medical diagnostics—Safire concocted a completely fictional “national memory bank,” run by—you guessed it—Big Brother Poindexter. According to Safire, civilian LifeLog “user-spies,” equipped with hidden wires, would secretly “snoop” on . . . everyone. The contents of every LifeLog would then be dumped into a “national memory bank,” which would have “undeniable recall of everything you would just as soon forget.” Poindexter would be squirreled away in the “basement of the Pentagon,” sifting through the bank of secrets.

DARPA had no stomach for another privacy controversy and killed the project. Battlefield leaders will just have to make do. But as a DARPA scientist observes: “Just because we can’t pursue this technology doesn’t mean the Chinese will stop. Right now, we have technological superiority when we go into battle. We know what’s going on better than our enemies because of smart weapons and sensors. In ten or 20 years, though, we could lose that edge.”

The privacy vigilantes now have in their sights an airline-passenger screening system and an interstate network to share law enforcement and intelligence information. Both projects could go down any minute. As to whether that would be in the national interest, readers should ask themselves if they would be happy to fly seated next to Mohamed Atta. If yes, they needn’t worry about the cancellation of the Computer Assisted Passenger Prescreening System (CAPPS II). And if they don’t care whether police can track down a child abductor within minutes of his crime, then they shouldn’t care about the crippling of the Multistate Anti-Terrorism Information Exchange, either.

But those who want terrorists kept off planes will find the privacy crusade against CAPPS II worrisome indeed. Responding to a November 2001 congressional order to develop such a system, the Transportation Security Administration (TSA) came up with a two-step process: verifying the passenger’s identity and assessing his risk. When making reservations, an airline would collect fliers’ names, addresses, birth dates, and phone numbers, and send them to the TSA for forwarding to a commercial data aggregator. Checking the information against its own databases, the data aggregator would send identity authentication scores back to the TSA: high if the passenger’s information can be verified, low if no commercial database has a match. Such a system is not immune from the threat of identity theft, of course, but additional safeguards can be added later.

Next, the TSA would check the passenger’s proffered identity against government intelligence databases. That information, combined with the identity authentication scores, would divide passengers into acceptable risks (green), unknown risks (yellow), and unacceptable ones (red). At the airport, green and yellow passengers would receive boarding passes, but the yellows would get rigorously screened. The reds would have to wait for law enforcement agents to determine if they could proceed.
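
In code, the triage reduces to something like the following sketch; the scoring rule and thresholds are invented, since the actual CAPPS II logic was never published at this level of detail.

```python
# A minimal sketch of the two-step CAPPS II triage described above.
# The scoring rule and thresholds are invented for illustration.
def screen(passenger, commercial_db, intel_watchlist):
    # Step 1: identity authentication score from the data aggregator.
    id_score = 1.0 if passenger["name"] in commercial_db else 0.0
    # Step 2: check the proffered identity against intelligence databases.
    on_watchlist = passenger["name"] in intel_watchlist

    if on_watchlist:
        return "red"      # hold for law enforcement
    if id_score < 0.5:
        return "yellow"   # board, but with rigorous screening
    return "green"        # ordinary screening

print(screen({"name": "Jane Roe"}, {"Jane Roe"}, set()))            # -> green
print(screen({"name": "Unknown P"}, {"Jane Roe"}, set()))           # -> yellow
print(screen({"name": "Bad Actor"}, {"Bad Actor"}, {"Bad Actor"}))  # -> red
```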

The crusade against CAPPS II is a textbook case of privacy charlatanism. The formula: 1. identify the hated program with TIA; 2. mischaracterize its details; 3. charge it with specious privacy and other rights abuses; and 4. provide no reasonable alternatives.

Nowadays, if you can get the words “Total Information Awareness” and the unwanted program’s name in the same sentence, you’re halfway to demolishing it. The Electronic Privacy Information Center (EPIC) hits all the right notes in its description of CAPPS II: “CAPPS II shares many of the same elements of the Defense Department’s ‘Total Information Awareness’ program, which profiles innocent people. . . . [It relies] on experimental data-mining algorithms to find patterns in the government and commercial databases available on individuals.”

Nothing in that statement is true. CAPPS II has nothing to do with data mining; it is a two-step database query system. It does not use “experimental data-mining algorithms to find patterns” in databases; it merely checks to see if any given subject is present in the applicable database.

The falsehoods pile up higher still. In March 2003, the Electronic Privacy Information Center’s Cédric Laurant told the European Parliament why it should refuse to cooperate with CAPPS II. The system would result in “widespread spying,” he said, by giving TSA “access to financial and transactional data, such as credit reports and records of purchases, confidential business records, [etc.].” Not so: TSA would see none of the commercial data that the data aggregators would use to verify a passenger’s identity; all the data would stay in the aggregator’s database.

The ACLU claims that CAPPS II would likely discriminate against minorities by using credit scores to rank a flier’s risk; such scores, according to the ACLU, have a “well-documented bias against minorities.” But even the ACLU admits that TSA has denied any intention of using credit scores to assess risk. Still, the ACLU contends, nothing the government has said so far actually “bars” it from doing so.



Heather Mac Donald is a contributing editor of City Journal and the John M. Olin Fellow at the Manhattan Institute. Her latest book, coauthored with Victor Davis Hanson and Steven Malanga, is The Immigration Solution.

