Biometrics And Airport Security

February 17, 2003 - Backgrounder

PrivacyActivism staff counsel Linda Ackerman spoke on biometrics and airport security at the Transportation Research Board (TRB) panel on Personal Security in Washington, D.C., on January 13, 2003.

Introduction

I’m not asking this facetiously, but since many of you flew to this meeting and probably fly fairly often, I’m wondering: do you feel more secure now than you did six months ago? Also, do you feel that any of your civil rights and liberties are being stepped on, or do you feel that any measures taken in the name of security are OK?

I’m going to talk about some of the biometric security systems that are either being tested at airports or are being considered for possible testing. For each one, I’ll discuss the system, the problems it presents, whether it can be fooled, and what I think some of the legal implications of using it are.

First of all, what is biometrics? It’s pattern recognition—the pattern of your features—fingerprints, hand, face, voice, eyeball—and probably, inevitably, your DNA. A biometric system extracts a certain preordained number of specific features from raw data, discards everything else, and then encodes those features to produce a template or sample. The function of biometrics is to authenticate an identity that’s already stored in a database. A biometric cannot IDENTIFY anyone on its own—there must be information in a database to match the fingerprint or face scan against.

Questions for analyzing security

How do we analyze the viability of any security measure, including biometrics? I think Bruce Schneier of Counterpane asks the right questions:

* What problem does a security implementation solve?
* How well does it solve the problem?
* What new problems does it create?
* What are its economic and social costs?
* Given the above, is it worth the costs?

Biometric solutions are being heavily promoted by the companies that make them.
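As a toy illustration of the enroll-then-verify pattern described above (a system encodes a fixed set of features into a template, then can only authenticate against identities already enrolled), here is a minimal sketch in Python. The "feature extraction" here is just character counting, a hypothetical stand-in for a real biometric algorithm; the function names and tolerance are likewise invented for illustration.

```python
# Minimal sketch of the enroll/verify pattern. The feature extraction
# (character counts) is a hypothetical stand-in, NOT a real biometric.

def extract_template(raw_sample: str) -> list[int]:
    """Keep a fixed, preordained set of features; discard everything else."""
    return [raw_sample.count(c) for c in "abcd"]

database: dict[str, list[int]] = {}

def enroll(identity: str, raw_sample: str) -> None:
    """Store the control sample's template under a known identity."""
    database[identity] = extract_template(raw_sample)

def verify(claimed_identity: str, raw_sample: str, tolerance: int = 1) -> bool:
    """Authenticate ONLY against an identity already in the database."""
    stored = database.get(claimed_identity)
    if stored is None:
        # No enrollment record: a biometric alone identifies no one.
        return False
    fresh = extract_template(raw_sample)
    distance = sum(abs(a - b) for a, b in zip(stored, fresh))
    return distance <= tolerance

enroll("alice", "abacada")
print(verify("alice", "abacada"))  # True: fresh template matches stored one
print(verify("bob", "abacada"))    # False: nobody enrolled as "bob"
```

The point of the sketch is the failure mode the talk keeps returning to: `verify` can say nothing at all about anyone who was never enrolled.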
But as far as large-scale security operations go—like airport security—biometric systems appear to be stalled because they’ve failed so badly in early tests.

A priori problems with biometrics

There are a number of problems with biometric identifying technologies right out front:

* they all present enrollment problems—that is, getting the control sample into the database. It’s especially problematic to enroll the bad guys you want to screen against
* the systems can be fooled—some of them very easily
* the databases in which any bioidentifiers are stored are always vulnerable
* they raise questions about how many constitutional rights we can be required to give up, or are willing to give up, to get on a plane

In other words, to go back to Schneier, biometric security implementations fall critically short of solving the problem of identifying bad actors, they create new problems, and, in my view, the social costs of becoming a surveillance society that banks ever more personal information about its citizens are prohibitive.

Current status of biometric security applications

How is biometric security doing so far in the real world?

Facial recognition: Visionics facial recognition software was tested last May at Palm Beach airport. It failed to correctly identify enrolled airport employees 53% of the time. It misidentified 1% of people as being on a "wanted" list, which adds up to 100 people wrongly stopped per day at a flight gate that handles 10,000 people a day. The false negative rate was 50%, which means that 50 percent of "wanted" individuals got through. Lighting variations, and people moving their heads between 15 and 30 degrees away from the camera’s focus, significantly affect the system. And you need a database of known terrorists to match against; the problem with that is that we don’t know who they are.
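The Palm Beach figures above can be turned into simple arithmetic showing why even a 1% false positive rate swamps a checkpoint. The number of genuinely "wanted" travelers per day (10) is a hypothetical assumption added for illustration; the rates and passenger volume are the ones reported in the test.

```python
# Figures reported from the May 2002 Palm Beach airport test.
passengers_per_day = 10_000
false_positive_rate = 0.01   # 1% of ordinary travelers wrongly flagged
false_negative_rate = 0.50   # 50% of "wanted" individuals missed

wrongly_stopped = int(passengers_per_day * false_positive_rate)
print(f"{wrongly_stopped} innocent travelers stopped per day")

# Hypothetical: suppose 10 genuinely "wanted" people pass through that day.
wanted = 10
caught = int(wanted * (1 - false_negative_rate))
flagged_total = wrongly_stopped + caught
print(f"{caught} of {wanted} wanted individuals caught")
print(f"only {caught}/{flagged_total} alarms "
      f"({caught / flagged_total:.1%}) point at an actual wanted person")
```

Under these assumptions, the guards see roughly 105 alarms a day, of which about 5 are real: fewer than one alarm in twenty is worth anything, and half the real targets still walk through.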
And on the overall reliability we can expect from facial recognition, I quote James Wayman, Director of the National Biometrics Test Center at San Jose State. He said that if a photo had been taken of a non-disguised Osama bin Laden entering an airport, the chance of identifying him with a facial recognition system would have been about 60%.

Visionics says there will always be a compromise between falsely accepting someone as OK and falsely rejecting them. Either way, it doesn’t solve the problem very well; it creates serious problems of its own, whether they’re false positives or false negatives; it runs to unknown economic costs, including lawsuits—unless Congress decides to relieve everyone involved of liability—and it raises First and Fourth Amendment issues.

Facial recognition systems are also easy to fool—and I recommend reading the report on a series of tests done on various biometric devices by a German group. All my source information will be on the web at www.PrivacyActivism.org. One of the security measures built into facial recognition is liveness: is this a live person whose face is being scanned? This group mimicked liveness with a digital video of a face on a laptop screen. That wouldn’t get past a human, but it was enough to convince a scanner.

Jim Wayman says that at best, facial recognition systems limit the range of possible matches to a third of all possible positive match candidates—that’s bin Laden and his 60% chance of getting past the system. Is that success rate worth the cost of even a 1% rate of misidentification—especially when there’s no administrative or legal remedy for misidentification?

Fingerprints: What about fingerprints? This is the favored biometric because it’s the most familiar, and huge databases already exist and are available for matching. The FBI alone has about 70 million prints on record. In fact, with only about 40 characteristics to compare, fingerprints are the least reliable biometric and seem to be the easiest to fool.
Enrollment in the system is also problematic, because a single individual’s prints are highly variable. Older people and children produce much less distinct fingerprints. Fingerprints change significantly over a six-week period and are subject to environmental degradation. They’re affected by chemicals—for example, working regularly with cleaning or hair coloring products can obliterate prints. You need two prints for an accurate match—the error rate with a single print is an unacceptable 2%—but again, according to Wayman, two fingerprints taken from the same person in a row are never the same.

There’s also a story that Bruce Schneier reported last May in his email newsletter Crypto-Gram, "Fun with Fingerprint Readers," about gummi prints. A Japanese cryptographer made fake prints from the same gelatin used for Gummi bears. The fake print fools fingerprint detectors about 80% of the time—even when they’re watched by guards. The same cryptographer also captured latent fingerprints left on glass and, through a complicated but doable process, made a gelatin finger using the print that again fooled fingerprint detectors about 80% of the time. I think we can dismiss fingerprints—certainly if they’re used alone. They’re too easily fooled to solve any security problem more serious than logging on to your home computer.

Iris scans: What about iris scans? This is where a camera examines your iris for identifying characteristics. It’s considered a reliable identifier because it has 266 characteristics to verify—far more than any other biometric—and they don’t change from the time you’re a year old. The problem is that the scan requires you to stare into a camera, which makes it difficult to enroll people voluntarily. For checking after enrollment, variable lighting conditions affect accuracy.
Also, there's no storehouse of iris prints such as there is for fingerprints, so even if you can enforce the “enrollment” process, you run into the basic biometric problem when you’re looking at large-scale security: the systems can only authenticate an ID by comparing newly scanned information with information already stored in a database. You could build a database quickly enough by installing iris scanners in major airports, but why would any terrorist voluntarily enroll himself in an iris scanner database? Also, while iris scans are good, they don’t have a zero error rate, so there would still be false positives and false negatives—which pose social and legal costs. Finally, iris scanning systems are the most expensive of the biometric technologies currently available, so they would require extensive testing for cost-effectiveness.

The German group also fooled an iris scanner, though in a way that could work only without human monitoring. They created a print of a digitally captured iris, cut holes for the pupils in order to simulate liveness, and got by undetected. So iris scans could solve the problem they’re intended to solve, but most likely only on a limited scale, with a library of scans of people permitted to access a facility—for example, as part of a set of trusted traveler identifiers.

Body scans: Then there was the Fly Naked scan, tested last August as a voluntary, speedier alternative through security at Orlando Airport. This was a digital strip search, done by an X-ray that penetrates the clothing but not the skin and reveals the body exactly as it is. Porno security. There was enough instant outrage over the intrusiveness of this one that the manufacturer went back to the drawing board to redesign it so that no sexual characteristics were displayed. I find it surreal that such a system was ever tested at all.
Whatever problems this system solves, they’re nothing compared to the problems it creates—including public outrage and what seems to me a clear violation of whatever is left of Fourth Amendment protection against unreasonable search and seizure—here, of an image of your naked body, which was stored and later shown on TV when the story was reported.

Analysis

So those are some of the current biometric solutions being either tested or proposed for airport security. Let’s put them to Schneier’s test:

1. Do they solve the problem they’re intended to solve—that is, making airports and planes secure against terrorist acts? How can they, when in order to verify that someone is a terrorist, he’d have to be enrolled in the database being checked against? It’s been suggested that one reason for the “special registration” roundup of Middle Eastern men is to create a comparison database of fingerprints and photographs of possible terrorists. But again, why would a terrorist show up to register with the INS?

2. How well do biometric systems solve the problem? If there are no biometric samples of terrorists’ photographs or irises in the database, they don’t solve the problem at all. And that remains true even if they’re coupled with other forms of identification, as they would have to be anyway—that is, with another biometric or with some state-issued form of ID.

3. What problems do biometrics create? First of all, they create a false sense of security that they’re actually doing the job they’re intended to do—which means that other methods, perhaps lower-tech or less invasive methods that could provide better security, will not be tried. I don’t mean to say that biometrics don’t work in some situations. They’d probably work adequately where you have a limited population that’s entitled to access something—a facility or a financial database, for example—and it’s possible to enroll the entire population that should have access.
Second, the whole process of collecting and storing what I consider personal, non-public physical data, such as the pattern of my iris, will become routine and unquestioned. There’s an article by Philip Agre, titled "Your Face Is Not a Bar Code," which argues against facial recognition as the basis of pervasive tracking of individuals by government and commercial interests—a result that fundamentally alters a democratic society that depends on individual privacy and anonymity, on the ability to think and act freely—for better or for worse—without the profoundly chilling knowledge that someone, or something, knows everything you do.

Third, they create immense databases, and NO DATABASE IS SECURE—I think of Total Information Awareness as a neon sign that says, "Hack me." Just in the last month, there have been two major examples of what PrivacyActivism calls Data Valdez: the sale of 30,000 passwords to consumer credit information by an employee of Teledata Communications, and the theft of hard drives containing the medical records—including SSNs and credit card information—of 500,000 military personnel and their families.

Then there’s the matter of function creep, or the slippery slope: once digitized bioidentifiers of the entire flying population—or any population—are collected in a database, what else could they be used for? Nabbing people delinquent in paying taxes, child support—parking tickets? Or just plain physical tracking.

And finally, the most radical consequence of total, biometrics-enhanced surveillance: our rights of privacy; our freedom of thought, speech, and association; the ability to move about freely; and our right to be free from unreasonable searches and seizures—undertaken with at least a figment of probable cause and the presumption that we are innocent until proven guilty—will no longer have meaning, in law or in our society.

4. What are the economic costs of widespread implementation of biometrics?
We don’t know yet, but here’s one example of a biometric ID system that was abandoned because it cost more to implement than using it saved. This is also from James Wayman, discussing INSPASS—the INS Passenger Accelerated Service System. Starting in 1993, INSPASS kiosks that measure hand geometry were installed in eight international airports in the U.S. The digital measuring machine could confirm the identity of those already enrolled in the INSPASS system. Wayman thought this was a great system: it took half an hour to enroll his hand, but there was no charge to him, and once he was enrolled, he never had to wait in an immigration line again.

The system was working at San Francisco International until the new international terminal opened in December 2000. The INS moved the machines to the new terminal but never turned them on. Wayman believes that the costs of enrolling his hand—as well as the computing overhead of checking it against an entire database—were greater than whatever the INS saved in not taking up interviewers' time to process people re-entering the U.S. He’s probably right: the previous March, the DOJ’s Inspector General reported that the benefits of INSPASS were "insignificant."

Wayman says that at the very least, this shows that such systems are very difficult to implement, and that Congress should keep that in mind with regard to the Enhanced Border Security Act, which calls for the INS to take a biometric identifier, like a fingerprint or a face scan, from every alien entering the U.S. by 2005—about 35 million people a year. There’s no reason to expect that another large-scale biometric system will be any more cost-effective than INSPASS.

5. Finally, given all of the above, are the costs too great? I think they are. I think biometrics are a bad idea, and that they won’t do much for airport security or any other large-scale security operation.
But I also think their implementation in a variety of contexts, including airport security—given the biometrics industry’s fervent hyperbole about biometrics as THE SOLUTION—is all but inevitable.

Implement Fair Information Practices along with biometric security

With that in mind, I propose the palliative of regulated implementation, through fair information practices, for the collection and use of biometric data.

1. We should have the right to control the creation and use of our biometrics. They’re far more personal than Social Security numbers or even financial data because they’re digitized pieces of our physical identity. If our fingerprints are stolen, we can’t change the ones we have.

2. Biometrics should be used only for their intended purpose, and we should know exactly what that is.

3. Access should be controlled, limited, and granular—meaning that anyone with access should see only what they’re required to know.

4. Data should be stored no longer than is necessary for its use. Unfortunately, in the case of airport security, this would be until death—which would add another element of cost and computing overhead: checking current data against who’s dead.

5. Biometric data should be segregated from other personal identifying information to minimize the risks of broad access.

Call for debate on the necessity of privacy as well as security in a democratic society

But what I really want, now, is a debate on institutionalizing surveillance that doesn’t dismiss privacy in the name of national security or fighting terrorism. We have problems, but we ignore at our peril the fact that the continuity of a free society is one of them. I hope that you’ll all take part in calling for this debate.

More information is available at .

Last updated February 20, 2003