
New Approaches to Airport Security: Program Transcript

Call-In Program
Thursday, September 28, 2006
National Science Foundation
Arlington, Virginia

On Sept. 28, 2006, experts in sensor technology, behavioral screening and biometrics participated in a call-in program on the latest developments in airport security. This is the transcript.

Mr. McCourt: This is Richard McCourt at the National Science Foundation in Arlington.
I am a program officer here at NSF, and I will be hosting today's discussion on new approaches to airport security.

We have three scientists on the line today to talk about their research and answer questions from the journalists who have called in and are listening right now. Each of the scientists is funded by NSF for basic research in engineering, computer science, or both. And the hope is that, someday, that research may find its way into our airport security system.

Airport screening is something that affects huge numbers of people every day, of course. And the technology is also used in other places, such as border crossings. The common theme of these scientists is that they deal with new technologies that could make screening more effective and, one would hope, less cumbersome than it is now.

And these researchers are online, and I will introduce them now. First, we have Dr. Rick Donovan from the Department of General Engineering at Montana Tech University. Dr. Donovan, are you there?

Dr. Donovan: I certainly am.

Mr. McCourt: And next is Dr. Mark Frank, an associate professor in communication at the University at Buffalo. Dr. Frank?

Dr. Frank: Yes, I am here. Thank you.

Mr. McCourt: Okay. And, finally, Dr. Arun Ross, an assistant professor of computer science and electrical engineering at West Virginia University. Welcome, Dr. Ross.

Dr. Ross: Good afternoon, everyone.

Mr. McCourt: Okay, great. Well, first I thought we would briefly have each scientist give a short summary of his research and the potential application in airport screening. And we will follow that by questions from the callers and listeners. And that will be facilitated by the NCI operator, who has already given you some instructions, we presume, about how to do that. So keep your questions handy, because we'll just go into that as soon as the introductions are finished. And I am told this works really smoothly, and I am sure it will.

So, let's start with Dr. Donovan. Rick, first, can you give us a snapshot of what airport screening and its technology look like now, and what your research is aimed at changing?

Dr. Donovan: Yeah. I would say, if I were to characterize our current approach to screening -- and, certainly, at most airports, there may be some pockets of experiments going on -- but at most airports, what we are really doing is screening for capabilities. That is, you know, we go through, we make sure that nobody has any knives or guns or tweezers or hair gel or whatever our current concern is with regard to capabilities. And that is typically a very reactive approach on top of it.

But what we are typically doing is screening for capability. We want to make sure nobody getting onto an airplane, or going into the secure areas of airports, for that matter, has the capability, by somebody's definition, of doing harm. And the stuff that we're doing at Montana Tech -- and we have business partners and other researchers that we collaborate with, both in the state and at national companies -- is developing a new approach that really is screening for intent. That is, the idea is to identify, for lack of a better term, aberrant behavior among people, you know, approaching secure areas or moving around in airports. We're looking for the anomalous activity that is going on at an airport.

Our primary focus, the research that we're doing, is on the technological enablers, if you will. That is a lot of syllables for saying that, basically, we're focusing on hardware and software. The policy decisions -- those are for people above our pay grade, who decide how to use the technology. But we're working on developing autonomous sensor networks that use artificial intelligence algorithms to classify the activities of people as they approach airports and as they move about the airport.

Mr. McCourt: Can you give an example of something that would be anomalous behavior or activity you would be looking at for people approaching an airport?

Dr. Donovan: Well, you know, the easiest thing to do is just to think about automobiles, right? So, say a Toyota Corolla is approaching the airport. And through various sensor techniques, I find out that this Toyota Corolla is coming in at 6,000 pounds, and it is driving erratically -- you know, it is clearly going at a high rate of speed, and it is coming in overweight relative to what you would expect for a Toyota Corolla. And so, in that case, that vehicle would be flagged as, say, an alert. And security personnel would be able to come out and take a closer look at that vehicle.
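The kind of rule-based check Dr. Donovan describes -- compare sensed readings against an expected vehicle profile and flag anomalies -- can be sketched roughly as below. This is only an illustration; the model list, weights, and thresholds are invented and are not part of the Montana Tech system.

```python
# Hypothetical sketch of a rule-based vehicle anomaly check.
# Expected curb weights (pounds) are illustrative values, not real data.
EXPECTED_CURB_WEIGHT_LB = {"toyota_corolla": 2800, "ford_f150": 4700}

def vehicle_alert(model, sensed_weight_lb, speed_mph,
                  speed_limit_mph=25, weight_slack=1.5):
    """Return a list of reasons this vehicle should be flagged, if any."""
    reasons = []
    expected = EXPECTED_CURB_WEIGHT_LB.get(model)
    # Flag if the sensed weight far exceeds the expected profile.
    if expected is not None and sensed_weight_lb > expected * weight_slack:
        reasons.append("overweight for model")
    # Flag erratic, high-speed approach.
    if speed_mph > speed_limit_mph:
        reasons.append("excessive speed")
    return reasons

# The 6,000-pound Corolla from the example trips both rules.
print(vehicle_alert("toyota_corolla", 6000, 45))
```

A real system would of course learn these profiles from data rather than hard-code them; the point is just that an alert is raised only when observations deviate from expectations.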

Mr. McCourt: Okay. All right. Well, let's go on to Dr. Frank, who works on behavioral screening, and have him give a brief synopsis of what he does.

Dr. Frank: Right. What we have been looking at over the years, from the behavioral point of view, is how people's internal states manifest externally -- when people are thinking and when they are feeling, and so on and so forth -- in particular, looking at emotional expression over the years. And emotions turn out to be important for a number of reasons, of course. But most of the basic emotions that we have are universal. That is, no matter what culture you are from, you have anger, contempt, disgust, fear, happiness, sadness, surprise. And you express them similarly. Culture does matter, on both the input and the output side here.
But the actual physiological response seems to be universal. And these emotions are often about behavioral intentions -- what you are going to do -- so anger and attack, and fear in terms of flight and so on.

The other area of work that we have been dealing with, within this sort of emotion research, has also been looking at deception -- what happens when people lie. How do they look? What sorts of behaviors? And clues to people thinking and clues to people feeling turn out to be important in that. And so, for the NSF project -- and in a previous NSF grant -- we have collected various data sets where we had people lying and telling the truth, identifying some of the behaviors that may be involved in that, and developing automated processes for identifying those behaviors using various computer vision algorithms.

And this work is done with my colleagues, Dr. Javier Movellan and Dr. Marni Bartlett at UCSD, who are co-PIs on this grant. And they are more the computer science part of the house, and we are the behavioral part. And we think, by linking the behavioral scientists with the computer scientists, we can get them working on systems that actually match what people really do.

And so, we're developing these algorithms that will enable computer vision programs that, in real time, identify when people are having these emotions. And we know how certain parts of these emotions look in deceptive situations, so we can see the extent to which we can identify these hot spots. Now, there is no unique Pinocchio response that we've identified, or that anybody has identified. So, you know, it would be a stretch to call these lie catchers, because they are not really catching lies. They are catching emotions. But when they are -- to use Dr. Donovan's term here -- anomalous, they might be useful in this particular context. And I have had experience in airports, working with various personnel whose job it is to do this, and walking around with them. And I think that this is probably a fairly productive way to go on this.

So that is kind of where our efforts are located: looking at the level of the individual.
Because, at the end of the day, suitcases don't attack people. It is a person who put the bomb in the suitcase, or whatever -- and we look at them from a behavioral point of view, to see if that can facilitate the screening process. You just have to set up a system that can identify who goes to secondary screening. You can only do so many people an hour in a secondary screening. And how do you ensure that, if somebody is up to some sort of malfeasant act, they make their way into that secondary screening?

And we believe behavioral clues can be an important layer -- not the end-all and be-all by any stretch, but just another layer of security that will hopefully facilitate the adequacy and the accuracy of that decision-making, because, as you know, it is the one false negative that will kill you in this process.

Mr. McCourt: Right. Okay. And, Dr. Arun Ross, you work on biometrics, which may not be a term that everyone knows. I didn't before I talked to you. So why don't you tell us what that is and how it fits into the airport screening process.

Dr. Ross: Yes, sure, Richard. So, biometrics is the science of establishing human identity based on the physical or behavioral traits of an individual. So, for example, when I say physical traits, I am referring to fingerprints and face and iris traits of an individual, as opposed to behavioral, which would be gait -- the way a person walks -- or the way a person signs, the way a person types on a laptop keyboard. So those would constitute behavioral traits.

And what we are primarily looking at is, how do you combine several of these traits together in order to generate a decision -- an automated decision -- about the identity of an individual? So, for example, let's assume that a person is approaching a kiosk, which is biometrically enabled, and that would mean that the kiosk can record the biometric traits of a person. So it would have cameras installed, contact sensors installed. And so, when the person approaches this particular kiosk, the camera could be activated. And then it could measure the height of the individual, which is not a unique trait, but, nevertheless, is a trait that can be used in conjunction with the other primary traits that I alluded to earlier on, in order to facilitate the final decision-making process. And so, once the person approaches the kiosk, then the camera has already captured the face image of the individual, and if there are some high-resolution cameras with appropriate lighting, then the iris information, embedded in the eye, is also acquired.

And so now the question is, how do you combine all of these pieces of evidence to generate a decision? And this is where the field of multibiometrics, or multimodal biometrics, comes into play. And we call this biometric fusion. And the goal of biometric fusion is to reconcile these various traits that are being presented to the machine at different points in time -- not necessarily simultaneously, but, in some cases, incrementally. And so you make an incremental decision as to what the identity of the individual is.
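One common form of the biometric fusion Dr. Ross describes is score-level fusion: each trait's matcher produces a match score, and the scores acquired so far are combined into one running decision score. The sketch below is illustrative only -- the weights and scores are invented, and real systems normalize scores and tune weights empirically.

```python
def fuse_scores(scores, weights):
    """Score-level fusion: weighted average of whatever match scores
    (0.0-1.0) have been acquired so far. Traits may arrive incrementally;
    traits not yet measured are simply absent from `scores`."""
    num = sum(weights[t] * s for t, s in scores.items())
    den = sum(weights[t] for t in scores)
    return num / den

# Illustrative trait weights (invented, not from any deployed system).
WEIGHTS = {"fingerprint": 0.5, "iris": 0.35, "face": 0.1, "height": 0.05}

# Early view: camera has seen face and height as the person approaches.
partial = fuse_scores({"face": 0.8, "height": 0.6}, WEIGHTS)

# At the kiosk: fingerprint and iris have now been acquired as well.
full = fuse_scores({"face": 0.8, "height": 0.6,
                    "fingerprint": 0.95, "iris": 0.9}, WEIGHTS)
```

The incremental decision Dr. Ross mentions falls out naturally: `partial` is a weaker, early estimate, and `full` a stronger one once the primary traits arrive.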

Now, there are several other issues related to this identity-generation process. One, a person could try to circumvent the system. An example would be a person who has a fake finger with a fingerprint inscribed on it. So, how do you detect if the object that is interacting with the sensor is a live finger or if it is a physical artifact? And so, this is where liveness detection schemes come into play.

And as part of the NSF ITR grant, we are examining ways to determine if the object is live or if it is a generated artifact... or looking at the iris, how it dilates and to understand if that is a real iris or if a contact lens has been imprinted with some false information in order to be perceived to be someone else's iris. So, the liveness detection is another important aspect that some of my co-researchers are working on as part of this particular grant.

And then, finally, the whole process itself... so, let's assume that this particular setup that I just described is installed in an airport. There are several issues that one needs to consider besides just the false accept rate or the false reject rate. The false accept rate is when an imposter is accepted into the system as being a legitimate user, when he is not. And a false reject is when a person with the genuine credentials is turned down by the system because of an error in the system.

And so, how do you balance these errors on a particular day when there are five flights landing, and you have a rush of people at the border entry or the border control system? And so this invokes queuing theory, where one needs to balance these errors with practical considerations. And one practical consideration would be the throughput that is required on a particular day, or the terror alert level that is indicated on a particular day. So, bringing all these things into a single framework is an essential part of this NSF ITR grant.
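The false accept and false reject rates Dr. Ross defines can be computed directly from matcher scores once a decision threshold is fixed; moving the threshold trades one error for the other. This toy sketch uses made-up scores purely to show the trade-off that throughput and alert-level considerations would tune.

```python
def far_frr(genuine, impostor, threshold):
    """False accept rate: fraction of impostor scores at/above threshold.
       False reject rate: fraction of genuine scores below threshold."""
    far = sum(s >= threshold for s in impostor) / len(impostor)
    frr = sum(s < threshold for s in genuine) / len(genuine)
    return far, frr

# Invented match scores for legitimate users and impostors.
genuine = [0.9, 0.85, 0.7, 0.95, 0.6]
impostor = [0.2, 0.4, 0.55, 0.3, 0.65]

# A higher alert level might push the threshold up, accepting more
# false rejects (longer queues) in exchange for fewer false accepts.
for t in (0.5, 0.7):
    print(t, far_frr(genuine, impostor, t))
```

At the lower threshold nobody legitimate is rejected but some impostors slip through; at the higher threshold the reverse holds, which is exactly the queuing-versus-security balance described above.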

Mr. McCourt: Okay. Thank you very much. Susan, we're ready to take callers' questions, if they have any right now.

Susan: Thank you. We will now begin the question and answer session. If you would like to ask a question, please press star, then one. You will be prompted to record your name. To withdraw your request, press star, then two. Once again, if you do have a question, please press star, then 1 now. One moment, please, while we wait for the first question.

Mr. McCourt: Susan, may I ask a question while we're waiting?

Susan: Certainly.

Mr. McCourt: I am interested to know -- something that Dr. Ross talked about at the end there, with false rejects and false accepts -- what is the human-machine interface like here? How much do machines control this, and how much is human intervention controlling things? Do you just get picked out by a robot and set aside in a room or something? I mean, it sounds kind of onerous in some ways. But what is the -- well, maybe, Rick, you could talk first -- what are the human interactions here? What are the uses for humans in your system?

Dr. Donovan: For our part, the HMI is really pretty basic. What we want is we want to have a system that your typical airport security guard wants to use. If they don't use it, it is not much... well, it is of no use, quite literally.

And so, from an HMI standpoint, our goal with our system is just to process alarms. And so the only time a typical security guard would actually interact with the sensor network and the associated artificial intelligence classification would be in those instances when there was an actual alarm. And they would be notified via PDA that, here is the place you need to go, here is the thing you need to look for, and here are the procedures that we have determined that you are to use when you go and intercept this vehicle or this person, or just go to observe some event at an airport. And so our goal is really to minimize the amount of human/machine interaction, just in order to make the system more user-friendly.

The big problem that we're really looking at is that we're focusing on smaller regional airports. You know, I live in Butte, Montana. And we have 10 flights a day come in. And so there is a whole lot of time during the day where the security guards are just not doing much at all. Usually, they are shoveling the walks, because they also double as service people. And so, you know, having somebody like that sit around, watching video cameras all day or listening to acoustic sensors or things of that nature, is pretty useless at the end of the day, because they get bored. The machines, however, never get bored. They are always listening, the video cameras are always looking, and the classification algorithms are always classifying. And so we're just looking to minimize the HMI as much as possible.

Mr. McCourt: Right.

Dr. Ross: This is Arun. In the context of biometrics, there is usually a mixture of what we call automated and semiautomated systems. Because, when a traveler is interacting with the officer at border control, there is some level of cooperation on the part of the traveler. And so semiautomated decision-making is possible, in which the machine could generate an assessment of the identity, which could then be complemented by the officer who is monitoring that particular border control entry point.

Dr. Frank: It is the amount, too. This is Mark Frank in Buffalo. But, you know, what is that optimal amount of machine and human in the decision loop? Because we can teach humans to do these things that the machines can do, but you run into some of the issues that Rick raised about that. And so how do you best do that? Does it just simply flag it for them? Is that good enough? Do they need to take a more active role?

There is a whole human factors problem that has got to be worked out with this, too, to figure out what the optimal solution is in terms of the amount of technology versus the amount of human in the decision loop.

Mr. McCourt: What strikes me is that you really are looking for needles in haystacks, and so it is a difficult thing.

Dr. Frank: Think of the base rate in terms of terrorism. In 2001, there were 29 million people who went through Newark Airport. We know of six who were terrorists.
So, even at something like 5,000 people an hour, that is six people out of 29 million for the whole year. So the base rate is really quite low.
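Dr. Frank's base-rate point has a stark arithmetic consequence, worth working through. Even a screen far better than anything deployed -- assume, hypothetically, 99% sensitivity with only a 0.1% false alarm rate -- would flag almost exclusively innocent travelers at these base rates:

```python
# Back-of-the-envelope Bayes calculation for the Newark 2001 numbers.
passengers = 29_000_000
attackers = 6
hit_rate = 0.99            # assumed detector sensitivity (hypothetical)
false_alarm_rate = 0.001   # assumed per-passenger false positive rate

flagged_bad = attackers * hit_rate
flagged_innocent = (passengers - attackers) * false_alarm_rate
precision = flagged_bad / (flagged_bad + flagged_innocent)
print(f"{flagged_innocent:,.0f} innocent flags; precision = {precision:.4%}")
```

Roughly 29,000 innocent travelers get flagged for every handful of true positives, which is why the panelists treat automated screening as a way to route people to secondary screening, not as a final judgment.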

Mr. McCourt: Right. And, Susan, if anybody calls with a question, feel free to interrupt me at least, and maybe let the other participants... but I will be happy to be quiet and let other people ask questions.

Susan: Well, we do have a question from Tom Frank of "USA TODAY." Your line is open.

Mr. Frank: The question is for Mr. Donovan. Where is this kind of technology already in use?

Dr. Donovan: Well, we have experimental systems currently installed. We have one system installed with the Helena Regional Airport Authority in Helena, Montana. That system is primarily focused on the air operations side of airport operations. And the other system is installed here in Butte at the Bert Mooney Airport, and it is primarily focused on everything outside the air operations area.

Mr. Frank: These are tests?

Dr. Donovan: Yeah. These are test beds. They are experimental systems. Although, at Helena, some of the stuff is being implemented, I think, into their current security plan. I am not sure what the scheduled deployment of that is into their official security plan.

Mr. Frank: I asked because I have heard of these kinds of systems installed elsewhere.
Do you know of other places in the country where these kinds of machines, this kind of experiment, are installed?

Dr. Donovan: Well, they are doing some of this stuff... you have to be a little careful -- well, not careful -- but people talk about them being installed places. To my knowledge, for the most part, the sensor networks that we're talking about here -- autonomous sensor networks that are just working all the time, and only produce information to security personnel when there is an actual alert or an alarm -- I don't know of any that are actually functional at airports right now.

Mr. Frank: How about, other than airports?

Dr. Donovan: What is that?

Mr. Frank: Other than airports?

Dr. Donovan: Other than airports? I don't know. You know, private corporations are pretty private about that stuff. I do know that, at some military installations, they are doing some of this. Future Combat Systems, with the Department of Defense -- they have deployed some autonomous sensor networks in some situations there.

Mr. McCourt: Okay. Great. Thanks. Susan, is anyone else waiting?

Susan: Cal Biesecker, "Defense Daily," you may ask your question.

Mr. Biesecker: Can you guys hear me?

Dr. Donovan: Yes.

Mr. Biesecker: Okay, good. Hey, I'll start with Dr. Donovan. Just on that... you mention that you have got an experimental system in Helena and, I guess, at least also in Butte. Where do you stand, as far as false alarm rates for this kind of thing? Because that seems to me like it would be a very tough nut to crack -- knowing what is anomalous behavior -- because I would assume a lot of this depends on just how good the video analytics and software are today. And a lot in the industry would say it is evolving, it is getting better. But it sounds to me like your type of system would still be years away.

Dr. Donovan: Well, you know, with the type of stuff that we're doing, particularly here at the Butte Airport, in a month, maybe two months, we're going to be going through the official experimental test and evaluation, where we are calculating false alarm rates and doing the hard-core science on how the system is performing. But with the type of stuff that we're looking at, our systems are designed to get better and better the more they are operated.

We use a very unique form of what are called artificial neural networks. And our artificial neural networks are not only classifying continuously -- data comes into the system, and it is essentially automatically classified. It may not be classified according to some policy that is established at the airport with regard to alarms, but the system is always looking at the data and categorizing it -- deciding, say, that two images are similar or dissimilar. And so it is constantly doing classification. And, subsequently, it is constantly learning what its environment is. And so the more these types of systems operate, the better they get at understanding what normal behavior is in their environment.

The first and probably most important step in accurate classification, if you want to do it quickly, is to decide what it is that you want to ignore. And you and I do this all the time.
You know, we walk into a crowd... and just imagine you are in a crowded room, and you are looking for your spouse, you are looking for your wife. And the first thing you start doing is ignoring all the men. And you just don't pay any attention to them hardly at all.
And that is what you want to do in order to reduce false alarms. And so, what are our actual false alarm rates? I don't have a number to give you, because we haven't gone through the hard-core evaluation on that.
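Dr. Donovan's "decide what to ignore" idea is, in effect, online novelty detection. The sketch below is a generic running-statistics filter, not the STANNO neural-network approach his group actually uses; it just illustrates a system that learns its environment's normal range and passes along only what stands out.

```python
class NoveltyGate:
    """Toy 'decide what to ignore' filter: maintain running mean and
    variance of a scalar sensor feature (Welford's algorithm) and report
    only observations more than `k` standard deviations from normal."""
    def __init__(self, k=3.0):
        self.k, self.n, self.mean, self.m2 = k, 0, 0.0, 0.0

    def observe(self, x):
        # Update running statistics with the new reading.
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)
        if self.n < 10:            # not enough history yet: stay quiet
            return False
        std = (self.m2 / (self.n - 1)) ** 0.5
        return std > 0 and abs(x - self.mean) > self.k * std

gate = NoveltyGate()
# Eleven ordinary readings teach the gate what "normal" is; the twelfth
# is wildly out of range and is the only one reported as an alert.
readings = [10.1, 9.9, 10.0, 10.2, 9.8, 10.1, 10.0,
            9.9, 10.1, 10.0, 10.1, 30.0]
alerts = [x for x in readings if gate.observe(x)]
```

As with the airport systems described above, the longer the gate runs on normal traffic, the tighter its model of "normal" becomes and the fewer false alarms it raises.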

Mr. Biesecker: Okay.

Dr. Donovan: But I do know, based on laboratory experiments, that these types of systems produce very small false alarm rates. But -- Dr. Frank, I think, brought this up -- hey, 29 million people went through the airport in 2001. Six of them were bad guys. And so, what is an acceptable false alarm rate? In that case, a not-acceptable rate would be something worse than six out of 29 million.

Mr. Biesecker: Okay. Well, on the... I wanted to ask Dr. Ross a question. But just real quickly, is there any company or companies in this, you know, either the analytics field or... that you are working with that you can name?

Dr. Donovan: Yeah. One company would be Imagination Engines out of St. Louis, Missouri. We use their approach to artificial neural networks, called a Self-Training Autonomous Neural Network Object -- a STANNO.

Mr. Biesecker: Okay.

Dr. Donovan: And Dr. Thaler is the founder of that company, and he has got a pretty unique approach to all this stuff. It very closely mimics the types of things that you and I do every day to decide whether that box that we're reaching for is Cheerios or Special K.

Mr. Biesecker: Okay.

Dr. Donovan: And we also work with a company out of Arlington, Virginia, BBN Technologies.

Mr. Biesecker: Okay.

Dr. Donovan: Which is... they are actually a pretty sizable defense contractor.
They do a lot of work for DARPA and stuff.

Mr. Biesecker: If I may, this is for Dr. Ross. Could you... what is the current state, if you will -- how good are current multimodal biometric systems?
For example, L-1 Identity Solutions formed with a merger of Viisage and Identix.
They have a device that does face, iris and finger that they are selling to the Marine Corps, or at least the military. It is being used in Iraq and what have you for both capture and verification purposes. I really don't know how effective it is, other than it seems to be of great interest to the military. And so, I am just sort of curious, again, your opinion on this state of current technology.

Dr. Ross: Well, the performance of multibiometric systems, as assessed by the false accept and false reject rates, is expected to be rather good. And the reason I say that very generically is because, depending upon the database on which the experiments are conducted, the results could vary. However, I would like to point out that multibiometric systems go beyond just improving the performance.

Some of the advantages are... one would be addressing non-universal coverage. For example, some individuals just cannot enroll into a fingerprint system, because the quality of the ridges is such that the sensor may not be able to understand or acquire the ridge information. And so, if it is a single-biometric system, then, potentially, this user will be excluded from utilizing the biometric capability of the system. And what multibiometrics does is accommodate a larger segment of the population by having them provide other biometric traits.

Another advantage of multibiometrics is... say a person enters this country and we just use fingerprints, and, due to an unfortunate turn of events, the person loses his hand. Then you have, in some sense, lost the ability to measure the biometric traits of this individual. And this is where having alternate traits, like fingerprints and iris and face, et cetera, helps in rendering that final decision. Also, there is a fault tolerance capability. If the fingerprint sensor is not operating due to some sensor malfunction, then you still have the other modalities, the other acquisition systems, still operating and providing the system the opportunity to process that data and render a final decision.

Talking about numbers: well, even the unimodal systems sometimes perform really well.
For example, iris and fingerprints -- fingerprints are required as part of the US-VISIT program. Many of the state-of-the-art fingerprint matchers perform really well. But then, as was pointed out earlier by Mark Frank, even if we have a very, very low error rate, it could be that we just missed that one person we are looking for.
So, you know, as technology evolves, we really want to push those numbers closer to zero.

Mr. Biesecker: Okay.

Dr. Donovan: This is Dr. Donovan in Butte. I would add that, you know, there are a number of studies out there -- the experiments have been done, and the data has been taken -- showing that multisensor input improves classification in general quite a bit. And so, now let's talk about biometrics, and let's bring that in with Dr. Frank's business about being able to detect emotional states in people, and you bring all that stuff into some sort of classification scheme that is looking at how these people approached the airport and what they have been doing in the airport.

All of these things add up to very low false positives in an overall classification. And so, individually, these systems may perform reasonably well, as far as producing false positives or false negatives. But once you start adding in multisensor inputs, classification rates become very accurate. And we're seeing that in all kinds of things, particularly in chemical detection -- there are ways of detecting certain types of chemicals in parts per trillion by using multisensor inputs.
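The arithmetic behind Dr. Donovan's claim is simple to illustrate if one makes the (admittedly idealized) assumption that the detectors err independently and that a combined alert requires all of them to fire: the individual false positive rates multiply. The rates below are invented for illustration.

```python
# Illustrative per-sensor false positive rates (invented values),
# assuming independent detectors combined by requiring all to fire.
sensor_fp = {"video": 0.05, "acoustic": 0.08, "biometric": 0.02}

combined_fp = 1.0
for rate in sensor_fp.values():
    combined_fp *= rate   # independence => probabilities multiply

print(f"combined false-positive rate: {combined_fp:.6f}")
```

Three mediocre detectors (5%, 8%, and 2% false alarms) combine to a rate of 0.008%; real sensors are never fully independent, so the gain is smaller in practice, but the direction of the effect is what the panelists are describing.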

Mr. McCourt: This is Richard McCourt again. Dr. Frank and Dr. Ross, are any of your systems in use in airports now?

Dr. Frank: This is Mark Frank in Buffalo. No. At this point, none of the automated systems are being used. We have heard that some of the behavioral stuff that we have done in the past is being used for some of the behavioral observation techniques that are currently being employed in some places in the airports right now. But that is just human/human. There is no automation there.

Dr. Ross: This is Arun from WVU. Richard, the technology that we have developed right now has not been directly applied to airport security. But as part of the US-VISIT program, we have talked to some people at DHS who have referred to the results of our technology.

Dr. Donovan: There are biometric systems being installed at airports, though. In fact, at the Helena Airport, they use... the personnel have something called a DIVA card, which is basically a biometrically enabled I.D. card, RFID card, to identify people in the airport that are supposed to be at the airport, supposed to be in secure areas and stuff like that. But it is not an open system, per se, where you are looking at biometrics of everybody coming into the airport. It's just the personnel.

The goal there, again, is to identify people that you can start ignoring right away. All of these people that are out on the tarmac, all the baggage handlers and stuff, are supposed to be there. And what you are looking for is something anomalous.

Mr. McCourt: Right. Susan, is anyone else waiting?

Susan: No one is waiting at this time. If you do have a question, please press star, then 1.

Mr. McCourt: Okay. Well, I have a question related to this. Dr. Frank, with behavioral screening, it seems to me that 90 percent or more of the people in an airport are pretty stressed out to begin with. How do you get your baselines for that kind of thing?

Dr. Frank: You know, that is obviously part of the difficulty in this. And that is the part of the difficulty when you are dealing with the emotions in general, is emotions do not tell you their sources. You know, is the fear you see the fear of the guy afraid of missing his plane or the fear of a guy getting caught bringing drugs in his carry-on luggage or whatever? And so that is what makes it difficult.

But in a process not too dissimilar, theoretically, to what Rick Donovan was talking about with anomalous behavior: there are certain baseline levels of stress that are going to occur for other sorts of reasons. And then there are some that stand out and are manifested in different ways and through different sorts of communication channels. But I think, the way the system is set up now, that would simply prompt somebody to talk to that person. And, usually fairly quickly, you figure out -- oh, you can look at their ticket and realize, yes, they have got 10 minutes to make it to the gate, and they still have to go through security -- the story matches up, or other sorts of business. So, yeah, it is a difficult task.

But there is a sense, based at this point on some anecdotal evidence and some research literature -- based, for example, on stuff they have pulled from the Israelis and some of their systems -- that some of these things could make these judgments a little better than chance. And, in some cases, they are even a lot better than chance. But, again, when you are dealing with anecdotal data, it is really hard to get a good, solid number that you can scientifically stand behind.
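
The baseline idea Dr. Frank describes can be sketched numerically: treat observed stress scores as samples, estimate the ambient mean and spread, and flag only the travelers whose scores stand far outside that baseline for a follow-up conversation. Everything here -- the scores, the threshold of 2.5 standard deviations -- is a hypothetical illustration, not the actual screening system.

```python
from statistics import mean, stdev

def flag_outliers(stress_scores, k=2.5):
    """Flag indices whose score lies more than k standard deviations
    above the ambient baseline -- everyone in an airport is somewhat
    stressed, so only extreme departures prompt a follow-up chat."""
    baseline = mean(stress_scores)
    spread = stdev(stress_scores)
    return [i for i, s in enumerate(stress_scores)
            if s > baseline + k * spread]

# Hypothetical scores: most travelers cluster around ordinary
# travel stress; one score stands far outside the baseline.
scores = [4.1, 5.0, 4.7, 5.3, 4.9, 5.1, 4.6, 9.8, 5.2, 4.8]
flagged = flag_outliers(scores)
```

Note that the flag only prompts a conversation, not a classification, matching the human-in-the-loop process described above.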

Dr. Donovan: This is Dr. Donovan in Butte again. And just to add to that, one of the arguments -- and I kind of get a little crap from people every now and then -- is, okay, well, who really cares about the Butte Airport? And there is a good argument there.
It is a small airport and not exactly critical to the overall transportation network.

But most airports in the United States are the size of the Butte Airport, or somewhere approximately like that. About 75 percent of all the airports are small airports like what we have here in Montana. And so, you know, talking about systems that continuously learn, one interesting thing to be done at a place like the Butte Airport would be to install a fairly dense network of video cameras and acoustic sensors, and maybe even infrared sensors, and to train these systems to look for the cues that Dr. Frank is talking about in an environment like the Butte Airport. It is an airport, just like anywhere else. People are coming, and they are freaked out about their airplanes, just like they are at LAX. The difference is that there are not as many people, and that we can deploy these systems rather quickly and start learning a lot about what normal behavior looks like at an airport.

Dr. Frank: This is Mark Frank in Buffalo here. You know, there are a couple of interesting things going on here. I agree. I don't think we should dump on the small airports.
The 9-11 hijackers, some of them started in Bangor, Maine. There are other people, too.
If we limit the conversation to just terrorism, that may be a mistake, because there are lots of other sorts of nefarious activity that occur. And people do make these decisions about where to go, where to start -- if I go to a smaller airport, the odds of smuggling my, you know, $40,000 worth of drug money or bags of cocaine may be greater -- the chances of getting through may be greater.

So they do make these decisions throughout. So I agree. I think, just because it is a small airport, doesn't mean it is not worthwhile. And, in fact, it makes it easier to train the systems and maybe easier down the line, if you want to do a thought experiment, someday, if you want to stop everybody, just to see what your base rate is, to see what you are dealing with. There is a lot to be gained by that.

Dr. Donovan: This is Dr. Donovan again. And even getting into the biometrics and multisensor stuff: you know, one of the fascinating things for me about artificial intelligence approaches like neural networks and fuzzy logic classifiers and support vector machines and all this kind of stuff that you hear about is their inherent scalability.
That is, if I train a system to operate with, I don't know, the 10 flights a day of 30 people -- call it 500 people that go through the Butte Airport on any given day -- that system is inherently scalable to 5,000 or 50,000 people. It is just a matter of providing denser sensor input into it.

And that is what I find to be very interesting from a purely scientific standpoint, is the ability to do calculations at multiple scales and to kind of think of higher order classification, as we go into larger and larger systems.
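The "higher order classification at multiple scales" Dr. Donovan mentions can be read as a hierarchy: a per-person classifier produces anomaly scores, and a site-level rule aggregates them, so the same per-person model applies whether 500 or 50,000 people pass through. The scoring function, feature names, and thresholds below are hypothetical stand-ins for a trained classifier, purely to illustrate the structure.

```python
def person_score(features):
    """Stand-in for a trained per-person classifier: maps one
    traveler's sensor features to an anomaly score in [0, 1].
    (A real system would use a trained neural network here.)"""
    gait_irregularity, stress, badge_mismatch = features
    return min(1.0, 0.3 * gait_irregularity + 0.3 * stress + 0.4 * badge_mismatch)

def site_alert(all_features, person_threshold=0.7, fraction_threshold=0.001):
    """Site-level (higher-order) classification: the same per-person
    model is applied to every traveler -- volume only changes how much
    sensor input flows in, not the model itself."""
    scores = [person_score(f) for f in all_features]
    anomalous = sum(s > person_threshold for s in scores)
    return anomalous / len(scores) > fraction_threshold

# Works identically on a Butte-sized day or an LAX-sized day; only
# the length of the input list changes. One hypothetical outlier:
small_day = [(0.1, 0.4, 0.0)] * 499 + [(0.9, 0.9, 1.0)]
alert = site_alert(small_day)
```

The design point is that scaling up means feeding the same trained model a denser stream, while the aggregation layer is where the higher-order, site-wide judgment happens.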

Dr. Ross: This is Arun Ross. And just to add to what Dr. Donovan was saying with regard to biometrics, the other important aspect is which biometric traits should be used, depending upon the situation. As I pointed out earlier, throughput is an important consideration, especially if it is a busy day at the border. And, at that point in time, there is a network of sensors.

And the other level of decision that has to be made is: which of these sensors should be used to acquire the biometric traits? And that is an interesting optimization problem, because not only does it look at the error rates that are acceptable, but also at other ambient factors, like the length of the line or, as I said earlier, the terror alert level, et cetera. And all of these are to be factored in when deciding dynamically which sensors are to be used.
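Dr. Ross's sensor-selection problem can be framed as a small combinatorial optimization: choose the subset of available biometric sensors that minimizes expected error, subject to a per-person time budget that shrinks as the line grows and relaxes at high alert levels. The sensor names, error rates, timings, and budget policy here are invented for illustration only.

```python
from itertools import combinations

# Hypothetical sensors: (name, false-match error rate, seconds per person)
SENSORS = [("fingerprint", 0.02, 8.0),
           ("face",        0.05, 2.0),
           ("iris",        0.01, 12.0)]

def pick_sensors(line_length, high_alert):
    """Brute-force over sensor subsets: assuming independent errors,
    rates multiply, so more sensors means fewer false matches but a
    slower line. The time budget is an invented policy knob."""
    budget = 30.0 if line_length < 50 else 12.0
    if high_alert:
        budget *= 2.0  # tolerate slower lines when the alert level is high
    best, best_err = None, float("inf")
    for r in range(1, len(SENSORS) + 1):
        for subset in combinations(SENSORS, r):
            time = sum(s[2] for s in subset)
            err = 1.0
            for s in subset:
                err *= s[1]  # independence assumption
            if time <= budget and err < best_err:
                best, best_err = [s[0] for s in subset], err
    return best
```

On a quiet day the budget admits all three sensors; on a busy day the optimizer drops to the fastest accurate pair, which is the dynamic trade-off described above.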

Susan: Excuse me. I am sorry to interrupt. We do have some questions. Would you like to take those now?

Mr. McCourt: Sure.

Susan: Our first question comes from Audrey Dutton, "Newsday." Your line is open.

Ms. Dutton: My question is for Dr. Frank. I am curious if you know how close your system is to reaching the point where it can be used in airports?

Dr. Frank: That is a good question. I am optimistic. If you had asked me this a week ago, I would have said maybe three, four years. Since you are asking me now, I could say maybe two years, maybe a year and a half. It depends on a few things. But there has been progress in getting these computer vision algorithms to... you know, people can now do it in real time. And the computer's accuracy at picking these things up is now the equivalent of two independent humans picking them up. There will always be error between two independent humans. But using real-time judgments, the computer is as good as a person. And that is pretty impressive.

Mr. McCourt: Okay.

Susan: Our next question comes from Taylor Burton, Montana Tech Newspaper. Your line is open.

Ms. Burton: Hi. My question is: if somebody is targeted as a potential terrorist, what are some of the legal consequences associated with that? What happens to the person?

Mr. McCourt: Rick? Since you're from Montana, why don't you take that?

Dr. Donovan: Yeah. This is Rick Donovan. It depends what you mean by targeted as a potential terrorist. If, for example, I have a multisensor system, and I know something about a person's biometrics in terms of how tall they are, how much they weigh, what their gait looks like -- you know, maybe I have some photos that have iris scans, maybe I know something about their fingerprint signatures -- the consequences of that are kind of up to the FBI and other people.

If you are talking about, you know, sort of general classifications, I would say, from our standpoint -- I can't speak for the other researchers here -- but, from our standpoint, we simply provide the enabling technology for classification.

What people do with that classification beyond my research is really decided at pay grades above mine: policymakers, airport security personnel, the Transportation Security Administration, people at the U.S. attorney's office. As for all these issues about profiling and things of that nature... you know, I have my personal opinions. But from a researcher's standpoint, I am sort of agnostic as to what happens at that point.

Ms. Burton: Okay. And then I have one more question.
If a person is classified as a potential terrorist, are they red flagged every time they go to an airport afterwards?

Dr. Donovan: I think so. It depends, again, on what you mean by getting red flagged.
There are terrorist watch lists out there that are maintained by, again, the U.S. attorney's office --

Ms. Burton: Okay, thank you.

Dr. Donovan: -- and some of the other three-lettered agencies out there.

Susan: We have a question from Cal Biesecker, "Defense Daily."

Mr. Biesecker: This is a follow-up for Dr. Ross. Hey, just on the... it kind of gets back to that question that "Newsday" asked about how soon Dr. Frank's technology might be available. On the part of your... when you look at the behavioral type of biometrics -- I think that is how you described it -- height, gait, that kind of stuff -- you know, where are we in the development and deployment of that kind of technology? That seems a lot further off than, obviously, the hard stuff that is already being employed today -- the physical, I should say.

Dr. Ross: This is Arun. Yes, that is true. The primary biometric identifiers, like face and fingerprints, have already made quite some inroads into airport security. But the traits that you are referring to, like height, are part of surveillance, where you have a camera set up. It could be in an airport aisle, or it could be in a subway. And here you are doing what we call video metrology -- the ability to measure humans using these surveillance cameras.

There is some research in this direction. However, the difficulty of integrating it and making it interoperable with the current databases is one of the reasons this technology has not yet made its way into airport security. When we are recording the biographic and biometric traits of an individual, of course there are these other traits that are recorded, too, like the height, et cetera. But having a video surveillance camera automatically interface with this other database and then make some decisions about the identity of an individual is still some ways off.

Mr. Biesecker: So you are able... I want to make sure I understand the answer.
You are able now to determine, say, for example, certain behavioral traits, whether it is... well, let's say height or how a person walks. But it is that integration challenge with the databases that... I know there are those watch lists and what have you now, and how they are accessed -- those are challenging in and of themselves. You are saying, adding this new data to those databases, that is the toughest hurdle?

Dr. Ross: Well, even measuring them through video metrology is still a challenging problem. Although, as I said, there has been some research done. In fact, several universities are conducting research on video metrology. So we are able to measure the height of an individual, which is not particularly a behavioral trait. It is just still a physical trait. But the gait -- the way a person walks -- would constitute a behavioral trait. But now the current databases -- for example, the watch list databases that are being maintained -- do not necessarily have information pertaining to the gait of an individual. And so this notion of interoperability -- how do you take these various databases and link them together and identify the common traits that are available in a commensurate form -- has still not been established.
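The height measurement Dr. Ross calls video metrology reduces, in the simplest calibrated case, to pinhole-camera geometry: a person's height in pixels, the camera's focal length in pixels, and the distance to the person give the real-world height. The numbers below are a hypothetical calibration, not data from any deployed system.

```python
def estimate_height(pixel_height, distance_m, focal_px):
    """Pinhole-camera model: real size = image size * distance / focal
    length, with the focal length expressed in pixels. Assumes the
    person stands upright, roughly parallel to the image plane, and
    that the camera has been calibrated."""
    return pixel_height * distance_m / focal_px

# Hypothetical calibration: 1000-pixel focal length, subject 6 m away,
# appearing 300 pixels tall in the frame -- a 1.8 m person.
height_m = estimate_height(pixel_height=300, distance_m=6.0, focal_px=1000)
```

In practice the distance itself must be estimated (from floor geometry or a second camera), which is part of why the measurement problem Dr. Ross describes remains challenging.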

Mr. Biesecker: Okay. Appreciate that.

Dr. Donovan: This is Dr. Donovan in Butte. I'd say, on part of the question of how quickly some of that stuff can be deployed... some of the stuff that we use, particularly the Self-Training Neural Networks, this challenge of how to integrate that into the overall classification... you know, this can be done very quickly.
It is just a matter of, you know, policy decisions on directions that we want to take in the research on this stuff.

There have certainly been laboratory experiments done. For example, at the Butte Airport, we are able to get very good resolution on reasonably priced camera systems, to where I am confident that I could measure the inseam of everybody coming into the airport on relatively cheap cameras. If you think about the level of resolution you can get on something like that, I think that we're actually much closer to implementing some of these things than is currently recognized -- I am searching for the right words here. But there is just not a research direction being given, and there is not a big push being made in those directions.

Right now, the big push is, you know, we need to get more explosive-detection systems, and we need to get better at detecting nitrates -- or nitrites, rather -- and things like that coming through airports. And that is where a lot of the research funding is really being focused. And I think that there is an opportunity here, particularly when you start thinking about this notion of screening for intent, to get quite a bit more bang for our buck by looking at relatively cheap systems that focus on the things that Dr. Frank is talking about, and that Arun is talking about with regard to biometrics, in terms of improving classification and getting more out of the money that we're spending on security.

Mr. Biesecker: All right.

Mr. McCourt: We have about five minutes left, I think. Are there other callers, Susan?

Susan: I show no callers at this time.

Mr. McCourt: Well, I have a question. It relates to this implementation business. If you go to the airport now, it looks pretty much like it did four or five years ago, just maybe a little bit worse in terms of, you know, having to jump through hoops, depending on the alert level and so forth. Is there any idea that some of this could return us to the good old days, when you could go to the airport and didn't have to anticipate huge delays, or problems with unexpectedly including a nail file in your purse or bag, or something like that?

Dr. Frank: Yeah. This is Mark Frank in Buffalo. I guess I can envision that. I just don't know how far off it is going to be. And it is also going to depend a little bit on what the next level is. I mean, look at the most recent case in the U.K., where, by combining a couple of semi-innocuous substances -- you know, 30 percent hydrogen peroxide is not an innocuous substance -- with a few other things, people are learning clever ways to get through the system. And with each layer you develop, there is something else.

The key for us as behavioral scientists, of course, is that we have been focusing on the fact that central to all of those things -- and this was alluded to, or mentioned directly, by Rick Donovan earlier -- there is a human here with a particular intent. And that intent is to harm others. And so, as long as we can get a handle on that in some form... you know, this will certainly help us.
But I don't know if it will ever be perfect. And then it just comes down to the public, through their congresspeople, saying how much they really want.

I know I have certainly read things about light x-rays that you can walk through, but then somebody sees you nude. And the thing is, people are not willing to accept that as a screening method. Okay. So what will they accept? You know, as scientists, we just try to do the best we can and say, here is what we think the technology is capable of. But, ultimately, it is going to be the public who makes that decision.

Mr. McCourt: In relation to that, you know, there are two sides to this. One is being more safe, and the other is being effective -- or more convenient and more effective, I should say. On effectiveness, is there an equal vulnerability in the processes and systems you are talking about, in terms of, like, the arms race?

I mean, if you start outlawing box cutters, then people bring on explosive shoes. You look for people's shoes, and they start bringing on hydrogen peroxide and so forth. Is this kind of technology susceptible to that kind of advance on the other side of the bad guys?

Dr. Frank: Mark Frank in Buffalo. From a behavioral point of view, we don't know the answer to that question yet. And I know there will be some processes in place to look at that. And I know we have done some of that in our work, and so have some others. So, until we know the answer to that question, we just don't know. But, given the track record, that seems to be the case.

Dr. Donovan: This is Rick Donovan in Butte. Without making, you know, any grand predictions on this stuff, some of the stuff that we're using, particularly, again, the stuff that we get from working with Imagination Engines, actually has the ability... these approaches to artificial intelligence actually have the ability to do what we would call imagine -- hence, the name Imagination Engines.

Once these systems are trained up, there are some little tricks that you can apply to these neural networks to get them to say, okay, this is what you have been trained to look for. These are the constraints, let's say, of the system that you are able to classify.
Now what I would do with this system is turn it loose. I would turn off the inputs and kind of have the system dream up, if you will, other possible scenarios that are still constrained by the physics of an airport, and say, here is another possible scenario that would be potentially dangerous. And this stuff has been done -- in new materials discovery, for example.

You can train a neural network to predict the properties of composite materials. And what you can then do with that same neural network is say, okay, here are some properties that I would like for a composite material.

Please tell me what kind of constituents I would add together to give me these types of properties. And we can do the same thing at the behavioral level and the multisensor level. And so I would say that there is some really solid potential to have a system that not only improves the classification in terms of what is normal behavior, but also has the ability to predict what sorts of abnormal behaviors might present themselves that the policymakers may not have already thought of.
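The inverse-design trick Dr. Donovan describes -- asking a trained forward model what inputs would produce a desired output -- can be sketched with finite-difference gradient descent on the model's input. The "model" below is a simple stand-in function, not a real trained network, and all constants are illustrative.

```python
def model(x):
    """Stand-in for a trained forward model mapping a material
    parameter (e.g. a fiber fraction) to a predicted property
    (e.g. stiffness). A real system would use a trained network."""
    return 10.0 * x + 2.0 * x * x

def invert(target, x=0.5, lr=0.01, steps=2000, eps=1e-6):
    """Run the model 'in reverse': gradient-descend on the INPUT so
    the predicted property matches the target. Uses a central
    finite difference to estimate d(model)/dx."""
    for _ in range(steps):
        err = model(x) - target
        grad = (model(x + eps) - model(x - eps)) / (2 * eps)
        x -= lr * err * grad  # gradient of 0.5 * err**2 with respect to x
    return x

# Ask: what input would yield a predicted property of 12.0?
x_star = invert(target=12.0)
```

The same structure -- freeze the trained weights, optimize over the inputs -- is what lets a classifier "dream up" input patterns it would label dangerous, rather than waiting to observe them.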

Right now, you know, it's like -- and this is a commonly heard phrase -- we're always fighting the last war. We're always looking for the last bad thing that happened, which is unfortunate, because, as human beings, we have an enormous capability and capacity to literally imagine things that we have never seen before. And we're getting very close to having the computational power, and the resolution on sensors and sensor networks, to where we can start achieving some things that are akin to imagination, and start thinking ahead of the bad guys and getting a little bit ahead of them.

Mr. McCourt: Dr. Ross, did you have anything to say?

Dr. Ross: Yes. From a biometric perspective, I mean, the goal is to establish identity -- that is, to determine whether this person is who he or she claims to be. And from that perspective alone, I think, you know, we are heading in the right direction, as far as airport security goes. But then Mark and Rick deal with the other aspect of it, which is, once you establish the person's identity, you would also like to know what he or she is thinking about, or what he or she intends to do. And so combining these two into a single framework, and then positing it as a solution, is the most challenging problem right now. And once this integration takes place, it will be wonderful to see biometrics actually being deployed in situations where the intent can also be automatically elicited.

Mr. McCourt: Okay, great. Well, Susan, if nobody is waiting -- is that the case?

Susan: No one is waiting, sir.

Mr. McCourt: Okay. I guess we'll wrap it up then. And thanks to Drs. Donovan, Frank and Ross.

And if anyone is still there, if you want a copy of a transcript of this, it will be available eventually. And you can e-mail Josh Chamot at NSF. That is jchamot@nsf.gov. And also, the recording of the entire session will be on the Internet eventually as well.

So, thanks for participating. And we would like to ask the people who called in to get back to us with feedback and let us know whether it was interesting and useful. Or, if you have further questions, you can also call Josh Chamot here at NSF.

Thanks very much to all three of you.

Dr. Donovan: Thank you.

Dr. Ross: Thank you.

Dr. Frank: Yeah, I really enjoyed it. It was kind of fun to talk about this.

Mr. McCourt: Great. Okay. Well, thanks a lot.

Dr. Donovan: All right, bye.
