Comment: The TSA faces ethical limitations in the use of AI. But work to improve technology must persist

Artificial intelligence has become a disruptive force in society. Terms such as machine learning, deep learning, and neural networks have become commonplace in mainstream media, sparking visions of innovation that have the potential to change our lives.

At its core, AI attempts to mimic the capabilities of the human brain. From computer vision, which focuses on how computers understand the visual world, to natural language processing, which focuses on how computers recognize and interpret written text, the list of possibilities for using AI continues to grow.

Take, for example, aviation security. Many people will pass through airport security checkpoints when traveling during the holiday season; on peak holiday days, the Transportation Security Administration will process up to 2.5 million of them.

The TSA’s responsibility is to protect the country’s air system from malicious activity. Airport security involves many layers. Screening, for example, uses various technologies to serve multiple purposes, such as validating a person’s identity and detecting any threatening items a traveler may attempt to bring onto a flight.

The output of screening devices must be read and interpreted by TSA agents, and humans make mistakes. As such, the TSA is working to use AI to improve the detection process and reduce the impact of human error.

However, the promise of AI in airport security is broader.

Using AI to determine intent from behavior, appearance, and speech could have enormous practical impact and benefits.

AI systems that could measure human intent would simplify airport security operations, effectively reducing the need for physical threat-item detection.

The TSA already does something like this by offering expedited screening lanes to people enrolled in TSA PreCheck. An AI system that could assess the intent of all travelers would be a quantum leap forward in transforming airport security operations and procedures. With such a system, screening would be limited to a small subset of travelers, with most people passing through security checkpoints with little or no physical screening.

Designing and implementing such an AI system for aviation security presents several challenges. The first is creating the models and algorithms that process the data and produce the required information. Another is how AI systems make decisions and the inevitable false alarms and false clearances that come with them. Even the most competent and knowledgeable humans make such mistakes. No AI system will be completely error-proof, even though the source of its errors lies in the design and implementation of the models and algorithms.

A third issue is privacy. If an AI system can capture traveler intent, is that a line too far to cross? Would it be considered an invasion of privacy, even with a positive outcome? That is why the TSA PreCheck program, which requires participants to pass a background check to qualify, is voluntary rather than mandatory.

Perhaps most importantly, the ethics surrounding the design of AI systems need to be addressed. How an AI system incorporates ethics into its creation and implementation affects how it is received, perceived, and embraced.

This challenge presents perhaps the greatest headwind for AI progress in our country, and it could be one reason other countries with different ethical standards overtake the United States in this area.

Investment in AI continues around the world, and the potential competitive advantage it offers is enormous. Yet the transition from research lab to practice will remain bumpy and uncertain, ensuring that progress is measured, methodical and slow. Even so, the United States must persist in its pursuit, given global competition and the need to keep a foothold in the AI arms race.

We are unlikely to see an AI system that measures human intent in place at airports anytime soon. However, the idea that it could be possible is what makes AI a disruptor and a game-changer that demands everyone’s attention.

Indeed, the genie of AI is out of the bottle, and where it takes us is a story that continues to be written.

Sheldon H. Jacobson is a professor of computer science at the University of Illinois at Urbana-Champaign. He uses his expertise in data and risk-based decision-making to assess and inform public policy. He has studied aviation security for more than 25 years.
