AI Security Hinges On Trust, Research Warns
News Crackers | Science & Technology
By TOSI ORE
ARTIFICIAL intelligence is increasingly running vital U.S. systems—from power grids to hospitals—but its growth also raises risks of cyberattacks and data breaches, according to new research led by Adetomiwa Adesokan, a doctoral scholar at the University of Nevada, Reno.
In a peer-reviewed study published in the Engineering Science & Technology Journal, Adesokan and colleagues from five universities tested AI models on cybersecurity data. Their Random Forest and Gradient Boosting algorithms achieved flawless accuracy in detecting malicious port-scanning activity, a common hacking tactic.
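The study's dataset and feature set are not reproduced here, so the sketch below is purely illustrative: it trains the two ensemble methods the article names, Random Forest and Gradient Boosting, on synthetic flow features (distinct destination ports, inter-arrival time, packet counts) chosen as plausible stand-ins for port-scan indicators, using scikit-learn.

```python
# Hedged sketch: synthetic data only; the study's real features and
# dataset are assumptions here, not taken from the paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000

# Benign flows: few distinct ports, longer gaps between packets.
benign = np.column_stack([
    rng.poisson(5, n),         # distinct destination ports contacted
    rng.exponential(1.0, n),   # mean inter-arrival time (seconds)
    rng.poisson(50, n),        # packets per flow
])
# Port scans: many distinct ports probed in rapid succession.
scan = np.column_stack([
    rng.poisson(200, n),
    rng.exponential(0.01, n),
    rng.poisson(300, n),
])

X = np.vstack([benign, scan])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = port scan

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

accs = {}
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    accs[type(model).__name__] = accuracy_score(y_te, model.predict(X_te))
    print(type(model).__name__, accs[type(model).__name__])
```

On cleanly separable synthetic data like this, both ensembles score at or near 100%, which illustrates the caution in the article: perfect benchmark accuracy can reflect easy data rather than real-world readiness.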
The findings show AI’s promise in defending critical infrastructure, but Adesokan cautions that real-world complexity demands rigorous testing, robust privacy safeguards, and human oversight.
He advocates a “secure-by-design” approach—encrypting data, ensuring algorithm transparency, and making automated decisions auditable—while stressing the need for national standards, public awareness, and collaboration between government and industry.
“Keeping the lights on in the AI era requires more than smart code,” Adesokan said. “It demands a partnership of technology, policy, and public trust.”