Security of AI Systems

Machine learning algorithms are typically developed for stationary environments, but intelligent and adaptive adversaries can carefully craft input data to bypass AI-based cybersecurity systems. Direct application of machine learning algorithms therefore provides only limited benefit in the cybersecurity domain. In adversarial machine learning, we first identify potential vulnerabilities of machine learning algorithms during learning and classification and build attacks that exploit the detected vulnerabilities (anti-forensics). Afterward, we build countermeasures that improve the security of machine learning algorithms (anti-anti-forensics).
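
As an illustration of how such an attack can be built, the following minimal Python sketch implements the fast gradient sign method (FGSM), a classic gradient-based evasion attack. It assumes a trained, differentiable PyTorch classifier `model` and a normalized input batch `x` with labels `y`; it is an outline under those assumptions, not a production attack tool.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    # Copy the input and track gradients with respect to it.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that increases the loss.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Keep the perturbed input inside the valid pixel range [0, 1].
    return torch.clamp(x_adv, 0.0, 1.0).detach()

A corresponding countermeasure would, for instance, retrain the model on such perturbed inputs (adversarial training).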

o   AI Vulnerability Assessment & Penetration Testing: Assessing the robustness and security of cloud-based or on-premises image, video, text, and audio recognition AI systems is a growing challenge. We aim to test the security properties of AI systems by building a range of query-based, transfer-learning-based, and spatial transformation attacks.
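
For example, a spatial transformation attack against a black-box image recognition system can be sketched as a simple grid search over rotations and translations of the input. In the hypothetical Python sketch below, `query_model` is an assumed wrapper around the system under test (e.g., a cloud API) that returns a predicted label, and `image` is a 2D grayscale NumPy array.

import itertools
from scipy.ndimage import rotate, shift

def spatial_attack(query_model, image, true_label,
                   angles=range(-30, 31, 5), offsets=range(-3, 4)):
    # Exhaustively query small rotations and translations of the input.
    for angle, dx, dy in itertools.product(angles, offsets, offsets):
        candidate = shift(rotate(image, angle, reshape=False), (dx, dy))
        if query_model(candidate) != true_label:
            return candidate, (angle, dx, dy)  # evasion found
    return None, None  # no evasive transform on this grid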

o   AI Privacy and Compliance: Machine learning technologies are widely used in industries with strict privacy requirements, such as healthcare, digital banking, wearables, social media, and insurance. These AI engines are trained on private information and regularly collect personally identifiable data. When machine learning algorithms are trained on private data, the resulting engine may leak information about that data through its behavior (i.e., a black-box inference attack) or its architecture (i.e., a white-box attack), which is a significant challenge to address.
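
A simple illustration of a black-box inference attack is confidence-based membership inference: records on which a deployed model is unusually confident are more likely to have been part of its training data. The Python sketch below assumes a hypothetical query interface `predict_proba` that returns the model's class probabilities; the threshold would have to be tuned per model.

import numpy as np

def membership_score(predict_proba, record):
    # Top-class confidence of the model on a single record.
    return float(np.max(predict_proba(record)))

def infer_membership(predict_proba, records, threshold=0.95):
    # Flag records whose confidence exceeds a tuned threshold as likely
    # members of the training set (a coarse black-box inference test).
    return [r for r in records if membership_score(predict_proba, r) > threshold]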

o   AI Forensics: Although AI-based systems and devices such as AI speakers are relatively new, there are already many cases requiring digital investigation of AI-based systems. These systems collect and store large amounts of data, ranging from people's voices to network traffic and even users' behavior. AI-based devices (such as the Amazon Echo) found at a crime scene can be an important source of evidence. Moreover, with the wide use of AI in safety-critical systems (such as autonomous vehicles), identifying the source of a failure (e.g., the cause of an accident) may require an in-depth investigation of the AI engine. Identifying remnants of attackers' activities and ensuring that AI-based ecosystems are forensically ready are challenges we are addressing in our research.
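
As one example of forensic readiness, an investigator may want a tamper-evident manifest of an AI system's artifacts. The hypothetical Python sketch below records SHA-256 digests of model files and logs so that later modifications or attacker activity can be detected; the file patterns and paths are illustrative only.

import hashlib
import json
from pathlib import Path

def hash_artifacts(root, patterns=("*.onnx", "*.pt", "*.log")):
    # Map every matching artifact under `root` to its SHA-256 digest.
    manifest = {}
    for pattern in patterns:
        for path in Path(root).rglob(pattern):
            manifest[str(path)] = hashlib.sha256(path.read_bytes()).hexdigest()
    return manifest

if __name__ == "__main__":
    # Print the manifest so it can be stored alongside other evidence.
    print(json.dumps(hash_artifacts("."), indent=2))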