Subject Review: AI-Driven Security in Quantum Machine Learning: Vulnerabilities, Threats, and Defenses
DOI: https://doi.org/10.31695/IJERAT.2025.4.1

Keywords: Adversarial Attacks in QML, Quantum Security Frameworks, Quantum Error Correction (QEC), Quantum Machine Learning (QML), Post-Quantum Cryptography (PQC), Quantum Homomorphic Encryption (QHE)

Abstract
Quantum Machine Learning (QML) has advanced significantly through the combination of Quantum Computing (QC) with Artificial Intelligence (AI), unlocking computational advantages over conventional methods. This synergy, however, also introduces new security flaws such as adversarial attacks, quantum noise manipulation, and cryptographic weaknesses. This work offers a thorough investigation of QML security, examining its distinctive vulnerabilities arising from hardware-induced faults, quantum variational circuits, and quantum data encoding. We systematically investigate adversarial attack techniques that exploit the probabilistic character of quantum states, including side-channel attacks, quantum noise injection, and algorithmic perturbations. We also assess innovative defensive strategies such as differential privacy, quantum adversarial training, quantum error correction (QEC), and cryptographic techniques including Quantum Homomorphic Encryption (QHE). We offer a hybrid AI-driven method for protecting QML models against developing threats by linking artificial intelligence and quantum security frameworks. This work emphasizes the importance of developing quantum-safe AI systems and consistent adversarial robustness standards. The results help to advance AI-enhanced quantum security, guaranteeing that future QML applications are efficient, robust, and resistant to adversarial attack.
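To make the adversarial-attack setting concrete, the toy sketch below simulates a single-qubit variational classifier (angle-encoded input, one trainable rotation, a ⟨Z⟩ readout) and applies an FGSM-style perturbation to the classical input. This is purely illustrative and is not taken from the paper: all function names, the circuit layout, and the finite-difference gradient are our own assumptions for a minimal demonstration.

```python
import numpy as np

# Toy single-qubit "variational classifier": encode feature x with RY(x),
# apply a trainable RY(theta), then read out the expectation value of Z.
def ry(angle):
    c, s = np.cos(angle / 2), np.sin(angle / 2)
    return np.array([[c, -s], [s, c]])

def expval_z(x, theta):
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])  # start in |0>
    probs = np.abs(state) ** 2
    return probs[0] - probs[1]                        # <Z> = P(0) - P(1)

def fgsm_perturb(x, theta, label, eps=0.1):
    """FGSM-style attack on the classical input of the encoding circuit:
    step the input in the sign of the input-gradient of the loss."""
    h = 1e-5  # finite-difference gradient of loss = -label * <Z>
    grad = -label * (expval_z(x + h, theta) - expval_z(x - h, theta)) / (2 * h)
    return x + eps * np.sign(grad)

theta = 0.3
x, label = 0.4, +1                       # sample classified toward |0> (<Z> > 0)
x_adv = fgsm_perturb(x, theta, label)
print(expval_z(x_adv, theta) < expval_z(x, theta))  # prints True: margin shrinks
```

Even this one-parameter circuit shows the review's core point: because the encoded quantum state varies smoothly with the classical input, a small, gradient-aligned input perturbation reliably reduces the classification margin, which is what defenses such as quantum adversarial training aim to counteract.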
License
Copyright (c) 2025 Farah Neamah Abbas, Mohanad Ridha Ghanim, Rafal Naser Saleh

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.