AI + Quantum Risk
- Katarzyna Celińska

ISACA recently published “The Promise and Peril of the AI Revolution: Managing Risk”, a perspective on how AI risk is moving from “future concern” to operational reality.
What caught my attention most is the explicit link between AI and the next risk horizon: quantum.
ISACA emphasizes that AI has shifted from “an experimental tool” into enterprise infrastructure embedded across platforms and pipelines, creating new failure modes faster than traditional controls can detect and contain.

AI risk themes
➡️ societal risk (mis/disinformation and deepfakes),
➡️ IP and ownership risk,
➡️ cybersecurity and resiliency impacts,
➡️ weak internal permission structures,
➡️ skill gaps,
➡️ unintended use,
➡️ data integrity/hallucinations,
➡️ and liability.
From a cyber and governance lens, two points stand out:
➡️ Agentic/autonomous AI compresses response time
ISACA notes the emergence of agentic AI threats, where systems can independently plan and execute multi-step cyber operations, compressing detection and response timelines beyond what classic security operations can handle.
➡️ AI amplifies permission mistakes
Misconfigured access isn’t new, but AI can rapidly infer, summarize, and aggregate sensitive information, expanding the blast radius of permission errors. ISACA points toward identity governance and least privilege as primary controls.
Quantum and AI
AI’s scale and autonomy connect with quantum’s impact on cryptographic trust:
➡️ Quantum computing threatens widely used public-key cryptography.
ISACA argues AI intensifies PQC urgency for three reasons:
➡️ Longevity of data used by AI systems,
➡️ Automation at scale,
➡️ Expanded attack surface.
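All three reasons point to the same first step: knowing where quantum-vulnerable cryptography lives in the estate. As a rough illustration (the asset names and algorithm lists below are hypothetical examples, not taken from the ISACA paper), a readiness team might start with a simple triage of a crypto inventory:

```python
# Hypothetical sketch: triage a crypto-asset inventory into
# "migrate to PQC" vs. "keep for now" buckets.
# Algorithm groupings reflect common guidance (Shor's algorithm breaks
# RSA/ECC/DH; symmetric crypto and NIST's ML-KEM/ML-DSA are considered
# quantum-resistant), but the lists here are illustrative, not exhaustive.

QUANTUM_VULNERABLE = {"RSA", "ECDSA", "ECDH", "DSA", "DH"}
CONSIDERED_SAFE = {"ML-KEM", "ML-DSA", "SLH-DSA", "AES-256", "SHA-384"}

def triage(assets: dict[str, str]) -> tuple[list[str], list[str]]:
    """Split {asset_name: algorithm} into (migrate, keep) lists."""
    migrate, keep = [], []
    for name, algo in assets.items():
        (migrate if algo in QUANTUM_VULNERABLE else keep).append(name)
    return sorted(migrate), sorted(keep)

if __name__ == "__main__":
    # Hypothetical inventory entries for illustration only.
    inventory = {
        "vpn-gateway": "RSA",
        "api-tls": "ECDH",
        "backup-archive": "AES-256",
    }
    migrate, keep = triage(inventory)
    print("Migrate:", migrate)
    print("Keep:", keep)
```

A real inventory would of course be discovered automatically (certificates, TLS configs, code scans) rather than hand-maintained, but even a crude bucket list like this gives the readiness team a concrete backlog to prioritize.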
ISACA suggests establishing a Quantum–AI readiness team with representation from cybersecurity, enterprise architecture, risk, legal/compliance, and AI program leadership.
An interesting perspective, especially in the context of quantum risks, which AI may further multiply through automation (agentic AI as an execution layer). AI is already mainstream, but I’m seeing more and more publications and government documents pushing the PQC topic too, and that’s a good thing: it increases the chance that more organizations will pick it up and start addressing post-quantum risks.
Author: Sebastian Burgemejster


