SoK: Can Fully Homomorphic Encryption Support General AI Computation? A Functional and Cost Analysis
arXiv:2504.11604v2 Announce Type: replace
Abstract: Artificial intelligence (AI) increasingly powers sensitive applications in domains such as healthcare and finance, relying on both linear operations (e.g., matrix multiplications in large language models) and non-linear operations (e.g., sorting in retrieval-augmented generation). Fully homomorphic encryption (FHE) has emerged as a promising tool for privacy-preserving computation, but it remains unclear whether existing methods can support the full spectrum of AI workloads that combine these operations.
In this SoK, we ask: Can FHE support general AI computation? We provide both a functional analysis and a cost analysis. First, we categorize ten distinct FHE approaches and evaluate their ability to support general computation. We then identify three promising candidates and benchmark workloads that mix linear and non-linear operations across different bit lengths and SIMD parallelization settings. Finally, we evaluate five real-world, privacy-sensitive AI applications that instantiate these workloads. Our results quantify the costs of achieving general computation in FHE and offer practical guidance on selecting FHE methods that best fit specific AI application requirements. Our code is available at https://github.com/UCF-ML-Research/FHE-AI-Generality.