Practical Encrypted Machine Learning (PREMAL)
Whenever you enter a prompt into one of the big AI services, the company that owns the machine learning models can see your input. This rules out using such services when your data is confidential and must not leave your own organisation. The PREMAL project aims to solve this problem with encryption, allowing owners of sensitive data to make use of external AI services while retaining full control over their data.
A class of cryptographic algorithms, known as homomorphic encryption, has been designed to enable computations on encrypted data. A data owner can use such a scheme to encrypt their input before sending it to the company hosting the AI solution. The decryption key is never shared with anyone, so the AI company can learn nothing about the data, yet is still able to perform the machine learning computations on it.
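To make the idea concrete, here is a minimal sketch of computing on encrypted data using the Paillier cryptosystem, a classic additively homomorphic scheme: multiplying two ciphertexts yields an encryption of the sum of their plaintexts. The key sizes are toy-sized for readability, and this particular scheme is an illustrative assumption, not necessarily what the PREMAL project uses.

```python
import random
from math import gcd

# Toy Paillier cryptosystem: additively homomorphic, i.e. one can add
# plaintexts by multiplying ciphertexts. Demo-sized primes only -- a real
# deployment needs primes of ~1024 bits or more.
p, q = 293, 433
n = p * q
n2 = n * n
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1)
g = n + 1                                      # standard generator choice
mu = pow(lam, -1, n)   # valid because L(g^lam mod n^2) = lam mod n for g = n+1

def encrypt(m: int) -> int:
    """Encrypt m < n under the public key (n, g) with fresh randomness r."""
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Decrypt c with the private key (lam, mu)."""
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

# Homomorphic property: the product of two ciphertexts decrypts to the
# sum of the two plaintexts, without the server ever seeing either value.
c1, c2 = encrypt(17), encrypt(25)
assert decrypt((c1 * c2) % n2) == 17 + 25
```

Paillier only supports additions (and multiplication by plaintext constants); fully homomorphic schemes such as BGV, CKKS, or TFHE additionally support multiplications between ciphertexts, which is what makes general machine learning computations possible, at a much higher cost.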
Methods for encrypted machine learning are known, but homomorphic encryption incurs a high computational cost. The project aims to bring this cost down without sacrificing the quality of the responses from the AI. Homomorphic encryption can be used both for training models on private data and for running inference on already trained machine learning models. The project's output will be a thorough understanding of the tradeoff between cost and accuracy in encrypted machine learning and privacy-preserving AI, with a focus on practical use cases.
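The inference setting above can be sketched with an additively homomorphic scheme: the client encrypts its feature vector, and the server evaluates a pre-trained linear model on the ciphertexts using only its plaintext weights. This self-contained toy (Paillier with demo-sized keys, made-up weights and features) is a hypothetical illustration of the workflow, not the project's actual method or model.

```python
import random
from math import gcd

# Demo-sized Paillier keys; real keys use primes of ~1024 bits or more.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)
mu = pow(lam, -1, n)

def enc(m: int) -> int:
    r = random.randrange(1, n)
    while gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2

def dec(c: int) -> int:
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Client side: encrypt the features; the server never sees them in the clear.
features = [3, 1, 4]                 # illustrative private input
enc_x = [enc(x) for x in features]

# Server side: evaluate the linear model on ciphertexts. Raising a
# ciphertext to a plaintext weight w gives Enc(w*x), and multiplying
# ciphertexts adds the underlying plaintexts -- together, a dot product.
weights = [2, 5, 7]                  # illustrative pre-trained weights
enc_score = 1
for c, w in zip(enc_x, weights):
    enc_score = (enc_score * pow(c, w, n2)) % n2

# Client side: only the holder of the private key can read the prediction.
assert dec(enc_score) == sum(w * x for w, x in zip(weights, features))
```

Linear models are the easy case; deeper models need ciphertext-by-ciphertext multiplications and non-linear activations, which is where the computational cost that the project targets comes from.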
Funding
This project is funded through the Research Council of Norway's funding scheme “Researcher projects for ICT Renewal and Development” (forskningsradet.no).
