White-box Watermarking-based Model Protection



Background


In modern AI environments, deep learning models have become valuable digital assets, yet their parameters are vulnerable to tampering and malicious manipulation. Once a model is compromised, restoring its integrity usually requires costly and time-consuming retraining. Existing protection methods focus mainly on detecting tampering or embedding fragile watermarks, and they lack effective mechanisms for recovering the original parameters. Both large-scale and subtle perturbations pose additional challenges, since traditional watermarking designs often fail under such conditions. There is therefore a growing need for a unified and verifiable recovery framework that can both locate and restore tampered model parameters efficiently, even when a significant portion of the parameters has been altered.
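
To make the idea concrete, the sketch below illustrates in Python/NumPy how a fragile watermark can support tamper localization and coarse self-recovery. It is not the GHCW algorithm from the paper cited below; it is a minimal toy under stated assumptions: a CRC-32 check is hidden in the least-significant mantissa bits of each 32-parameter block, and flagged blocks are restored from a low-precision reference copy, which stands in for the compressed recovery information a real self-recovery scheme would carry. The block size, function names, and the float16 reference are all illustrative choices, not details from the paper.

```python
# Hypothetical illustration only -- NOT the GHCW scheme from the paper.
# Idea: clear the least-significant mantissa bit (LSB) of every float32
# parameter, hash each block of cleaned parameters, and hide the hash
# bits in those LSBs. A verifier recomputes the hashes to locate
# tampered blocks, then restores them from a reference copy.
import zlib
import numpy as np

BLOCK = 32  # parameters per block; matches the 32 bits of a CRC-32 (assumed)

def _bits(x):
    """View float32 parameters as their raw uint32 bit patterns."""
    return x.astype(np.float32).view(np.uint32)

def _crc_bits(clean_blk):
    """CRC-32 of an LSB-cleared block, split into 32 one-bit values."""
    crc = zlib.crc32(clean_blk.tobytes())
    return np.array([(crc >> k) & 1 for k in range(BLOCK)], dtype=np.uint32)

def embed_watermark(params):
    """Return a watermarked copy (each LSB carries one CRC bit)."""
    bits = _bits(params)                   # astype() already copies
    whole = len(bits) - len(bits) % BLOCK  # ignore a partial tail block
    for s in range(0, whole, BLOCK):
        blk = bits[s:s + BLOCK]
        blk &= ~np.uint32(1)               # clear LSBs before hashing
        blk |= _crc_bits(blk)              # spread the CRC over the LSBs
    return bits.view(np.float32)

def locate_tampered_blocks(params):
    """Indices of blocks whose stored CRC no longer matches."""
    bits = _bits(params)
    whole = len(bits) - len(bits) % BLOCK
    bad = []
    for i, s in enumerate(range(0, whole, BLOCK)):
        blk = bits[s:s + BLOCK].copy()
        stored = blk & np.uint32(1)        # extract the embedded CRC bits
        blk &= ~np.uint32(1)
        if not np.array_equal(_crc_bits(blk), stored):
            bad.append(i)
    return bad

def recover(params, reference, bad):
    """Overwrite tampered blocks from a low-precision reference copy
    (a stand-in for the compressed reference a real scheme would embed)."""
    fixed = params.copy()
    for i in bad:
        s = i * BLOCK
        fixed[s:s + BLOCK] = reference[s:s + BLOCK].astype(np.float32)
    return fixed

# Usage: watermark, tamper with one parameter, localize, restore.
w = np.random.randn(4096).astype(np.float32)
ref = w.astype(np.float16)             # assumed externally kept reference
wm = embed_watermark(w)
wm[70] += 0.5                          # simulate tampering inside block 2
print(locate_tampered_blocks(wm))      # -> [2]
restored = recover(wm, ref, locate_tampered_blocks(wm))
```

Clearing one mantissa bit per parameter costs at most one unit in the last place of precision, which is why such watermarks can remain high-fidelity; judging by its title, a scheme like GHCW goes further by embedding compression-based recovery information in the model itself, so restoration needs no external reference copy.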

Challenges


Demo




GHCW: A novel Guarded High-fidelity Compression-based Watermarking scheme for AI model protection and self-recovery


Tong Liu, Xiaochen Yuan, Wei Ke, Chan-Tong Lam, Sio-Kei Im, Xiuli Bi

Applied Soft Computing, vol. 183, 2025, p. 113576

