GHCW: A novel Guarded High-fidelity Compression-based Watermarking scheme for AI model protection and self-recovery


Journal article


Tong Liu, Xiaochen Yuan, Wei Ke, Chan-Tong Lam, Sio-Kei Im, Xiuli Bi
Applied Soft Computing, vol. 183, 2025, p. 113576


Cite

APA
Liu, T., Yuan, X., Ke, W., Lam, C.-T., Im, S.-K., & Bi, X. (2025). GHCW: A novel Guarded High-fidelity Compression-based Watermarking scheme for AI model protection and self-recovery. Applied Soft Computing, 183, 113576. https://doi.org/10.1016/j.asoc.2025.113576


Chicago/Turabian
Liu, Tong, Xiaochen Yuan, Wei Ke, Chan-Tong Lam, Sio-Kei Im, and Xiuli Bi. “GHCW: A Novel Guarded High-Fidelity Compression-Based Watermarking Scheme for AI Model Protection and Self-Recovery.” Applied Soft Computing 183 (2025): 113576.


MLA
Liu, Tong, et al. “GHCW: A Novel Guarded High-Fidelity Compression-Based Watermarking Scheme for AI Model Protection and Self-Recovery.” Applied Soft Computing, vol. 183, 2025, p. 113576, doi:10.1016/j.asoc.2025.113576.


BibTeX

@article{liu2025a,
  title = {GHCW: A novel Guarded High-fidelity Compression-based Watermarking scheme for AI model protection and self-recovery},
  year = {2025},
  journal = {Applied Soft Computing},
  pages = {113576},
  volume = {183},
  doi = {10.1016/j.asoc.2025.113576},
  author = {Liu, Tong and Yuan, Xiaochen and Ke, Wei and Lam, Chan-Tong and Im, Sio-Kei and Bi, Xiuli}
}

Figure: Overview of the proposed GHCW.
Abstract: Artificial Intelligence (AI) models are valuable assets that frequently face malicious tampering attacks, and retraining a compromised model demands significant data and time. To address this issue, we propose a Guarded High-fidelity Compression-based Watermarking (GHCW) scheme that detects and subsequently recovers tampered parameters, protecting the model’s functional performance without retraining. In GHCW, the watermark consists of an authentication probe for tamper detection, recovery bits, and a ciphertext for model recovery. The authentication probe is generated by computing inherent characteristics of the parameters using hash algorithms and eigenvalue calculations. Unlike existing works, the recovery bits are produced through a high-fidelity model compression technique that concentrates key model information into fewer high-priority bits, so the recovery mechanism remains effective even under large-scale parameter tampering. To protect these bits, they are linearly encrypted with a key, producing a ciphertext that serves as a guard. To the best of our knowledge, we are among the first to propose and implement this idea in the area of AI model protection. Experimental results demonstrate that GHCW excels at recovering models from tampering, especially large-scale parameter disturbance. Compared to existing methods, GHCW is superior in both recovery quality and tolerance to tampering, withstanding tampering rates of up to 70%, whereas existing methods recover from at most 50%.
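The abstract names three watermark components: an authentication probe over the parameters, compression-derived recovery bits, and a linearly encrypted ciphertext guarding those bits. The following is a minimal Python sketch of that pipeline under loudly simplified assumptions, not the paper's actual method: plain SHA-256 block hashes stand in for the hash-plus-eigenvalue probe, top-k magnitude selection stands in for the high-fidelity compression, and a toy affine cipher illustrates the linear encryption with a key.

```python
import hashlib
import struct

def authentication_probe(params, block_size=64):
    """Tamper-detection probe: one truncated SHA-256 digest per parameter
    block. (Illustrative only; the paper also uses eigenvalue calculations.)"""
    probe = []
    for i in range(0, len(params), block_size):
        block = params[i:i + block_size]
        data = b"".join(struct.pack("<f", p) for p in block)
        probe.append(hashlib.sha256(data).hexdigest()[:8])
    return probe

def recovery_bits(params, keep_ratio=0.25):
    """Toy 'compression': keep the largest-magnitude parameters as
    (index, value) pairs, concentrating key information into fewer bits."""
    k = max(1, int(len(params) * keep_ratio))
    ranked = sorted(range(len(params)),
                    key=lambda i: abs(params[i]), reverse=True)
    return [(i, params[i]) for i in sorted(ranked[:k])]

def encrypt_linear(bits, key):
    """Affine cipher over the recovery values: c = a*v + b per element."""
    a, b = key
    return [(i, a * v + b) for i, v in bits]

def decrypt_linear(cipher, key):
    """Invert the affine cipher: v = (c - b) / a."""
    a, b = key
    return [(i, (c - b) / a) for i, c in cipher]
```

In use, a verifier would recompute the probe over a suspect model, compare it block by block against the embedded probe to locate tampering, then decrypt the ciphertext with the key and write the recovered (index, value) pairs back into the tampered blocks.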

