Demoiréing aims to remove the moiré artifacts that arise when fine repetitive patterns, such as display screens, are photographed. While recent deep learning-based methods achieve promising results, they typically require substantial computational resources, which limits their deployment on edge devices. Model quantization offers a compelling solution. However, directly applying existing quantization methods to demoiréing models causes severe performance degradation, mainly due to distribution outliers and weakened representations in smooth regions. To address these issues, we propose QuantDemoire, a post-training quantization framework tailored to demoiréing. It contains two key components. First, an outlier-aware quantizer reduces quantization errors from outliers: it uses sampling-based range estimation to limit the impact of activation outliers, and keeps a few extreme weights in FP16 at negligible cost. Second, a frequency-aware calibration strategy emphasizes low- and mid-frequency components during fine-tuning, which mitigates the banding artifacts caused by low-bit quantization. Extensive experiments show that QuantDemoire achieves large reductions in parameters and computation while maintaining quality, and it outperforms existing quantization methods by over 4 dB on W4A4.
QuantDemoire consists of two key components (both sketched in the snippets below):
- Outlier-aware quantizer: reduces quantization errors caused by outliers, using sampling-based range estimation for activations and keeping a few extreme weights in FP16.
- Frequency-aware calibration: emphasizes low- and mid-frequency components during fine-tuning, mitigating the banding artifacts caused by low-bit quantization.
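To make the two components concrete, here are minimal PyTorch sketches. They are illustrations under stated assumptions, not the released implementation: the percentile level, sample size, outlier fraction, and frequency cutoff below are hypothetical hyperparameters, and the paper's exact estimators may differ.

```python
import torch

def fake_quantize_activations(x: torch.Tensor, n_bits: int = 4,
                              sample_size: int = 4096, pct: float = 0.999) -> torch.Tensor:
    """Sampling-based range estimation for activation quantization.

    The clipping range comes from percentiles of a random subsample of the
    activation values, so a handful of extreme outliers cannot blow up the
    quantization step size. `pct` and `sample_size` are assumed values.
    """
    flat = x.detach().flatten()
    idx = torch.randint(0, flat.numel(), (min(sample_size, flat.numel()),),
                        device=flat.device)
    sample = flat[idx].float()
    lo, hi = torch.quantile(sample, 1.0 - pct), torch.quantile(sample, pct)
    scale = (hi - lo).clamp(min=1e-8) / (2 ** n_bits - 1)
    q = ((x.clamp(lo, hi) - lo) / scale).round()
    return q * scale + lo  # dequantized ("fake-quantized") activations


def quantize_weights_keep_outliers(w: torch.Tensor, n_bits: int = 4,
                                   outlier_frac: float = 1e-3):
    """Low-bit weight quantization that keeps a few extreme weights in FP16.

    The largest-magnitude `outlier_frac` of weights are zeroed before
    computing the scale and are returned separately as an FP16 correction,
    so outliers neither get crushed nor inflate the quantization scale.
    """
    k = max(1, int(outlier_frac * w.numel()))
    thresh = w.abs().flatten().topk(k).values.min()
    outlier_mask = w.abs() >= thresh

    inliers = w.masked_fill(outlier_mask, 0.0)
    qmax = 2 ** (n_bits - 1) - 1
    scale = inliers.abs().max().clamp(min=1e-8) / qmax
    w_q = (inliers / scale).round().clamp(-qmax - 1, qmax) * scale

    fp16_outliers = (w * outlier_mask).to(torch.float16)  # sparse in practice
    return w_q, fp16_outliers
```

At inference a layer would use `w_q` plus the FP16 outlier correction; since only a tiny fraction of weights stay at full precision, the overhead is negligible, matching the abstract's claim.

The frequency-aware calibration can likewise be sketched as a fine-tuning loss that up-weights low- and mid-frequency reconstruction errors, where banding artifacts from low-bit quantization concentrate. The radial cutoff and weight value are again assumptions:

```python
import torch

def frequency_weighted_loss(pred: torch.Tensor, target: torch.Tensor,
                            low_mid_weight: float = 2.0,
                            cutoff: float = 0.25) -> torch.Tensor:
    """Up-weight low/mid-frequency errors between pred and target (B, C, H, W).

    Errors at radial frequencies below `cutoff` (in cycles/pixel; Nyquist is
    0.5) are scaled by `low_mid_weight`; higher frequencies keep weight 1.
    Assumes an even image width because of rfft2's one-sided spectrum.
    """
    err = torch.fft.rfft2(pred - target, norm="ortho").abs()

    h, w_r = err.shape[-2], err.shape[-1]
    fy = torch.fft.fftfreq(h, device=err.device)               # vertical freqs
    fx = torch.fft.rfftfreq(2 * (w_r - 1), device=err.device)  # horizontal freqs
    radius = torch.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)   # radial frequency

    weight = torch.where(radius <= cutoff,
                         torch.full_like(radius, low_mid_weight),
                         torch.ones_like(radius))
    return (weight * err).mean()
```

During post-training calibration, a loss of this form (possibly combined with a plain pixel loss) would be minimized on a small calibration set to restore the quantized model's fidelity in smooth regions.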
QuantDemoire outperforms existing quantization methods across multiple metrics; see Tab. 4 of the main paper for the full quantitative comparison.
@article{chen2025quantdemoire,
  title={QuantDemoire: Quantization with Outlier Aware for Image Demoiréing},
  author={Chen, Zheng and Zhang, Kewei and Liu, Xiaoyang and Zhang, Weihang and Wang, Mengfan and Fu, Yifan and Zhang, Yulun},
  journal={arXiv preprint arXiv:2510.04066},
  year={2025}
}