Modern cameras produce remarkably high-quality images, yet their low-light performance remains limited by photon shot noise and sensor read noise. While generative image restoration methods have shown promising results compared to traditional approaches, they often hallucinate content when the signal-to-noise ratio (SNR) is low. Leveraging the personalized photo galleries available on users' smartphones, we introduce Diffusion-based Personalized Generative Denoising (DiffPGD), a novel approach that builds a customized diffusion model for each user. Our key innovation is an identity-consistent physical buffer that encodes the person's physical attributes extracted from the gallery. This ID-consistent physical buffer serves as a robust prior that can be seamlessly integrated into the diffusion model to restore degraded images without fine-tuning. Across a wide range of low-light testing scenarios, we show that DiffPGD achieves superior denoising and enhancement performance compared to existing diffusion-based approaches.
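The abstract describes conditioning a pretrained diffusion model on an identity prior without fine-tuning its weights. The paper's actual conditioning mechanism is not specified here, so the following is only a hypothetical, toy illustration: a single DDPM-style reverse step in which the denoiser receives the ID buffer as an extra input alongside the degraded pixels. The noise schedule, the `predict_noise` interface, and the 1-D pixel representation are all simplifying assumptions, not the paper's implementation.

```python
import math
import random

def denoise_step(noisy, id_buffer, t, predict_noise):
    """One toy reverse-diffusion step conditioned on an identity buffer.

    noisy:         list of degraded pixel values at timestep t
    id_buffer:     list of prior values; passed to the denoiser as an extra
                   input, so the pretrained weights need no fine-tuning
    predict_noise: callable standing in for a pretrained diffusion denoiser
    """
    alpha = 1.0 - 0.02 * t  # toy linear noise schedule (assumption)
    eps = predict_noise(noisy, id_buffer, t)
    # standard DDPM-style estimate of the clean signal from predicted noise
    return [(x - math.sqrt(1.0 - alpha) * e) / math.sqrt(alpha)
            for x, e in zip(noisy, eps)]

# Build a synthetic low-SNR observation from a known clean signal.
random.seed(0)
clean = [random.random() for _ in range(16)]
eps_true = [random.gauss(0.0, 1.0) for _ in range(16)]
t = 10
alpha = 1.0 - 0.02 * t
noisy = [math.sqrt(alpha) * c + math.sqrt(1.0 - alpha) * e
         for c, e in zip(clean, eps_true)]

# An oracle predictor stands in for the network in this demo.
restored = denoise_step(noisy, [0.0] * 16, t, lambda x, b, t: eps_true)
assert all(abs(r - c) < 1e-9 for r, c in zip(restored, clean))
```

With an oracle noise predictor the step recovers the clean signal exactly; in practice the buffer's role would be to steer an imperfect predictor toward the person's true appearance at low SNR.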
@article{wang2024genrestore,
  title={Personalized Generative Low-light Image Denoising and Enhancement},
  author={Wang, Xijun and Chennuri, Prateek and Yuan, Yu and Ma, Bole and Zhang, Xingguang and Chan, Stanley},
  journal={arXiv preprint arXiv},
  year={2024}
}