GE C Y,LI H,HU T L,et al. RGB image optimization under OLED screens[J]. Microelectronics & Computer,2024,41(3):12-20. doi: 10.19304/J.ISSN1000-7180.2023.0155


RGB image optimization under OLED screens

  • Abstract (Chinese): The popularity of full screens demands high-quality under-display photography from smartphone front cameras. The transparent Organic Light-Emitting Diode (OLED) panels currently used for under-display cameras exhibit light diffraction and refraction, so captured RGB images tend to suffer from blur and loss of detail. To address these problems, an RGB image optimization algorithm for under-OLED-display imaging is proposed. First, to remedy the scarcity of under-display RGB image datasets, a smartphone-based acquisition device that captures images through an OLED transparent screen is designed and implemented, and an under-display image dataset of more than 10 000 typical scene pairs is collected and curated. Second, an under-display RGB image optimization algorithm based on Generative Adversarial Nets (GAN) is proposed, in which the generator uses residual networks to learn the detail information of under-display images, and the designed perceptual loss function combines color loss, adversarial loss, and content loss. Experimental results show that, in terms of both subjective visual quality and quantitative metrics such as Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity (SSIM), the proposed algorithm outperforms current methods such as DPED on the self-built dataset.

     

    Abstract: The popularity of full screens places high-quality under-display shooting requirements on smartphone front cameras. The transparent Organic Light-Emitting Diode (OLED) panels currently used in under-display camera designs suffer from problems such as light diffraction and refraction, resulting in blurred RGB (Red Green Blue) images and loss of detail. To address these problems, this paper presents a method for restoring high-quality RGB images captured under smartphone OLED screens. First, we design a novel image acquisition device that shoots through a smartphone OLED transparent display. The collected dataset consists of more than 10 000 typical scene pairs, which is substantially useful for deep-learning-based under-display image restoration. Second, an image restoration scheme for under-display smartphone OLED cameras is presented using Generative Adversarial Nets (GAN). In particular, residual networks in the generator learn the details of the under-display image, and the perceptual loss function combines color loss, adversarial loss, and content loss. Experiments show that our method achieves better qualitative visual results and quantitative evaluation scores in terms of Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM), and Learned Perceptual Image Patch Similarity (LPIPS). Moreover, the self-built real dataset proves practical for addressing real-world image restoration problems.
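The combined perceptual loss described in the abstract (color loss + adversarial loss + content loss) could be sketched roughly as follows. This is a minimal NumPy illustration, not the paper's formulation: the weights `w_content`, `w_color`, `w_adv` are hypothetical, pixel-space MSE stands in for whatever feature-space content loss the authors use, and the color loss here simply compares box-blurred images so that only low-frequency color differences contribute.

```python
import numpy as np

def content_loss(pred, target):
    # Pixel-space MSE as a stand-in for a feature-space (e.g. VGG) content loss.
    return float(np.mean((pred - target) ** 2))

def color_loss(pred, target, k=3):
    # Compare box-blurred copies of both images: blurring removes fine texture,
    # so the remaining difference reflects low-frequency color mismatch.
    def box_blur(img):
        kernel = np.ones((k, k)) / (k * k)
        out = np.empty_like(img)
        h, w = img.shape[:2]
        for c in range(img.shape[2]):
            pad = np.pad(img[:, :, c], k // 2, mode="edge")
            for i in range(h):
                for j in range(w):
                    out[i, j, c] = np.sum(pad[i:i + k, j:j + k] * kernel)
        return out
    return float(np.mean((box_blur(pred) - box_blur(target)) ** 2))

def adversarial_loss(d_pred):
    # Generator-side GAN loss: push the discriminator output d_pred toward 1.
    return float(-np.mean(np.log(d_pred + 1e-8)))

def perceptual_loss(pred, target, d_pred,
                    w_content=1.0, w_color=0.5, w_adv=0.01):
    # Weighted sum of the three terms; the weights are illustrative only.
    return (w_content * content_loss(pred, target)
            + w_color * color_loss(pred, target)
            + w_adv * adversarial_loss(d_pred))
```

In a real training loop the three terms would be computed on tensors inside the deep-learning framework so gradients flow back into the generator; the relative weights are tuning knobs that trade off color fidelity, realism, and structural accuracy.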

     
