Browsing by Author "Long, Chengjiang"
Now showing 1 - 2 of 2
Item: CLA-GAN: A Context and Lightness Aware Generative Adversarial Network for Shadow Removal (The Eurographics Association and John Wiley & Sons Ltd., 2020). Zhang, Ling; Long, Chengjiang; Yan, Qingan; Zhang, Xiaolong; Xiao, Chunxia. Editors: Eisemann, Elmar; Jacobson, Alec; Zhang, Fang-Lue.
In this paper, we propose a novel context and lightness aware Generative Adversarial Network (CLA-GAN) framework for shadow removal, which refines a coarse result into the final shadow removal result in a coarse-to-fine fashion. At the refinement stage, we first obtain a lightness map using an encoder-decoder structure. With the lightness map and the coarse result as inputs, a second encoder-decoder refines the final result. Specifically, unlike current methods that are restricted to pixel-based features from shadow images, we embed a context-aware module into the refinement stage, which exploits patch-based features. The embedded module transfers features from non-shadow regions to shadow regions to ensure appearance consistency in the recovered shadow-free images. Since we consider patches, the module can additionally enhance the spatial association and continuity among neighboring pixels. To make the model pay more attention to shadow regions during training, we use dynamic weights in the loss function. Moreover, we augment the inputs of the discriminator by rotating images by different degrees and use a rotation adversarial loss during training, which makes the discriminator more stable and robust. Extensive experiments demonstrate the validity of the components in our CLA-GAN framework.
Quantitative evaluation on different shadow datasets clearly shows the advantages of our CLA-GAN over state-of-the-art methods.

Item: Luminance Attentive Networks for HDR Image and Panorama Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2021). Yu, Hanning; Liu, Wentao; Long, Chengjiang; Dong, Bo; Zou, Qin; Xiao, Chunxia. Editors: Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan.
Reconstructing a high dynamic range (HDR) image from a low dynamic range (LDR) image is a challenging, ill-posed problem. This paper proposes a luminance attentive network named LANet for HDR reconstruction from a single LDR image. Our method is based on two fundamental observations: (1) HDR images stored in relative luminance are scale-invariant, which means the HDR images hold the same information when multiplied by any positive real number. Based on this observation, we propose a novel normalization method called "HDR calibration" for HDR images stored in relative luminance, calibrating HDR images into a similar luminance scale according to the LDR images. (2) The main difference between HDR images and LDR images lies in under-/over-exposed areas, especially the highlighted ones. Following this observation, we propose a luminance attention module with a two-stream structure for LANet to pay more attention to the under-/over-exposed areas. In addition, we propose an extended network called panoLANet for HDR panorama reconstruction from an LDR panorama and build a dual-net structure for panoLANet to solve the distortion problem caused by the equirectangular panorama. Extensive experiments show that our proposed LANet can reconstruct visually convincing HDR images and demonstrate its superiority over state-of-the-art approaches in terms of all metrics in inverse tone mapping. The image-based lighting application with our proposed panoLANet also demonstrates that our method can simulate natural scene lighting using only an LDR panorama.
Our source code is available at https://github.com/LWT3437/LANet.
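The CLA-GAN abstract above mentions augmenting the discriminator's inputs with rotated images. As a minimal illustration of that input-side augmentation idea (the function name, 90-degree-only angles, and NumPy formulation are assumptions for this sketch; the paper additionally uses a rotation adversarial loss, which is not shown here):

```python
import numpy as np

def rotation_augment(images, angles=(0, 90, 180, 270)):
    """Rotate each image in the batch by the given multiples of 90 degrees
    and stack the results, enlarging the batch fed to the discriminator.

    images: array of shape (N, H, W, C); only square images keep their
    shape under a 90-degree rotation, so H == W is assumed here.
    """
    augmented = []
    for angle in angles:
        k = angle // 90  # number of counterclockwise 90-degree turns
        augmented.append(np.rot90(images, k=k, axes=(1, 2)))
    # Concatenate along the batch axis: output shape (N * len(angles), H, W, C)
    return np.concatenate(augmented, axis=0)
```

Restricting the sketch to 90-degree steps keeps the rotation lossless (no interpolation or padding), which is one common way such augmentation is implemented.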
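The LANet abstract describes "HDR calibration": because relative luminance is scale-invariant, an HDR image can be rescaled by any positive factor, so one may pick the factor that anchors it to the corresponding LDR image. The sketch below illustrates the general idea only; the well-exposed mask thresholds, the gamma-based linearization, and the mean-matching rule are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def hdr_calibrate(hdr, ldr, gamma=2.2, low=0.05, high=0.95):
    """Rescale a relative-luminance HDR image onto a luminance scale
    anchored to its LDR counterpart.

    hdr, ldr: float arrays of the same shape, with ldr values in [0, 1].
    Multiplying hdr by any positive scalar preserves its information, so
    we choose the scalar that matches the LDR in well-exposed regions.
    """
    linear_ldr = ldr ** gamma                 # rough inverse camera response
    mask = (ldr > low) & (ldr < high)         # well-exposed pixels only
    # Scale so mean HDR luminance matches mean linearized LDR on the mask.
    scale = linear_ldr[mask].mean() / (hdr[mask].mean() + 1e-8)
    return hdr * scale
```

Anchoring on well-exposed pixels is the natural choice here, since under- and over-exposed regions are precisely where the LDR image has lost information.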