40-Issue 7
Browsing 40-Issue 7 by Subject "Image manipulation"
Now showing 1 - 2 of 2
Item
Line Art Colorization Based on Explicit Region Segmentation (The Eurographics Association and John Wiley & Sons Ltd., 2021) Cao, Ruizhi; Mo, Haoran; Gao, Chengying; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Automatic line art colorization plays an important role in the anime and comic industries. While existing methods for line art colorization can generate plausible colorized results, they tend to suffer from color bleeding. We introduce an explicit segmentation fusion mechanism that helps colorization frameworks avoid color bleeding artifacts. This mechanism explicitly provides region segmentation information to the colorization process, so the colorization model can learn to avoid assigning the same color across regions with different semantics, or inconsistent colors within an individual region. The proposed mechanism is designed in a plug-and-play manner, so it can be applied to a variety of line art colorization frameworks with various kinds of user guidance. We evaluate this mechanism in tag-based and reference-based line art colorization tasks by incorporating it into state-of-the-art models. Comparisons with these existing models corroborate the effectiveness of our method, which largely alleviates color bleeding artifacts. The code is available at https://github.com/Ricardo-L-C/ColorizationWithRegion.

Item
Relighting Humans in the Wild: Monocular Full-Body Human Relighting with Domain Adaptation (The Eurographics Association and John Wiley & Sons Ltd., 2021) Tajima, Daichi; Kanamori, Yoshihiro; Endo, Yuki; Zhang, Fang-Lue and Eisemann, Elmar and Singh, Karan
Modern supervised approaches to human image relighting rely on training data generated from 3D human models. However, such datasets are often small (e.g., Light Stage data covering a small number of individuals) or limited to diffuse materials (e.g., commercial 3D-scanned human models).
Thus, human relighting techniques suffer from poor generalization and a synthetic-to-real domain gap. In this paper, we propose a two-stage method for single-image human relighting with domain adaptation. In the first stage, we train a neural network for diffuse-only relighting. In the second stage, we train another network that enhances non-diffuse reflection by learning residuals between real photos and images reconstructed by the diffuse-only network. Thanks to the second stage, we achieve better generalization to varied cloth textures while reducing the domain gap. Furthermore, to handle input videos, we integrate an illumination-aware deep video prior that greatly reduces flickering artifacts, even in challenging settings under dynamic illumination.
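The two-stage residual scheme described in the second abstract can be sketched in PyTorch roughly as follows. This is a minimal illustration only: the module names, the tiny convolutional architectures, and the 9-dimensional spherical-harmonics lighting code are assumptions for the sketch, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class DiffuseRelightNet(nn.Module):
    """Stage 1 (hypothetical): predicts a diffuse-only relit image from an
    input photo and a target illumination code."""
    def __init__(self, light_dim=9):  # 9 = 2nd-order SH coefficients (assumed)
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + light_dim, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, img, light):
        # Broadcast the illumination code over the spatial dimensions.
        b, _, h, w = img.shape
        light_map = light.view(b, -1, 1, 1).expand(b, light.shape[1], h, w)
        return self.net(torch.cat([img, light_map], dim=1))

class ResidualNet(nn.Module):
    """Stage 2 (hypothetical): predicts a non-diffuse residual on top of the
    diffuse reconstruction, supervised against real photos."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

# One stage-2 training step: learn the residual between the real photo and
# the frozen stage-1 diffuse reconstruction.
diffuse_net = DiffuseRelightNet()
residual_net = ResidualNet()
img = torch.randn(1, 3, 32, 32)   # placeholder real photo
light = torch.randn(1, 9)         # placeholder illumination code
with torch.no_grad():             # stage 1 is not updated in stage 2
    diffuse = diffuse_net(img, light)
residual = residual_net(torch.cat([img, diffuse], dim=1))
relit = diffuse + residual        # final result with non-diffuse effects
loss = nn.functional.l1_loss(relit, img)  # supervise against the real photo
loss.backward()
```

The key design point the abstract highlights is that stage 2 learns only a residual, so it can be trained on real photographs (where full ground-truth reflectance is unavailable), which is what narrows the synthetic-to-real gap.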