Browsing by Author "Gori, Pietro"

Now showing 1 - 1 of 1
    Real-time Virtual-Try-On from a Single Example Image through Deep Inverse Graphics and Learned Differentiable Renderers
(The Eurographics Association and John Wiley & Sons Ltd., 2022) Kips, Robin; Jiang, Ruowei; Ba, Sileye; Duke, Brendan; Perrot, Matthieu; Gori, Pietro; Bloch, Isabelle; Chaine, Raphaëlle; Kim, Min H.
Augmented reality applications have rapidly spread across online retail platforms and social media, allowing consumers to virtually try on a large variety of products, such as makeup, hair dyeing, or shoes. However, parametrizing a renderer to synthesize realistic images of a given product remains a challenging task that requires expert knowledge. While recent work has introduced neural rendering methods for virtual try-on from example images, current approaches are based on large generative models that cannot be used in real-time on mobile devices. This calls for a hybrid method that combines the advantages of computer graphics and neural rendering approaches. In this paper, we propose a novel framework based on deep learning to build a real-time inverse graphics encoder that learns to map a single example image into the parameter space of a given augmented reality rendering engine. Our method leverages self-supervised learning and does not require labeled training data, which makes it extendable to many virtual try-on applications. Furthermore, most augmented reality renderers are not differentiable in practice due to algorithmic choices or implementation constraints to reach real-time performance on portable devices. To relax the need for a graphics-based differentiable renderer in inverse graphics problems, we introduce a trainable imitator module. Our imitator is a generative network that learns to accurately reproduce the behavior of a given non-differentiable renderer. We propose a novel rendering sensitivity loss to train the imitator, which ensures that the network learns an accurate and continuous representation for each rendering parameter. Automatically learning a differentiable renderer, as proposed here, could be beneficial for various inverse graphics tasks. Our framework enables novel applications where consumers can virtually try on an unknown product from an inspirational reference image on social media. It can also be used by computer graphics artists to automatically create realistic renderings from a reference product image.
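    The abstract describes a pipeline with two learned components: an inverse graphics encoder that maps an example image to renderer parameters, and a differentiable imitator that stands in for the non-differentiable AR renderer. The sketch below illustrates that general shape only; it is not the authors' implementation. The parameter count, image size, network layers, and the combined loss are assumptions, and the paper's rendering sensitivity loss and separate imitator pre-training are not reproduced here.

```python
import torch
import torch.nn as nn

# Assumed, illustrative constants (not from the paper).
N_PARAMS = 8          # e.g. colour/gloss/opacity of a makeup shader
IMG_SIZE = 128        # rendered image resolution

class InverseGraphicsEncoder(nn.Module):
    """Maps a single example image to renderer parameters in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, N_PARAMS), nn.Sigmoid(),
        )

    def forward(self, img):
        return self.net(img)

class Imitator(nn.Module):
    """Differentiable stand-in for the black-box renderer:
    source image + rendering parameters -> rendered image."""
    def __init__(self):
        super().__init__()
        self.param_proj = nn.Linear(N_PARAMS, IMG_SIZE * IMG_SIZE)
        self.net = nn.Sequential(
            nn.Conv2d(4, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, 1, 1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Sigmoid(),
        )

    def forward(self, src, params):
        b = src.shape[0]
        p_map = self.param_proj(params).view(b, 1, IMG_SIZE, IMG_SIZE)
        return self.net(torch.cat([src, p_map], dim=1))

def train_step(encoder, imitator, renderer, src, opt):
    """One self-supervised step: sample random parameters, render an
    'example' image with the real (non-differentiable) renderer, then
    regress the parameters back and re-render through the imitator."""
    params = torch.rand(src.shape[0], N_PARAMS)      # random graphics parameters
    with torch.no_grad():
        example = renderer(src, params)              # black-box renderer call
    pred = encoder(example)                          # predicted parameters
    recon = imitator(src, pred)                      # differentiable re-render
    loss = (nn.functional.l1_loss(pred, params)
            + nn.functional.l1_loss(recon, example))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

    Because no labeled product data is needed, training only requires the black-box renderer and a set of source face images, which is what makes the approach extendable to other try-on applications.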
