Expressive + WICED 2025
Browsing Expressive + WICED 2025 by Title (19 items)
ACM/EG Expressive Symposium 2025 Posters and Demos: Frontmatter (The Eurographics Association, 2025)
Berio, Daniel; Bruckert, Alexandre

ACM/EG Expressive Symposium 2025: Frontmatter (The Eurographics Association, 2025)
Catalano, Chiara Eva; Parakkat, Amal Dev

A Blender Add-on for 3D Concept Sketching (The Eurographics Association, 2025)
Wie, Jiayi; Bousseau, Adrien
We present a Blender add-on that bridges the Grease Pencil drawing interface with a symmetry-driven 3D reconstruction algorithm for concept sketches. On the one hand, Blender's Grease Pencil offers users a convenient interface for drawing 2D concept sketches with vector strokes, either freehand or using vector primitives (straight lines, Bézier curves). On the other hand, the reconstruction algorithm of Hähnlein et al. [HGSB22] can lift such 2D sketches to 3D using symmetry correspondences. While this reconstruction algorithm was originally developed to be automatic, we show how it can be adapted to allow step-by-step reconstruction of a sketch as it is drawn, and to support user corrections. Finally, we compare the reconstructions obtained with this add-on to those produced by a recent generative model trained to produce a 3D shape from a single image.

Bringing Emotions into the Picture: the CambiaColore Technology for Socio-emotional Learning in Children (The Eurographics Association, 2025)
Ceccaldi, Eleonora; Ferrando, Silvia; Corbellini, Nicola; Lepri, Giacomo; Volpe, Gualtiero
Recognizing our own and other people's emotions, and being able to express them, are key abilities that need to be nurtured and cherished. Technologies can provide teachers and educators with tools that help them engage children in didactic activities, which can foster the development of their emotional abilities.
These technologies can also help children find outlets for their emotions, for instance through drawing or expressing emotions with their bodies. This work presents CambiaColore, a novel technology for emotional expression in children. The technology is grounded in the link between drawing and emotion, which is particularly powerful in children, and aims to provide teachers with a novel, engaging tool for educational socio-emotional learning activities. Moreover, the technology aims to help children better grasp the interplay between individual and group emotions. The system was co-designed with teachers and educators, who provided valuable insight into its usability and target users. Here, we describe the theoretical premises of our system and its set-up and game-play, and present future work directions.

DepthLight: a Single Image Lighting Pipeline for Seamless Integration of Virtual Objects into Real Scenes (The Eurographics Association, 2025)
Manus, Raphael; Christie, Marc; Boivin, Samuel; Guehl, Pascal
We present DepthLight, a method to estimate spatial lighting for photorealistic visual effects (VFX) using a single image as input. Previous techniques rely either on estimated or captured light representations that fail to account for localized lighting effects, or use simplified lights that do not fully capture the complexity of the illumination process. DepthLight addresses these limitations by using a single LDR image with a limited field of view (LFOV) as input to compute an emissive texture mesh around the image (a mesh which generates spatial lighting in the scene), producing a simple and lightweight 3D representation for photorealistic object relighting. First, an LDR panorama is generated around the input image using a photorealistic diffusion-based inpainting technique, conditioned on the input image.
An LDR-to-HDR network then reconstructs the full HDR panorama, while an off-the-shelf depth estimation technique generates a mesh representation, finally yielding a 3D emissive mesh. This emissive mesh approximates the bidirectional light interactions between the scene and the virtual objects, and is used to relight virtual objects placed in the scene. We also exploit this mesh to cast shadows from the virtual objects onto the emissive mesh, and add these shadows to the original LDR image. This flexible pipeline can be easily integrated into different VFX production workflows. In our experiments, DepthLight shows that virtual objects are seamlessly integrated into real scenes with a visually plausible estimation of the lighting. We compared our results to ground-truth lighting using Unreal Engine, as well as to state-of-the-art approaches that use pure HDRi lighting techniques (see Figure 1). Finally, we validated our approach by conducting a user evaluation with 52 participants, as well as a comparison to existing techniques.

Evaluating Temporal Coherence using a Watercolor Renderer (The Eurographics Association, 2025)
Morgan, Ingrid Ellen Carr; Billeter, Markus; Anjyo, Ken; Anjos, Rafael Kuffner dos
Temporal coherence is a long-standing issue within Non-Photorealistic Rendering (NPR). The problem has been defined as a trade-off between three main factors: flatness, temporal continuity, and motion coherence. Approaches that improve temporal coherence are applied across different styles within diverse animation contexts. We have implemented a watercolor renderer that supports multiple temporal coherence approaches in a unified system to investigate this trade-off.
The approaches are then evaluated against existing work, with consideration of how external factors, including animations and textures, may influence perceived incoherence.

Expressive Rendering for 2D Animations of Liquids (The Eurographics Association, 2025)
Regla, Rodrigo Stevenson; Rohmer, Damien; Barthe, Loïc; Cani, Marie-Paule
We describe a new rendering technique for expressive 2D animation of liquid surfaces that can be used on top of existing particle-based simulations. We introduce a hybrid particle model that carries both water and air density distributions and can evolve through particle history. These material quantities, combined with kinematics information, are then used to generate a scalar field, which can be parameterized to create an implicit iso-surface capturing stylized geometry commonly seen in paintings and cartoons. We propose, in particular, to represent behavior highlighting the dynamical aspect of the scene, such as elongated droplet behavior and the curl-like shapes found in breaking waves.

LABOR: Production of a Large-scale Painting with a Robot (The Eurographics Association, 2025)
Grayver, Liat; Berio, Daniel; Herrmann, Inge; Notz, Adrian
Labor is a live human/robot painting installation that combines generative graphics techniques, robotic automation, and traditional painting methods. It explores the role of embodied intelligence in artistic production. The resulting composition is a large-scale painting consisting of multiple individually painted tiles. The painting is based on an electron microscope image of a placenta, which is algorithmically processed into a series of parametric brushstrokes using a differentiable vector graphics pipeline. These strokes are then collaboratively painted using a 7-axis robotic arm equipped with custom paintbrushes.
The project engages with the dual meaning of "labor": industrial production and childbirth, highlighting the often-overlooked importance of bodily knowledge in the arts, and particularly in medical and technological contexts. The installation explores the balance between human intuition and algorithmic automation, emphasizing the importance of material constraints and the role of human artists in the creation of a large-scale generative painting.

Modeling Crochet Patterns with a Force-directed Graph Layout (The Eurographics Association, 2025)
Greer, Émile; Mould, David
Designing crochet patterns is a difficult, time-consuming task. Typically, an initial pattern is created and crocheted; after seeing how the object comes out, the pattern is modified and some number of stitches are undone and remade, over some number of iterations. This process involves a lot of guesswork and the manual labor of physically crocheting. In this paper, we present a way of creating a 3D representation of a crochet pattern using a written pattern as input: we translate the written pattern into a graph and compute a force-directed layout of that graph. The result is a 3D model that matches the hand-crocheted pattern in shape and size, with the advantage that the designer does not need to physically crochet the pattern and can make adjustments based on the digital model. Our intended audience includes both professional designers and beginners, helping designers visualize a crochet pattern before investing the time and effort to physically make it.
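The pattern-to-graph idea above can be illustrated with a minimal force-directed relaxation in 3D. This is a generic sketch under assumed spring and repulsion forces, not the authors' implementation; the toy two-round "pattern", the function name, and all force constants are illustrative choices.

```python
import numpy as np

def force_directed_layout_3d(n_nodes, edges, rest_len=1.0, iters=500,
                             k_spring=0.1, k_repel=0.05, seed=0):
    """Relax a stitch graph in 3D: springs pull connected stitches toward
    rest_len apart, while a weak inverse-square repulsion keeps the
    layout inflated so it does not collapse onto itself."""
    rng = np.random.default_rng(seed)
    pos = rng.normal(scale=0.5, size=(n_nodes, 3))
    edges = np.asarray(edges)
    for _ in range(iters):
        force = np.zeros_like(pos)
        # Hooke's-law springs along graph edges (stitch connections).
        d = pos[edges[:, 1]] - pos[edges[:, 0]]
        dist = np.linalg.norm(d, axis=1, keepdims=True) + 1e-9
        f = k_spring * (dist - rest_len) * d / dist
        np.add.at(force, edges[:, 0], f)
        np.add.at(force, edges[:, 1], -f)
        # All-pairs repulsion (the i == j term vanishes since diff is zero).
        diff = pos[:, None, :] - pos[None, :, :]
        r = np.linalg.norm(diff, axis=2, keepdims=True) + 1e-9
        force += (k_repel * diff / r**3).sum(axis=1)
        # Clamp the step per component for numerical stability.
        pos += np.clip(force, -0.2, 0.2)
    return pos

# Toy "written pattern": two rounds of 6 stitches, like the start of a tube.
edges = [(i, (i + 1) % 6) for i in range(6)]            # round 1 ring
edges += [(6 + i, 6 + (i + 1) % 6) for i in range(6)]   # round 2 ring
edges += [(i, 6 + i) for i in range(6)]                 # joins between rounds
pos = force_directed_layout_3d(12, edges)
```

After relaxation, `pos` holds 3D coordinates whose edge lengths hover near the stitch rest length, giving a rough digital preview of the crocheted shape.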
While our application is oriented towards amigurumi, it could be extended to work with clothing or other similar styles of crochet.

MVAE: Motion-conditioned Variational Auto-Encoder for tailoring character animations (The Eurographics Association, 2025)
Bordier, Jean-Baptiste; Christie, Marc
The design of character animations with enough diversity is a time-consuming task in many productions, such as video games or animated films, and drives the need for simpler and more effective authoring systems. This paper introduces a novel approach: a motion-conditioned variational auto-encoder (VAE) with virtual reality as a motion capture device. Our model generates diverse humanoid character animations based only on a gesture captured from two virtual reality controllers, allowing for precise control of motion characteristics such as rhythm, speed, and amplitude, and providing variability through noise sampling. From a dataset comprising paired controller-character motions, we design and train our VAE to (i) identify global motion characteristics of the movement, in order to discern the type of animation desired by the user, and (ii) identify local motion characteristics, including rhythm, velocity, and amplitude, to adapt the animation to these characteristics. Unlike many text-to-motion approaches, our method faces the challenge of interpreting high-dimensional, non-discrete user inputs. Our model maps these inputs into the higher-dimensional space of character animation while leveraging motion characteristics (such as height, speed, walking step frequency, and amplitude) to fine-tune the generated motion.
We demonstrate the relevance of the approach on a number of examples and illustrate how changes in the rhythm and amplitude of the input motions are transferred to coherent changes in the animated character, while offering a diversity of results using different noise samples.

Perception of Drawing Reference Quality among Professional Hand-drawn Animators (The Eurographics Association, 2025)
Schwartz, Rachael; Mullery, Mark; Dingliana, John; McDonnell, Rachel
We present a preliminary experiment investigating professional hand-drawn animators' perception of how good one frame is as drawing reference for another. Ten professional hand-drawn animators rated the drawing reference quality of 54 hand-drawn frame pairs, each differing by character pose and region rotation, reflection, and distortion transformations. Our results indicate that animators perceive frames differing by rotation/reflection as better drawing reference than frames differing by distortion.

PerceptualLift: Using hatches to infer a 3D organic shape from a sketch (The Eurographics Association, 2025)
Butler, Tara; Guehl, Pascal; Parakkat, Amal Dev; Cani, Marie-Paule
In this work, we investigate whether artistic hatching, popular in pen-and-ink sketches, can be consistently perceived as a depth cue. We illustrate our results by presenting PerceptualLift, a modeling system that exploits hatching to create curved 3D shapes from a single sketch. We first describe a perceptual user study, conducted across a diverse group of participants, which confirms the relevance of hatches as consistent cues for inferring curvature in the depth direction from a sketch. It enables us to extract geometric rules that link 2D hatch characteristics, such as their direction, frequency, and magnitude, to changes of depth in the depicted 3D shape.
Built on these rules, we introduce PerceptualLift, a flexible tool to model 3D organic shapes by simply hatching over 2D hand-drawn contour sketches.

Revisiting Analog Stereoscopic Film (The Eurographics Association, 2025)
Freude, Christian; Jauernik, Christina; Lurf, Johann; Suppin, Rüdiger; Wimmer, Michael
We present approaches for the simulation of an analog autostereoscopic (glasses-free) display and the visualization of analog color film at micro scales. These techniques were developed during an artistic research project and the creation of an accompanying art installation, which exhibits an analog stereo short film projected on a re-creation of a cyclostéréoscope, a historic device developed around 1952. We describe how computer graphics helped us understand the cyclostéréoscope, supported its physical re-creation, and enabled the visualization of the projection and material structure of analog film using physically based Monte Carlo light simulation.

Robotic Painting using Semantic Image Abstraction (The Eurographics Association, 2025)
Stroh, Michael; Paetzold, Patrick; Berio, Daniel; Leymarie, Frederic Fol; Kehlbeck, Rebecca; Deussen, Oliver
We present a novel image segmentation and abstraction pipeline tailored to robot painting applications. We address the unique challenges of realizing digital abstractions as physical artistic renderings. Our approach generates adaptive, semantics-based abstractions that balance aesthetic appeal, structural coherence, and the practical constraints inherent to robotic systems. By integrating panoptic segmentation with color-based over-segmentation, we partition images into meaningful regions corresponding to semantic objects, while providing customizable abstraction levels that we optimize for robotic realization.
We employ saliency maps and color-difference metrics to support automatic parameter selection, guiding a merging process that detects and preserves critical object boundaries while simplifying less salient areas. Graph-based community detection further refines the abstraction by grouping regions based on local connectivity and semantic coherence. These abstractions enable robotic systems to create paintings on real canvases with a controlled level of detail and abstraction.

Sensing the Invisible: Breathing Life into the Universe (The Eurographics Association, 2025)
Cheng, Qifeng; Seaman, William
In an era where machines seemingly "think" like us, what truly sets us apart? Sensing the Invisible - Breathing Life into the Universe is an interactive installation that explores the dialogue between human sensory experiences, exemplified by the subtle act of breathing, and the responses of machine sensors. By employing a microphone, a humidity sensor, and an air pressure sensor, this artwork translates the invisible act of breathing into dynamic visual expressions that evoke celestial concepts, reminiscent of how Earth's most powerful sensors, telescopes, capture the cosmos. It seeks to understand how machine sensors communicate and interact with us across different frequencies and scales, extending beyond the limits of our own sensory systems. This paper describes the conceptual framework, design, technical implementation, and audience interactions that underscore a new language of human-machine communication.

Sketching Interactive Experiences: Can Co-creation with Artificial Generative Systems Enhance the Communication of Cultural Heritage? (The Eurographics Association, 2025)
Veggi, Manuele; Catalano, Chiara Eva; Pescarin, Sofia
Generative AI has opened up new and largely unexplored opportunities for addressing the challenges of communicating cultural heritage through interactive experiences.
This study explores the potential of text-to-image models to support the early stages of interactive media design for cultural heritage applications. We conducted a qualitative survey using four generative AI services (Stable Diffusion, Adobe Firefly, MidJourney, and DALL-E) to address a real design challenge in the cultural domain. While the study does not cover the full scope of traditional interactive media design workflows or provide a comprehensive performance evaluation, it highlights key benefits of generative AI in early design phases. The survey reveals that these systems boost creativity by introducing unexpected elements, help consolidate initial ideas and communicate them effectively to colleagues or stakeholders, and even help less experienced designers understand design requirements.

Towards Automated 2D Character Animation (The Eurographics Association, 2025)
Mailee, Hamila; Anjos, Rafael Kuffner dos
Automating facial expression changes in comics and 2D animation presents several challenges, as facial structures can vary widely and audiences are sensitive to even the subtlest changes. Building on extensive research in human face image manipulation, landmark-guided image editing offers a promising solution, providing precise control and yielding satisfactory results. This study addresses the challenges hindering the advancement of landmark-based methods for cartoon characters and proposes the use of object detection models (specifically YOLOX and Faster R-CNN) to detect initial facial regions. These detections serve as a foundation for expanding landmark annotations, enabling more effective expression manipulation to animate expressive characters.
The code and trained models are publicly available.

View-Dependent Deformation Fields for 2D Editing of 3D Models (The Eurographics Association, 2025)
Mqirmi, Martin El; Aigerman, Noam
We propose a method for authoring non-realistic 3D objects (represented as either 3D Gaussian splats or meshes) that comply with 2D edits from specific viewpoints. Namely, given a 3D object, a user chooses different viewpoints and interactively deforms the object in the 2D image plane of each view. The method then produces a "deformation field": an interpolation between those 2D deformations that varies smoothly as the viewpoint changes. Our core observation is that the 2D deformations need not be tied to an underlying object, nor share the same deformation space. We use this observation to devise a method for authoring view-dependent deformations, holding several technical contributions: first, a novel way to compositionally blend between the 2D deformations after lifting them to 3D, which enables the user to "stack" deformations similarly to layers in editing software, each deformation operating on the results of the previous; second, a novel method to apply the 3D deformation to 3D Gaussian splats; third, an approach to author the 2D deformations by deforming a 2D mesh encapsulating a rendered image of the object. We show the versatility and efficacy of our method by adding cartoonish effects to objects, providing means to modify human characters, fitting 3D models to given 2D sketches and caricatures, resolving occlusions, and recreating classic non-realistic paintings as 3D models.

Where Are We Now? (The Eurographics Association, 2025)
Williams, Peter J.; Wong, Sala
Where Are We Now? is an interactive physical computing artwork that contrasts moments throughout a recent period of intense changes in Hong Kong.
Recalling the use of weaving in early computing and image creation, it is a mechanical display made up of urban landscape images on looped, woven ribbons that expressively rub up against one another. Pulling, snagging, and fraying, their tension brings out complications in daily life amid societal shifts. This difficult movement expresses the complex and conflicted nature of Hong Kong's recent history, specifically between 2018 and 2024, the years when the collaged 360-degree panoramic urban landscape photographs printed onto the ribbons were made.