EG2024
Browsing EG2024 by Issue Date
Now showing 1 - 20 of 59
Item SPnet: Estimating Garment Sewing Patterns from a Single Image of a Posed User (The Eurographics Association, 2024)
Lim, Seungchan; Kim, Sumin; Lee, Sung-Hee; Hu, Ruizhen; Charalambous, Panayiotis
This paper presents a novel method for reconstructing 3D garment models from a single image of a posed user. Previous studies have primarily focused on reconstructing garment geometry that accurately matches the input image, which often results in unnatural-looking garments when the geometry is deformed to new poses. To overcome this limitation, our work takes a different approach: instead of directly reconstructing 3D garments, it infers the fundamental shape of the garment through sewing patterns estimated from a single image. Our method consists of two stages. First, given a single image of a posed user, it predicts an image of the garment worn in a T-pose, representing the baseline form of the garment. It then estimates the sewing pattern parameters from the T-pose garment image. By stitching and draping the sewing pattern in a physics simulation, we can generate 3D garments that adaptively deform to arbitrary poses. The effectiveness of our method is validated through ablation studies on the major components and a comparison with other methods.

Item Skeleton-Aware Skin Weight Transfer for Helper Joint Rigs (The Eurographics Association, 2024)
Cao, Ziyuan; Mukai, Tomohiko; Hu, Ruizhen; Charalambous, Panayiotis
We propose a method to transfer skin weights and helper joints from a reference model to other targets. Our approach uses two types of spatial proximity to find the correspondence between target vertices and reference mesh regions. The proposed method first generates a guide weight map that relates the skin vertices to the skeletal joints using a standard skinning technique. The correspondence between the reference and target skins is then established using vertex-to-bone projection and bone-to-skin ray-casting guided by these weights. This method enables fully automated and smooth transfer of skin weights between human-like characters bound to helper joint rigs.

Item Recent Trends in 3D Reconstruction of General Non-Rigid Scenes (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Yunus, Raza; Lenssen, Jan Eric; Niemeyer, Michael; Liao, Yiyi; Rupprecht, Christian; Theobalt, Christian; Pons-Moll, Gerard; Huang, Jia-Bin; Golyanik, Vladislav; Ilg, Eddy; Aristidou, Andreas; Macdonnell, Rachel
Reconstructing models of the real world, including the 3D geometry, appearance, and motion of real scenes, is essential for computer graphics and computer vision. It enables the synthesis of photorealistic novel views, useful for the movie industry and AR/VR applications, and facilitates the content creation needed in computer games and AR/VR by avoiding laborious manual design processes. Further, such models are fundamental for intelligent computing systems that need to interpret real-world scenes and actions in order to act and interact safely with the human world. Notably, the world surrounding us is dynamic, and reconstructing models of dynamic, non-rigidly moving scenes is a severely underconstrained and challenging problem. This state-of-the-art report (STAR) offers the reader a comprehensive summary of state-of-the-art techniques with monocular and multi-view inputs, such as data from RGB and RGB-D sensors, among others, conveying an understanding of the different approaches, their potential applications, and promising further research directions. The report covers 3D reconstruction of general non-rigid scenes and further addresses techniques for scene decomposition, editing and control, and generalizable and generative modeling. More specifically, we first review the common and fundamental concepts necessary to understand and navigate the field, and then discuss the state of the art by reviewing recent approaches that use traditional and machine-learning-based neural representations, including a discussion of the newly enabled applications. The STAR concludes with a discussion of the remaining limitations and open challenges.

Item Cues to fast-forward collaboration: A Survey of Workspace Awareness and Visual Cues in XR Collaborative Systems (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Assaf, Rodrigo; Mendes, Daniel; Rodrigues, Rui; Aristidou, Andreas; Macdonnell, Rachel
Collaboration in extended reality (XR) environments presents complex challenges that revolve around how users perceive the presence, intentions, and actions of their collaborators. This paper delves into the realm of group awareness, focusing specifically on workspace awareness and the visual cues designed to enhance user comprehension. The research begins by identifying a spectrum of collaborative situations drawn from an analysis of XR prototypes in the existing literature. We then introduce a novel classification for workspace awareness, along with an exploration of visual cues recently employed in research. Lastly, we present the key findings and highlight promising yet unexplored topics. This work serves as a reference for experienced researchers seeking to inform the design of their own collaborative XR applications, and as a welcoming entry point for newcomers to this dynamic field.

Item Accurate Boundary Condition for Moving Least Squares Material Point Method using Augmented Grid Points (The Eurographics Association, 2024)
Toyota, Riku; Umetani, Nobuyuki; Hu, Ruizhen; Charalambous, Panayiotis
This paper introduces an accurate boundary-handling method for the moving least squares (MLS) material point method (MPM), a popular scheme for robustly simulating deformable objects and fluids using a hybrid of particle and grid representations coupled via MLS interpolation. Despite its versatility with different materials, traditional MPM suffers from undesirable artifacts around wall boundaries; for example, particles pass through the walls and accumulate there. To address these issues, we present a technique inspired by a line handler for MLS-based image manipulation. Specifically, we augment the grid by adding points along the wall boundary to numerically compute the integration of the MLS weight. These additional points act as background grid points, improving the accuracy of the MLS interpolation around the boundary at a marginal increase in computational cost. In particular, our technique makes the velocity component perpendicular to the wall nearly zero, preventing particles from passing through it. We compare the boundary behavior of 2D simulations against that of the naïve approach.

Item A Fresnel Model for Coated Materials (The Eurographics Association, 2024)
Vernooij, Hannes B.; Hu, Ruizhen; Charalambous, Panayiotis
We propose a novel analytical RGB model for rendering coated conductors, which improves the accuracy of Fresnel reflectance in BRDFs. Our model targets real-time path tracing and approximates the Fresnel reflectance curves noticeably more accurately than Schlick's approximation by using Lazanyi's error compensation term and an external-media adjustment. We propose an analytical function with coefficients fitted to measured spectral datasets describing the complex index of refraction of conductors. We use second-order polynomials to fit the model and subsequently compress the fitted coefficients to reduce memory requirements while maintaining quality. Both quantitative and visual results affirm the efficacy of our model in representing the Fresnel reflectance of the tested conductors.
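The entry above builds on Schlick's approximation extended with Lazanyi's error compensation term. The sketch below does not reproduce the paper's fitted RGB model (its polynomial coefficient fits and external-media adjustment are omitted); it only illustrates, under conventions common in the literature, how a Schlick curve can be corrected with a Lazanyi-style term whose strength is fitted to a single reference reflectance value. All function names and numeric values are illustrative assumptions, not the paper's.

```python
import numpy as np

def schlick(f0, cos_theta):
    """Classic Schlick approximation of Fresnel reflectance."""
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

def schlick_lazanyi(f0, cos_theta, a, alpha=6.0):
    """Schlick plus a Lazanyi-style error compensation term.

    'a' scales the correction; 'alpha' controls where it peaks
    (alpha = 6 is a common choice in the literature).
    """
    return schlick(f0, cos_theta) - a * cos_theta * (1.0 - cos_theta) ** alpha

def fit_a(f0, f_ref, cos_ref, alpha=6.0):
    """Choose 'a' so the corrected curve reproduces one reference
    reflectance f_ref at the angle whose cosine is cos_ref."""
    return (schlick(f0, cos_ref) - f_ref) / (cos_ref * (1.0 - cos_ref) ** alpha)

# Illustrative RGB values only; not taken from the paper's datasets.
f0 = np.array([0.95, 0.64, 0.54])        # assumed normal-incidence reflectance
f_ref = np.array([0.99, 0.97, 0.96])     # assumed reference value near grazing
cos_ref = np.cos(np.radians(82.0))
a = fit_a(f0, f_ref, cos_ref)
print(schlick_lazanyi(f0, np.cos(np.radians(30.0)), a))
```

By construction, the corrected curve passes exactly through the reference value at the fitting angle, which is the role the compensation term plays for conductors whose reflectance Schlick's formula alone matches poorly.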
Item A Visual Profiling System for Direct Volume Rendering (The Eurographics Association, 2024)
Buelow, Max von; Ströter, Daniel; Rak, Arne; Fellner, Dieter W.; Hu, Ruizhen; Charalambous, Panayiotis
Direct Volume Rendering (DVR) is a crucial technique that enables interactive exploration of results from scientific computing or computer graphics. Its applications range from virtual prototyping for product design to computer-aided diagnosis in medicine. Although many DVR optimizations exist, they do not provide a thorough analysis of memory-specific hardware behavior. This paper introduces a profiling toolkit that extracts performance metrics, such as cache hit rates and branching, from a compiled GPU-based DVR application. The metrics are visualized in the image domain to facilitate spatial visual analysis. The paper presents a pipeline that automatically extracts memory traces using binary instrumentation, simulates the GPU memory subsystem, and models DVR-specific functionality within it. The profiler is demonstrated on the Octree-Linear Bounding Volume Hierarchy (OLBVH), and the visualized profiling metrics are explained based on the OLBVH implementation. Our discussion shows that optimizing ray traversal for adaptive sampling, cache usage, branching, and global memory access has the potential to improve performance.

Item An Overview of Teaching a Virtual and Augmented Reality Course at Postgraduate Level for Ten Years (The Eurographics Association, 2024)
Marques, Bernardo; Santos, Beatriz Sousa; Dias, Paulo; Sousa Santos, Beatriz; Anderson, Eike
In recent years, a multitude of affordable sensors, interaction devices, and displays have entered the market, facilitating the adoption of Virtual and Augmented Reality (VR/AR) in various areas of application. However, the development of such applications demands a solid grasp of the field and specific technical proficiency often missing from existing Computer Science and Engineering education programs. This work describes a postgraduate-level course taught for the last ten years to several Master's degree programs, which introduces students to the fundamental principles, methods, and tools of VR/AR. The course's main objective is to equip students with the knowledge necessary to comprehend, create, implement, and assess applications using these technologies. This paper provides insights into the course structure, the key topics covered, the assessment, and the devices and infrastructure used. It also includes a brief overview of sample practical projects over the years. Among other reflections, we argue that teaching this course is challenging because the fast evolution of the field makes constant updating paramount. This may be alleviated by motivating students toward a research-oriented approach, encouraging them to bring their own projects and challenges (e.g., related to their Master's dissertations). Finally, future perspectives are outlined.

Item 3D Reconstruction from Sketch with Hidden Lines by Two-Branch Diffusion Model (The Eurographics Association, 2024)
Fukushima, Yuta; Qi, Anran; Shen, I-Chao; Gryaditskaya, Yulia; Igarashi, Takeo; Hu, Ruizhen; Charalambous, Panayiotis
We present a method for sketch-based modelling of 3D man-made shapes that exploits not only the commonly considered visible surface lines but also the hidden lines typical of technical drawings. Hidden lines are used by artists and designers to communicate holistic shape structure. Given a single-viewpoint sketch, leveraging such lines allows us to resolve the ambiguity of the shape's surfaces hidden from the observer. We assume that the separation into visible and hidden lines is given and focus solely on how to leverage this information. Our strategy is to combine two distinct diffusion networks: one generates denoised occupancy grid estimates from a visible-line image, whilst the other generates occupancy grid estimates based on contextualized hidden lines unveiling the occluded shape structure. We iteratively merge noisy estimates from both models in a reverse diffusion process. Importantly, we demonstrate the importance of what we call a contextualized hidden-lines image over a plain hidden-lines image: our contextualized hidden-lines image contains both hidden lines and silhouette lines. Such contextualization allows us to achieve superior performance to a range of alternative configurations and to reconstruct hidden holes and hidden surfaces.

Item Behavioral Landmarks: Inferring Interactions from Data (The Eurographics Association, 2024)
Lemonari, Marilena; Charalambous, Panayiotis; Panayiotou, Andreas; Chrysanthou, Yiorgos; Pettré, Julien; Liu, Lingjie; Averkiou, Melinos
We aim to unravel complex agent-environment interactions from trajectories by explaining agent paths as combinations of predefined basic behaviors. We detect trajectory points signifying environment-driven behavior changes, ultimately disentangling interactions in space and time; our framework can be used for environment synthesis and authoring, as shown by our case studies.

Item Approaches to Nurturing Undergraduate Research in the Creative Industries - a UK Multi-Institutional Exploration (The Eurographics Association, 2024)
Anderson, Eike Falk; McLoughlin, Leigh; Gingrich, Oliver; Kanellos, Emmanouil; Adzhiev, Valery; Sousa Santos, Beatriz; Anderson, Eike
Undergraduate students aspiring to pursue careers in the creative industries, such as animation, video games, and computer art, require the ability to adapt and contribute to emerging and disruptive technologies. Cultivating research skills fosters this adaptability and innovation, which is why such skills are considered important by employers. Promoting undergraduate research in computer graphics and related techniques is therefore necessary to ensure that students graduate not only with vocational skills but also with the advanced research skills desired by the creative industries. This paper describes pedagogical approaches to nurturing undergraduate research in teaching, learning, and extracurricular activities, pioneered at three UK Higher Education Institutions. We share educational strategies and observations, reflecting on our experiences of supporting undergraduate research projects, many of which are practice-based. With this paper, we aim to contribute to a wider discussion of the challenges and opportunities of student-led research.

Item Distributed Surface Reconstruction (The Eurographics Association, 2024)
Marin, Diana; Komon, Patrick; Ohrhallinger, Stefan; Wimmer, Michael; Liu, Lingjie; Averkiou, Melinos
Recent advancements in scanning technologies and their increasing availability have shifted the focus from reconstructing surfaces from point clouds of small areas to large, e.g., city-wide scenes containing massive amounts of data. We adapt a surface reconstruction method to work in a distributed fashion on a high-performance cluster, reconstructing datasets with millions of vertices in seconds. We exploit the locality of the connectivity required by the reconstruction algorithm to efficiently divide and conquer the problem of creating triangulations from very large unstructured point clouds.

Item An Inverse Procedural Modeling Pipeline for Stylized Brush Stroke Rendering (The Eurographics Association, 2024)
Li, Hao; Guan, Zhongyue; Wang, Zeyu; Hu, Ruizhen; Charalambous, Panayiotis
Stylized brush strokes are crucial for digital artists to create drawings that express a desired artistic style. To obtain the ideal brush, artists need to spend much time manually tuning parameters and creating customized brushes, which hinders the completion, redrawing, or modification of digital drawings. This paper proposes an inverse procedural modeling pipeline for predicting brush parameters and rendering stylized strokes given a single sample drawing. Our pipeline involves patch segmentation as a preprocessing step, parameter prediction based on deep learning, and brush generation using a procedural rendering engine. Our method enhances the overall experience of digital drawing recreation by giving artists more intuitive control and consistent brush effects.

Item Diffusion Models for Visual Content Generation (The Eurographics Association, 2024)
Mitra, Niloy; Mania, Katerina; Artusi, Alessandro
Diffusion models are now the state of the art for producing images. These models have been trained on vast datasets and are increasingly repurposed for various image processing and conditional image generation tasks. We expect them to be widely used in Computer Graphics and related research areas. Image generation has opened up a rich range of new possibilities, and in this tutorial we guide you through the intricacies of understanding and using diffusion models. The tutorial is targeted at graphics researchers with an interest in image and video synthesis and manipulation. Attending it will enable participants to build a working knowledge of the core formulation, understand how to get started in this area, and study practical use cases for this new tool. Our goal is to get more researchers with computer graphics expertise to explore the open challenges in this topic and to investigate innovative use cases in CG contexts, in image synthesis and other media formats. From understanding the underlying principles to hands-on implementation, you will gain practical skills that bridge theory and application. Throughout the tutorial, we explore techniques for generating lifelike textures, manipulating details, and achieving remarkable visual effects. By the end, you will have a solid foundation in using diffusion models for image generation, ready to embark on your own creative projects. Join us as we navigate the intersection of computer graphics and diffusion models, where pixels become canvases and algorithms become brushes.

Item Topological Data Structure for Computer Graphics (The Eurographics Association, 2024)
Fábián, Gábor; Liu, Lingjie; Averkiou, Melinos
This research is motivated by the following well-known contradiction: in computer-aided design and modeling tasks, we generally represent surfaces using edge-based data structures such as the winged-edge [Bau75], half-edge [MP78] [CP98], or quad-edge [GS85] structures, whereas real-time computer graphics represents surfaces with face-vertex meshes, since surface rendering does not require an explicit representation of edges. In this research, we introduce a novel data structure for representing triangle meshes. Our representation is based on the concept of face-vertex meshes with adjacencies, but we use some extra information and new ideas that greatly simplify the implementation of algorithms.
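The entry above contrasts edge-based structures with the face-vertex meshes used for rendering, but it does not spell out its additional adjacency information. The sketch below therefore only illustrates the baseline concept it starts from: a face-vertex triangle mesh augmented with a per-face neighbour table, built from a simple edge-to-face map. Function and variable names are our own, not the paper's.

```python
import numpy as np

def face_adjacency(faces):
    """Per-face neighbour table for a face-vertex triangle mesh.

    faces: (F, 3) array of vertex indices.
    Returns an (F, 3) array adj where adj[f, i] is the face sharing the
    edge (faces[f, i], faces[f, (i + 1) % 3]), or -1 on the boundary.
    """
    adj = -np.ones(faces.shape, dtype=np.int64)
    edge_to_face = {}                    # undirected edge -> (face, local edge index)
    for f, (a, b, c) in enumerate(faces):
        for i, (u, v) in enumerate(((a, b), (b, c), (c, a))):
            key = (min(u, v), max(u, v))
            if key in edge_to_face:      # second face touching this edge
                g, j = edge_to_face.pop(key)
                adj[f, i] = g
                adj[g, j] = f
            else:                        # first face touching this edge
                edge_to_face[key] = (f, i)
    return adj

# Two triangles sharing the edge (1, 2): faces 0 and 1 are linked across
# that edge, all other slots stay -1 (boundary edges).
faces = np.array([[0, 1, 2], [2, 1, 3]])
print(face_adjacency(faces))
```

Storing only vertex positions, face indices, and such an adjacency table keeps the rendering-friendly face-vertex layout while still answering the neighbourhood queries that edge-based structures are usually used for.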
Item Virtual Instrument Performances (VIP): A Comprehensive Review (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Kyriakou, Theodoros; Alvarez de la Campa Crespo, Merce; Panayiotou, Andreas; Chrysanthou, Yiorgos; Charalambous, Panayiotis; Aristidou, Andreas; Aristidou, Andreas; Macdonnell, Rachel
Driven by recent advancements in Extended Reality (XR), the hype around the Metaverse, and real-time computer graphics, the transformation of the performing arts, particularly the digitization and visualization of musical experiences, is an ever-evolving landscape. This transformation offers significant potential for promoting inclusivity, fostering creativity, and enabling live performances in diverse settings. Despite this potential, however, the field of Virtual Instrument Performances (VIP) has remained relatively unexplored due to numerous challenges. These challenges arise from the complex, multi-modal nature of musical instrument performances; the need for high-precision motion capture under occlusions, including the intricate interactions between a musician's body and fingers and the instrument; the precise synchronization and seamless integration of various sensory modalities; and the need to accommodate variations in musicians' playing styles, facial expressions, and instrument-specific nuances. This comprehensive survey delves into the intersection of technology, innovation, and artistic expression in the domain of virtual instrument performances. It explores multi-modal databases of musical performances and investigates a wide range of data acquisition methods, encompassing diverse motion capture techniques, facial expression recording, and various approaches for capturing audio and MIDI (Musical Instrument Digital Interface) data. The survey also explores Music Information Retrieval (MIR) tasks, with a particular emphasis on Musical Performance Analysis (MPA), and offers an overview of work on Musical Instrument Performance Synthesis (MIPS), including recent advancements in generative models. The ultimate aim of this survey is to unveil technological limitations, initiate a dialogue about current challenges, and propose promising avenues for future research at the intersection of technology and the arts.

Item A Survey on Cage-based Deformation of 3D Models (The Eurographics Association and John Wiley & Sons Ltd., 2024)
Ströter, Daniel; Thiery, Jean-Marc; Hormann, Kai; Chen, Jiong; Chang, Qingjun; Besler, Sebastian; Mueller-Roemer, Johannes Sebastian; Boubekeur, Tamy; Stork, André; Fellner, Dieter W.; Aristidou, Andreas; Macdonnell, Rachel
Interactive deformation via control handles is essential in computer graphics for modeling 3D geometry. Deformation control structures include lattices for free-form deformation and skeletons for character articulation, but this report focuses on cage-based deformation. Cages are coarse polygonal meshes that encase the geometry to be deformed, enabling high-resolution deformation: users quickly manipulate the 3D geometry by deforming the cage. Due to their utility, cage-based deformation techniques appear in an increasing number of geometry modeling applications. For this reason, the computer graphics community has invested a great deal of effort over the past decade and beyond into improving automatic cage generation and cage-based deformation, and recent advances have significantly extended the practical capabilities of these methods. As a result, there is a large body of research on cage-based deformation. In this report, we provide a comprehensive overview of the current state of the art in cage-based deformation of 3D geometry. We discuss current methods in terms of deformation quality, practicality, and precomputation demands, and we highlight potential future research directions that address current issues and extend the set of practical applications. In conjunction with this survey, we publish an application that unifies the most relevant deformation methods. Our report is intended for computer graphics researchers, developers of interactive geometry modeling applications, and 3D modeling and character animation artists.
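As context for the survey above: many of the methods it covers follow the same two-step pattern, precomputing generalized barycentric coordinates of each model vertex with respect to the rest cage and then recombining those coordinates with the displaced cage vertices. The sketch below illustrates only that generic pattern; the normalized inverse-distance weights are a deliberately crude stand-in for the mean value, harmonic, or Green coordinates used by actual methods, and all names and values are our own assumptions.

```python
import numpy as np

def inverse_distance_coordinates(points, cage_vertices, eps=1e-8):
    """Stand-in generalized barycentric coordinates (normalized inverse
    distances). Real cage deformers use mean value, harmonic, or Green
    coordinates instead; the deformation step below is unchanged."""
    d = np.linalg.norm(points[:, None, :] - cage_vertices[None, :, :], axis=2)
    w = 1.0 / (d + eps)
    return w / w.sum(axis=1, keepdims=True)      # rows sum to 1

def deform(points, cage_rest, cage_deformed):
    """Cage-based deformation: coordinates are computed against the rest
    cage and then recombined with the deformed cage vertices."""
    phi = inverse_distance_coordinates(points, cage_rest)    # (P, C)
    return phi @ cage_deformed                               # (P, 3)

# Toy example: one point inside a tetrahedral cage that is stretched in x.
cage_rest = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
cage_deformed = cage_rest * np.array([2.0, 1.0, 1.0])
print(deform(np.array([[0.25, 0.25, 0.25]]), cage_rest, cage_deformed))
```

In practice the coordinates are precomputed and cached once per model, so interactive cage edits only cost the final weighted sum per vertex.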
Item EUROGRAPHICS 2024: Education Papers Frontmatter (The Eurographics Association, 2024)
Sousa Santos, Beatriz; Anderson, Eike; Sousa Santos, Beatriz; Anderson, Eike

Item A Highly Adaptable and Flexible Rendering Engine by Minimum API Bindings (The Eurographics Association, 2024)
Kim, Taejoon; Hu, Ruizhen; Charalambous, Panayiotis
This paper presents a method for embedding a rendering engine into different development environments with minimal API bindings. The method separates the engine interfaces into two levels: System APIs and User APIs. System APIs are the low-level functions that enable communication between the engine and the user environment, while User APIs are the high-level functions that provide rendering and beyond-rendering functionality to the user. By minimizing the number of System APIs, the method simplifies adapting the engine to various languages and platforms. Its applicability and flexibility are demonstrated by successfully embedding the engine in multiple environments, including C/C++, C#, Python, JavaScript, and Matlab. The engine is also shown to be versatile in diverse forms such as CLI renderers, Web-GUI-framework-based renderers, remote renderers, physical simulations, and more, while also allowing other rendering algorithms to be adopted into the engine with little effort.

Item VirtualVoxelCrowd: Rendering One Billion Characters at Real-Time (The Eurographics Association, 2024)
Yang, Jinyuan; Campbell, Abraham G.; Liu, Lingjie; Averkiou, Melinos
In this paper, we introduce VirtualVoxelCrowd, which addresses the challenges of data scale and overdraw in massive crowd rendering applications. The approach leverages multiple levels of detail and multi-pass culling to reduce rendering workload and overdraw. VirtualVoxelCrowd supports rendering of up to one billion characters, achieving unprecedented scale on standard graphics hardware while rendering subpixel-level voxels to prevent level-of-detail transition artifacts. This method offers significant improvements in handling massive animated crowd visualization, establishing new possibilities for dynamic, large-scale scene rendering.