Browsing by Author "Ritchie, Daniel"
Now showing 1 - 5 of 5
Item: Learning Generative Models of 3D Structures (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Chaudhuri, Siddhartha; Ritchie, Daniel; Wu, Jiajun; Xu, Kai; Zhang, Hao
Editors: Mantiuk, Rafal and Sundstedt, Veronica
Abstract: 3D models of objects and scenes are critical to many academic disciplines and industrial applications. Of particular interest is the emerging opportunity for 3D graphics to serve artificial intelligence: computer vision systems can benefit from synthetically generated training data rendered from virtual 3D scenes, and robots can be trained to navigate in and interact with real-world environments by first acquiring skills in simulated ones. One of the most promising ways to achieve this is by learning and applying generative models of 3D content: computer programs that can synthesize new 3D shapes and scenes. To allow users to edit and manipulate the synthesized 3D content to achieve their goals, the generative model should also be structure-aware: it should express 3D shapes and scenes using abstractions that allow manipulation of their high-level structure. This state-of-the-art report surveys historical work and recent progress on learning structure-aware generative models of 3D shapes and scenes. We present fundamental representations of 3D shape and scene geometry and structures, describe prominent methodologies including probabilistic models, deep generative models, program synthesis, and neural networks for structured data, and cover many recent methods for structure-aware synthesis of 3D shapes and indoor scenes.

Item: Learning Generative Models of 3D Structures (The Eurographics Association, 2019)
Authors: Chaudhuri, Siddhartha; Ritchie, Daniel; Xu, Kai; Zhang, Hao (Richard)
Editors: Jakob, Wenzel and Puppo, Enrico
Abstract: Many important applications demand 3D content, yet 3D modeling is a notoriously difficult and inaccessible activity. This tutorial provides a crash course in one of the most promising approaches for democratizing 3D modeling: learning generative models of 3D structures. Such generative models typically describe a statistical distribution over a space of possible 3D shapes or 3D scenes, as well as a procedure for sampling new shapes or scenes from that distribution. To be useful to non-experts for design purposes, a generative model must represent 3D content at a high level of abstraction in which the user can express their goals; that is, it must be structure-aware. In this tutorial, we will take a deep dive into the most exciting methods for building generative models of both individual shapes and composite scenes, highlighting how standard data-driven methods need to be adapted, or new methods developed, to create models that are both generative and structure-aware. The tutorial assumes knowledge of the fundamentals of computer graphics, linear algebra, and probability, though a quick refresher of important algorithmic ideas from geometric analysis and machine learning is included. Attendees should come away from this tutorial with a broad understanding of historical and current work in generative 3D modeling, as well as familiarity with the mathematical tools needed to start their own research or product development in this area.
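Both abstracts above define a generative model as a statistical distribution over structured 3D content together with a procedure for sampling from it. As a rough illustration of what "structure-aware" means in practice, the following toy sketch (entirely invented for illustration, not code from either publication) expresses a distribution over chair part hierarchies as a probabilistic grammar and samples from it:

```python
import random

# Toy structure-aware generative model: a probabilistic grammar over shape
# parts. Each rule maps a symbol to (probability, children) alternatives.
# The symbols and probabilities here are made up for demonstration.
RULES = {
    "chair": [(0.7, ["back", "seat", "legs"]),    # chair with a back
              (0.3, ["seat", "legs"])],           # stool-like variant
    "legs":  [(0.6, ["leg", "leg", "leg", "leg"]),
              (0.4, ["pedestal"])],
}

def sample(symbol):
    """The 'procedure for sampling new shapes from the distribution':
    recursively expand symbols until only terminal parts remain."""
    if symbol not in RULES:                       # terminal part, e.g. "seat"
        return symbol
    r, cumulative = random.random(), 0.0
    for prob, children in RULES[symbol]:
        cumulative += prob
        if r <= cumulative:
            return {symbol: [sample(child) for child in children]}

print(sample("chair"))
# e.g. {'chair': ['back', 'seat', {'legs': ['leg', 'leg', 'leg', 'leg']}]}
```

The sampled part tree is exactly the kind of high-level abstraction a user could edit (for instance, forcing four legs) before geometry is synthesized for each part.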
Item: Learning Style Compatibility Between Objects in a Real-World 3D Asset Database (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Liu, Yifan; Tang, Ruolan; Ritchie, Daniel
Editors: Lee, Jehee and Theobalt, Christian and Wetzstein, Gordon
Abstract: Large 3D asset databases are critical for designing virtual worlds, and using them effectively requires techniques for efficient querying and navigation. One important form of query is search by style compatibility: given a query object, find others that would be visually compatible if used in the same scene. In this paper, we present a scalable, learning-based approach for solving this problem which is designed for use with real-world 3D asset databases; we conduct experiments on 121 3D asset packages containing around 4000 3D objects from the Unity Asset Store. By leveraging the structure of the object packages, we introduce a technique to synthesize training labels for metric learning that work as well as human labels. These labels can grow exponentially with the number of objects, allowing our approach to scale to large real-world 3D asset databases without the need for expensive human training labels. We use these synthetic training labels in a metric learning model that analyzes the in-engine rendered appearance of an object (combining geometry, material, and texture), whereas prior work considers only object geometry, or disjoint geometry and texture features. Through an ablation experiment, we find that using this representation yields better results than using renders that lack texture, material, or both.
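The abstract above describes metric learning over in-engine renders using synthetically generated labels. A minimal sketch of that setup in PyTorch follows; the embedding architecture, image size, and all names here are assumptions made for illustration, not the paper's actual model:

```python
import torch
import torch.nn as nn

# Illustrative placeholder for an image-embedding network over object renders.
class StyleEmbedding(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(        # stand-in for a CNN over renders
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 256),
            nn.ReLU(),
            nn.Linear(256, dim),
        )

    def forward(self, render):
        return self.net(render)

model = StyleEmbedding()
loss_fn = nn.TripletMarginLoss(margin=1.0)

# Synthetic labels from package structure: anchor and positive are renders of
# objects from the same asset package; the negative comes from a different one.
anchor   = torch.randn(8, 3, 64, 64)     # batch of rendered objects
positive = torch.randn(8, 3, 64, 64)
negative = torch.randn(8, 3, 64, 64)

loss = loss_fn(model(anchor), model(positive), model(negative))
loss.backward()
```

The package structure supplies these triplets for free, which is why the number of usable training labels can grow so quickly with database size.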
Item: Neurosymbolic Models for Computer Graphics (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Authors: Ritchie, Daniel; Guerrero, Paul; Jones, R. Kenny; Mitra, Niloy J.; Schulz, Adriana; Willis, Karl D. D.; Wu, Jiajun
Editors: Bousseau, Adrien and Theobalt, Christian
Abstract: Procedural models (i.e. symbolic programs that output visual data) are a historically popular method for representing graphics content: vegetation, buildings, textures, etc. They offer many advantages: interpretable design parameters, stochastic variations, high-quality outputs, compact representation, and more. But they also have some limitations, such as the difficulty of authoring a procedural model from scratch. More recently, AI-based methods, and especially neural networks, have become popular for creating graphics content. These techniques allow users to directly specify desired properties of the artifact they want to create (via examples, constraints, or objectives), while a search, optimization, or learning algorithm takes care of the details. However, this ease of use comes at a cost, as it is often hard to interpret or manipulate these representations. In this state-of-the-art report, we summarize research on neurosymbolic models in computer graphics: methods that combine the strengths of both AI and symbolic programs to represent, generate, and manipulate visual data. We survey recent work applying these techniques to represent 2D shapes, 3D shapes, and materials & textures. Along the way, we situate each prior work in a unified design space for neurosymbolic models, which helps reveal underexplored areas and opportunities for future research.

Item: Roominoes: Generating Novel 3D Floor Plans From Existing 3D Rooms (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Authors: Wang, Kai; Xu, Xianghao; Lei, Leon; Ling, Selena; Lindsay, Natalie; Chang, Angel Xuan; Savva, Manolis; Ritchie, Daniel
Editors: Digne, Julie and Crane, Keenan
Abstract: Realistic 3D indoor scene datasets have enabled significant recent progress in computer vision, scene understanding, autonomous navigation, and 3D reconstruction. But the scale, diversity, and customizability of existing datasets are limited, and it is time-consuming and expensive to scan and annotate more. Fortunately, combinatorics is on our side: there are enough individual rooms in existing 3D scene datasets, if there were but a way to recombine them into new layouts. In this paper, we propose the task of generating novel 3D floor plans from existing 3D rooms. We identify three sub-tasks of this problem: generation of a 2D layout, retrieval of compatible 3D rooms, and deformation of the 3D rooms to fit the layout. We then discuss different strategies for solving the problem and design two representative pipelines: one uses available 2D floor plans to guide selection and deformation of 3D rooms; the other learns to retrieve a set of compatible 3D rooms and combine them into novel layouts. We design a set of metrics that evaluate the generated results with respect to each of the three sub-tasks and show that different methods trade off performance on these sub-tasks. Finally, we survey downstream tasks that benefit from generated 3D scenes and discuss strategies for selecting the methods most appropriate for the demands of these tasks.
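The Roominoes abstract decomposes the problem into three sub-tasks composed in sequence. The schematic below shows that decomposition; all types, function names, and bodies are placeholders invented for illustration (the paper proposes two concrete pipelines with learned components):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Room2D:
    polygon: List[tuple]   # 2D outline of one room slot in the floor plan

@dataclass
class Room3D:
    mesh_id: str           # reference to an existing scanned 3D room

def generate_layout(num_rooms: int) -> List[Room2D]:
    """Sub-task 1: produce a 2D floor plan (trivially: unit squares in a row)."""
    return [Room2D([(i, 0), (i + 1, 0), (i + 1, 1), (i, 1)])
            for i in range(num_rooms)]

def retrieve_room(slot: Room2D, database: List[Room3D]) -> Room3D:
    """Sub-task 2: pick a compatible existing 3D room for a layout slot.
    A real system would score geometric and stylistic compatibility."""
    return database[0]

def deform_to_fit(room: Room3D, slot: Room2D) -> Room3D:
    """Sub-task 3: warp the retrieved room so its walls match the slot."""
    return room  # placeholder: identity deformation

def roominoes_pipeline(database: List[Room3D], num_rooms: int) -> List[Room3D]:
    layout = generate_layout(num_rooms)
    return [deform_to_fit(retrieve_room(slot, database), slot) for slot in layout]

rooms = roominoes_pipeline([Room3D("scan_001"), Room3D("scan_002")], num_rooms=3)
```

The paper's two pipelines differ mainly in where the layout comes from: one starts from existing 2D floor plans and deforms retrieved rooms to match, while the other learns retrieval and combination jointly.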