
Browsing by Author "Raghuvanshi, Nikunj"

Now showing 1 - 2 of 2
    Efficient acoustic perception for virtual AI agents
    (ACM, 2021) Chemistruck, Mike; Allen, Andrew; Snyder, John; Raghuvanshi, Nikunj; Narain, Rahul and Neff, Michael and Zordan, Victor
We model acoustic perception in AI agents efficiently within complex scenes with many sound events. The key idea is to employ perceptual parameters that capture how each sound event propagates through the scene to the agent's location. This naturally conforms virtual perception to human perception. We propose a simplified auditory masking model that limits localization capability in the presence of distracting sounds. We show that anisotropic reflections as well as the initial sound serve as useful localization cues. Our system is simple, fast, and modular and obtains natural results in our tests, letting agents navigate through passageways and portals by sound alone, and anticipate or track occluded but audible targets. Source code is provided.
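The masking idea in the abstract can be illustrated with a toy rule: an agent localizes a sound event only when its level stands out against the combined level of competing sounds. The function below is a hypothetical sketch, not the paper's model; the 10 dB margin, the linear confidence ramp, and the function name are all assumptions of this sketch.

```python
import math

def localization_confidence(target_db, distractor_dbs, margin_db=10.0):
    """Return a 0..1 confidence that the target sound can be localized.

    Illustrative assumption: confidence ramps linearly from 0 (target is
    margin_db below the combined maskers) to 1 (target at or above them).
    """
    if not distractor_dbs:
        return 1.0  # nothing competing: full localization ability
    # Combine distractor levels by summing energies in the linear domain.
    masker_energy = sum(10.0 ** (d / 10.0) for d in distractor_dbs)
    masker_db = 10.0 * math.log10(masker_energy)
    advantage = target_db - masker_db
    # Clamp the ramp into [0, 1].
    return max(0.0, min(1.0, advantage / margin_db + 1.0))
```

With this rule, a 60 dB target among two 60 dB distractors yields a reduced confidence (the distractors sum to about 63 dB), while a target 30 dB below a single distractor is fully masked.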
Interactive Sound Propagation for Dynamic Scenes Using 2D Wave Simulation
    (The Eurographics Association and John Wiley & Sons Ltd., 2020) Rosen, Matthew; Godin, Keith W.; Raghuvanshi, Nikunj; Bender, Jan and Popa, Tiberiu
We present a technique to model wave-based sound propagation to complement visual animation in fully dynamic scenes. We employ 2D wave simulation that captures geometry-based diffraction effects such as obstruction, reverberation, and directivity of perceptually-salient initial sound at the source and listener. We show real-time performance on a single CPU core on modestly-sized scenes that are nevertheless topologically complex. Our key ideas are to exploit reciprocity and use a perceptual encoding and rendering framework. These allow the use of low-frequency finite-difference simulations on static scene snapshots. Our results show plausible audio variation that remains robust to motion and geometry changes. We suggest that wave solvers can be a practical approach to real-time dynamic acoustics. We share the complete C++ code of our "Planeverb" system.
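The low-frequency finite-difference simulation the abstract mentions can be sketched with one leapfrog FDTD time step of the 2D scalar wave equation. This is a minimal illustrative sketch, not the Planeverb implementation: the function name, the zero (Dirichlet) boundary treatment, and the chosen Courant number are assumptions of this example.

```python
import numpy as np

def step_wave_2d(u_prev, u_curr, courant):
    """One leapfrog FDTD update of the 2D wave equation u_tt = c^2 (u_xx + u_yy).

    courant = c*dt/dx must satisfy courant <= 1/sqrt(2) for stability
    of the 5-point stencil in 2D.
    """
    # Discrete Laplacian via shifted copies of the field.
    lap = (np.roll(u_curr, 1, axis=0) + np.roll(u_curr, -1, axis=0)
           + np.roll(u_curr, 1, axis=1) + np.roll(u_curr, -1, axis=1)
           - 4.0 * u_curr)
    u_next = 2.0 * u_curr - u_prev + courant ** 2 * lap
    # Clamp the domain edges to zero (simple Dirichlet boundary).
    u_next[0, :] = u_next[-1, :] = 0.0
    u_next[:, 0] = u_next[:, -1] = 0.0
    return u_next

# Propagate an impulse from the grid centre for a few steps.
n = 64
u_prev = np.zeros((n, n))
u_curr = np.zeros((n, n))
u_curr[n // 2, n // 2] = 1.0  # impulse source
S = 0.5                       # Courant number, below the 2D limit 1/sqrt(2)
for _ in range(10):
    u_prev, u_curr = u_curr, step_wave_2d(u_prev, u_curr, S)
```

A real system would run such a solver on a 2D slice of the scene, then encode perceptual parameters (arrival delay, loudness, directivity) from the resulting field rather than rendering the raw pressure values.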

Eurographics Association © 2013-2025  |  System hosted at Graz University of Technology      
DSpace software copyright © 2002-2025 LYRASIS
