My name is Dan (Alaina) Wang, and I am a postdoctoral researcher at the University of California San Diego, where I am advised by Professor Ravi Ramamoorthi. Previously, I worked at the University of Copenhagen in association with the AI Pioneer Centre, where I was advised by Professor Serge Belongie. I received my Ph.D. in Electrical and Computer Engineering from the University of British Columbia, where I was supervised by Professor Z. Jane Wang and Professor Tim Salcudean.

My research lies at the intersection of Computer Graphics, Computer Vision, and Artificial Intelligence, with the overarching goal of advancing AI systems that seamlessly integrate environmental understanding with adaptive interaction to enhance human life. I focus on developing next-generation 3D reconstruction and rendering methods for human characters and complex dynamic scenes. By unifying classical graphics principles with modern deep learning approaches, my work aims to build systems that are not only high-performing, but also interpretable and physically grounded.

Research Vision

Toward this goal, I pursue four research directions:

  1. Physics-Informed World Simulation: Advance physics-grounded world models that faithfully capture real-world dynamics and enable realistic simulation of object interactions and material behaviors.
  2. Controllable & Expressive Generative Models: Design generative frameworks that balance high-quality synthesis with fine-grained user control, enabling precise editing of geometry, appearance, and semantic attributes.
  3. Memory-Efficient 3D Representation: Develop representations that are simultaneously compact, high-quality, and easy to optimize, enabling efficient storage, transmission, and real-time rendering of complex 3D scenes.
  4. Explainable & Trustworthy AI: Transform opaque neural models into transparent and interpretable systems with explicit semantic structure, ensuring reliable, controllable, and accountable AI-driven decisions.

Selected Awards

Publications


(* indicates equal contributions)

PhysConvex: Physics-Informed 3D Dynamic Convex Radiance Fields for Reconstruction and Simulation
Dan Wang, Xinrui Cui, Serge Belongie, Ravi Ramamoorthi
Revisiting the Perception-Distortion Trade-off with Spatial-Semantic Guided Super-Resolution
Dan Wang, Haiyan Sun, Shan Du, Z. Jane Wang, Zhaochong An, Serge Belongie, Xinrui Cui*
Differentiable Light Transport with Gaussian Surfels via Adapted Radiosity for Efficient Relighting and Geometry Reconstruction
Kaiwen Jiang, Jia-Mu Sun, Zilu Li, Dan Wang, Tzu-Mao Li, Ravi Ramamoorthi
Coarse-To-Fine Tensor Trains for Compact Visual Representations
Sebastian Loeschcke, Dan Wang, Christian Leth-Espensen, Serge Belongie, J Michael Kastoryano, Sagie Benaim
InNeRF: Learning Interpretable Radiance Fields for Generalizable 3D Scene Representation and Rendering
Dan Wang, Xinrui Cui
Multi-View 3D Reconstruction With Transformers
Dan Wang, Xinrui Cui, Xun Chen, Zhengxia Zou, Tianyang Shi, Septimiu Salcudean, Z. Jane Wang, Rabab Ward
ICCV 2021 Oral Presentation
Interpreting Bottom-Up Decision-Making of CNNs via Hierarchical Inference
Dan Wang, Xinrui Cui, Xun Chen, Rabab Ward, Z. Jane Wang
CHIP: Channel-Wise Disentangled Interpretation of Deep Convolutional Neural Networks
Xinrui Cui, Dan Wang, Z. Jane Wang

Contact

I welcome research collaborations and academic discussions related to computer vision, graphics, machine learning, neural rendering, and interpretable AI. For research inquiries, please reach out through my academic profile or the social links above.