Neural Shadow Mapping

(In SIGGRAPH '22 Conference Proceedings)

Our hard and soft shadowing method approaches the quality of offline ray tracing while striking a favorable position on the performance-accuracy spectrum. On the high-performance end, we produce higher-quality results than 𝑛 × 𝑛 Moment Shadow Maps (MSM-𝑛). We require only vanilla shadow mapping inputs to generate visual (and temporal) results that approach the ray-traced reference, surpassing more costly denoised interactive ray-traced methods.


We present a neural extension of basic shadow mapping for fast, high-quality hard and soft shadows. We compare favorably to fast pre-filtering shadow mapping methods, all while producing visual results on par with ray-traced hard and soft shadows. We show that combining memory bandwidth-aware architecture specialization with careful temporal-window training leads to a fast, compact, and easy-to-train neural shadowing method. Our technique is memory bandwidth conscious, eliminates the need for post-process temporal anti-aliasing or denoising, and supports scenes with dynamic view, emitters, and geometry while remaining robust to unseen objects.
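To make "vanilla shadow mapping inputs" concrete, the sketch below shows the classic per-pixel shadow-map depth comparison that produces the kind of screen-space features a learned shadowing stage could consume. This is an illustrative assumption, not the paper's code: the function name, feature choice, and toy data are hypothetical.

```python
import numpy as np

def shadow_map_features(light_depth_map, frag_light_depth, frag_uv, bias=1e-3):
    """Hypothetical sketch: sample a light-space depth map and return two
    classic shadow-test features per fragment -- the binary visibility from
    the depth comparison, and the signed fragment-to-occluder depth gap."""
    h, w = light_depth_map.shape
    # Nearest-neighbor lookup into the shadow map at the fragment's light-space UV.
    x = np.clip((frag_uv[..., 0] * (w - 1)).astype(int), 0, w - 1)
    y = np.clip((frag_uv[..., 1] * (h - 1)).astype(int), 0, h - 1)
    occluder_depth = light_depth_map[y, x]
    depth_diff = frag_light_depth - occluder_depth        # > 0: something nearer the light
    visibility = (depth_diff <= bias).astype(np.float32)  # 1 = lit, 0 = in shadow
    return visibility, depth_diff

# Toy example: a 4x4 light-space depth map with a near occluder in one corner.
light_depth = np.full((4, 4), 10.0)
light_depth[:2, :2] = 2.0                  # occluder at depth 2 from the light

uv = np.array([[0.0, 0.0], [1.0, 1.0]])    # two fragments, in light-space UV
frag_depth = np.array([5.0, 5.0])          # both fragments lie at depth 5

vis, diff = shadow_map_features(light_depth, frag_depth, uv)
# fragment 0 sits behind the occluder (shadowed); fragment 1 sees the light
```

A pre-filtering method such as MSM filters statistics of these depths, whereas a neural stage would instead map features like `visibility` and `depth_diff` (plus auxiliary G-buffer channels) to a filtered shadow term.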


Pre-recorded presentation


Paper: neuralShadowMapping.pdf (3.5MB)
Supplemental: neuralShadowMappingSupplemental.pdf (2.3MB)
Video results: gDrive (MP4, 902MB)
Bibtex: nsm.bib


We thank the reviewers for their constructive feedback, the ORCA for the Amazon Lumberyard Bistro model, the Stanford CG Lab for the Bunny, Buddha, and Dragon models, Marko Dabrovic for the Sponza model and Morgan McGuire for the Bistro, Conference and Living Room models. This work was done when Sayantan was an intern at Meta Reality Labs Research. While at McGill University, he was also supported by a Ph.D. scholarship from the Fonds de recherche du Québec – nature et technologies.