"Neural Appearance Modeling from Single Images"
Jay Idema and Pieter Peers
CoRR, abs/2406.18593,
June 2024
Abstract
We propose a neural network for material appearance modeling that visualizes plausible, spatially-varying materials under diverse view and lighting conditions, using only a single photograph of the material under co-located light and view as input. Our architecture comprises two network stages: a network that infers learned per-pixel neural parameters of a material from the single input photograph, and a render network that, similar to a BRDF, evaluates the material's appearance from these neural parameters. We train our model on a set of 312,165 synthetic spatially-varying exemplars. Because our method infers learned neural parameters rather than analytical BRDF parameters, it can encode anisotropy and global illumination (inter-pixel interaction) information into individual pixel parameters. We compare our model's performance against prior work and demonstrate the feasibility of the render network as a BRDF by implementing it in the Mitsuba 3 rendering engine. Finally, we briefly discuss the capability of neural parameters to encode global illumination information.
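To make the two-stage design concrete, below is a minimal PyTorch sketch under stated assumptions: a convolutional estimator that maps the single co-located photograph to per-pixel neural parameter maps, and an MLP render network that evaluates RGB reflectance from a pixel's neural parameters plus light and view directions. The layer widths, the 32-dimensional parameter size, and the class names (ParameterEstimator, NeuralRenderer) are illustrative assumptions, not the paper's actual architecture.

import torch
import torch.nn as nn

class ParameterEstimator(nn.Module):
    """Stage 1 (sketch): infers per-pixel neural parameter maps from a
    single co-located flash photograph. Layer sizes are assumptions."""
    def __init__(self, n_params=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, n_params, 3, padding=1),
        )

    def forward(self, photo):      # photo: (B, 3, H, W)
        return self.net(photo)     # params: (B, n_params, H, W)

class NeuralRenderer(nn.Module):
    """Stage 2 (sketch): a BRDF-like MLP that maps a pixel's neural
    parameters plus light/view directions to RGB reflectance."""
    def __init__(self, n_params=32, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(n_params + 6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, params, wi, wo):
        # params: (N, n_params); wi, wo: (N, 3) unit light/view directions
        x = torch.cat([params, wi, wo], dim=-1)
        return self.mlp(x)         # RGB per pixel query

if __name__ == "__main__":
    estimator, renderer = ParameterEstimator(), NeuralRenderer()
    photo = torch.rand(1, 3, 256, 256)              # single input photo
    params = estimator(photo)                       # (1, 32, 256, 256)
    flat = params.permute(0, 2, 3, 1).reshape(-1, 32)
    wi = torch.tensor([[0.0, 0.0, 1.0]]).expand(flat.shape[0], 3)
    rgb = renderer(flat, wi, wi)                    # co-located: view == light
    image = rgb.reshape(1, 256, 256, 3)

Decoupling estimation from rendering in this way is what allows the render network to be queried like a BRDF inside a renderer such as Mitsuba 3: the inferred per-pixel neural parameters act as a texture, and the MLP is evaluated once per shading query.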
Download
Code
Bibtex
@misc{Idema:2024:NAM,
author = {Idema, Jay and Peers, Pieter},
title = {Neural Appearance Modeling from Single Images},
month = {June},
year = {2024},
howpublished = {CoRR, abs/2406.18593},
url = {https://arxiv.org/abs/2406.18593},
}