"Learning Texture Generators for 3D Shape Collections from Internet Photo Sets"
Rui Yu, Yue Dong, Pieter Peers, and Xin Tong

32nd British Machine Vision Conference, November 2021
Abstract
We present a method for decorating existing 3D shape collections by learning a texture generator from internet photo collections. We condition StyleGAN-based texture generation on shape geometry by injecting multiview silhouettes of the 3D shape via SPADE-IN. To bridge the inherent domain gap between the multiview silhouettes rendered from the shape collection and the distribution of silhouettes in the photo collection, we train on a mixture of silhouettes from both collections. Furthermore, we do not assume that any exemplar in the photo collection is observed from more than one vantage point, and instead leverage multiview discriminators to promote semantic view-consistency in the generated textures. We verify the efficacy of our design on three real-world 3D shape collections.
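As a rough illustration of the silhouette conditioning described in the abstract, the sketch below implements a SPADE-style spatially-adaptive modulation on top of instance normalization in NumPy. It is a minimal sketch, not the paper's implementation: a single per-pixel linear map (weights `w_gamma`, `w_beta`) stands in for the small convolutional network that predicts the modulation parameters from the silhouette mask in practice, and all names are illustrative.

```python
import numpy as np

def instance_norm(x, eps=1e-5):
    # x: (C, H, W) feature map; normalize each channel over its
    # spatial dimensions (no learned affine parameters here).
    mean = x.mean(axis=(1, 2), keepdims=True)
    var = x.var(axis=(1, 2), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def spade_in(x, silhouette, w_gamma, w_beta):
    # SPADE-IN sketch: modulate instance-normalized features with
    # spatially-varying scale (gamma) and shift (beta) derived from
    # the silhouette mask. A per-channel linear map stands in for the
    # conv network used in practice (an assumption of this sketch).
    # x: (C, H, W) features; silhouette: (1, H, W) binary mask.
    gamma = w_gamma[:, None, None] * silhouette  # broadcast to (C, H, W)
    beta = w_beta[:, None, None] * silhouette
    return instance_norm(x) * (1.0 + gamma) + beta
```

Outside the silhouette the predicted gamma and beta are zero, so the features there pass through unmodulated; inside the silhouette the generator's features are rescaled and shifted per pixel, which is how the mask steers the texture synthesis toward the shape's visible region.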


Download
Supplementary Material
Code
Bibtex
@conference{Yu:2021:LTG,
author = {Yu, Rui and Dong, Yue and Peers, Pieter and Tong, Xin},
title = {Learning Texture Generators for 3D Shape Collections from Internet Photo Sets},
month = {November},
year = {2021},
booktitle = {32nd British Machine Vision Conference},
}