"RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination"
Chong Zeng, Yue Dong, Pieter Peers, Hongzhi Wu, and Xin Tong

ACM SIGGRAPH 2025 Conference Proceedings, August 2025
Abstract
We present RenderFormer, a neural rendering pipeline that directly renders an image from a triangle-based representation of a scene with full global illumination effects and that does not require per-scene training or fine-tuning. Instead of taking a physics-centric approach to rendering, we formulate rendering as a sequence-to-sequence transformation where a sequence of tokens representing triangles with reflectance properties is converted to a sequence of output tokens representing small patches of pixels. RenderFormer follows a two-stage pipeline: a view-independent stage that models triangle-to-triangle light transport, and a view-dependent stage that transforms a token representing a bundle of rays to the corresponding pixel values, guided by the triangle sequence from the view-independent stage. Both stages are based on the transformer architecture and are learned with minimal prior constraints. We demonstrate and evaluate RenderFormer on scenes with varying complexity in shape and light transport.
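The two-stage structure described in the abstract can be sketched with plain attention operations: self-attention over triangle tokens stands in for the view-independent light-transport stage, and cross-attention from ray-bundle tokens to those triangle tokens stands in for the view-dependent stage. All dimensions, token counts, and weights below are hypothetical placeholders, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: each query token attends to all key tokens.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

d = 16                 # token width (hypothetical)
n_tris, n_rays = 8, 4  # triangle tokens and ray-bundle tokens (hypothetical)

# Stage 1 (view-independent): self-attention over triangle tokens,
# standing in for learned triangle-to-triangle light transport.
tri_tokens = rng.normal(size=(n_tris, d))
tri_tokens = tri_tokens + attention(tri_tokens, tri_tokens, tri_tokens)

# Stage 2 (view-dependent): ray-bundle tokens cross-attend to the
# transported triangle tokens, then decode into pixel patches.
ray_tokens = rng.normal(size=(n_rays, d))
ray_tokens = ray_tokens + attention(ray_tokens, tri_tokens, tri_tokens)

W_out = rng.normal(size=(d, 4 * 4 * 3))  # linear decode to 4x4 RGB patches
patches = (ray_tokens @ W_out).reshape(n_rays, 4, 4, 3)
print(patches.shape)  # (4, 4, 4, 3)
```

The key property the sketch mirrors is that stage 1 depends only on the triangle sequence, so its output can be reused across viewpoints, while stage 2 is evaluated per bundle of camera rays.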


Download
Videos
Supplementary Material
Additional Results
Code
Bibtex
@conference{Zeng:2025:RFT,
  author = {Zeng, Chong and Dong, Yue and Peers, Pieter and Wu, Hongzhi and Tong, Xin},
  title = {RenderFormer: Transformer-based Neural Rendering of Triangle Meshes with Global Illumination},
  month = {August},
  year = {2025},
  booktitle = {ACM SIGGRAPH 2025 Conference Proceedings},
  doi = {10.1145/3721238.3730595},
}