3D super-resolution (3DSR) aims to reconstruct high-resolution (HR) 3D scenes from low-resolution (LR) multi-view images. Existing methods rely on dense LR inputs and per-scene optimization, restricting the high-frequency priors available for constructing HR 3D Gaussian Splatting (3DGS) to those inherited from pretrained 2D super-resolution (2DSR) models. This severely limits reconstruction fidelity, cross-scene generalization, and real-time usability. We propose to reformulate 3DSR as a direct feed-forward mapping from sparse LR views to HR 3DGS representations, enabling the model to autonomously learn 3D-specific high-frequency geometry and appearance from large-scale, multi-scene data. This fundamentally changes how 3DSR acquires high-frequency knowledge and enables robust generalization to unseen scenes. Specifically, we introduce SR3R, a feed-forward framework that directly predicts HR 3DGS representations from sparse LR views via a learned mapping network. To further enhance reconstruction fidelity, we incorporate Gaussian offset learning and feature refinement, which stabilize reconstruction and sharpen high-frequency details. SR3R is plug-and-play and can be paired with any feed-forward 3DGS reconstruction backbone: the backbone provides an LR 3DGS scaffold, and SR3R upscales it to an HR 3DGS. Extensive experiments on three 3D benchmarks demonstrate that SR3R surpasses state-of-the-art (SOTA) 3DSR methods and achieves strong zero-shot generalization, even outperforming SOTA per-scene optimization methods on unseen scenes. Code will be released upon publication.
Given two LR input views, a feed-forward 3DGS backbone produces an LR 3DGS, which is then densified via Gaussian Shuffle Split to form a structural scaffold. The LR views are upsampled and processed by our mapping network: a ViT encoder with feature refinement integrates LR 3DGS-aware cues, and a ViT decoder performs cross-view fusion. The Gaussian offset learning module then predicts residual offsets to the dense scaffold, yielding the final HR 3DGS for high-fidelity rendering.
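The two geometric stages of the pipeline above (densifying the LR 3DGS into a scaffold, then refining it with predicted residual offsets) can be sketched in NumPy. This is a minimal illustration, not the paper's implementation: the split factor `k`, the scale shrinkage rule, the jitter distribution, and the offset clamp `max_step` are all assumptions, and the network that predicts the offsets is replaced by a zero placeholder.

```python
import numpy as np

def gaussian_shuffle_split(means, scales, k=4, rng=None):
    """Toy densification: split each LR Gaussian into k children jittered
    within the parent's footprint, with shrunken scales. (A guess at the
    paper's Gaussian Shuffle Split; the exact rule is not specified here.)"""
    rng = rng or np.random.default_rng(0)
    child_means = np.repeat(means, k, axis=0)          # (N*k, 3)
    child_scales = np.repeat(scales, k, axis=0) / np.sqrt(k)
    # Jitter each child inside its parent's extent.
    child_means = child_means + rng.normal(scale=child_scales)
    return child_means, child_scales

def apply_offsets(scaffold_means, offsets, max_step=0.05):
    """Residual offset learning: clamp the predicted offsets so refined
    Gaussians stay close to the structural scaffold (stabilization)."""
    return scaffold_means + np.clip(offsets, -max_step, max_step)

# Toy usage: 100 LR Gaussians -> 400 scaffold Gaussians -> refined HR means.
rng = np.random.default_rng(1)
means = rng.normal(size=(100, 3))
scales = np.full((100, 3), 0.1)
dense_means, dense_scales = gaussian_shuffle_split(means, scales, k=4)
offsets = np.zeros_like(dense_means)  # stand-in for the mapping network's output
hr_means = apply_offsets(dense_means, offsets)
print(hr_means.shape)  # (400, 3)
```

The clamp in `apply_offsets` reflects the caption's framing of offsets as residuals on a fixed scaffold: the scaffold carries coarse structure, and the learned correction is bounded so refinement cannot destroy it.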
@misc{feng2026sr3rrethinkingsuperresolution3d,
  title={SR3R: Rethinking Super-Resolution 3D Reconstruction With Feed-Forward Gaussian Splatting},
  author={Xiang Feng and Xiangbo Wang and Tieshi Zhong and Chengkai Wang and Yiting Zhao and Tianxiang Xu and Zhenzhong Kuang and Feiwei Qin and Xuefei Yin and Yanming Zhu},
  year={2026},
  eprint={2602.24020},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2602.24020}
}