FlowIBR: Leveraging Pre-Training for Efficient Neural Image-Based Rendering of Dynamic Scenes

1KTH Royal Institute of Technology   2Chalmers University of Technology  

Preprint

Abstract

We introduce a novel approach for monocular novel view synthesis of dynamic scenes. Existing techniques already show impressive rendering quality but tend to focus on optimization within a single scene without leveraging prior knowledge. This limitation has been primarily attributed to the lack of datasets of dynamic scenes available for training and the diversity of scene dynamics.

Our method FlowIBR circumvents these issues by integrating a neural image-based rendering method, pre-trained on a large corpus of widely available static scenes, with a per-scene optimized scene flow field. Utilizing this flow field, we bend the camera rays to counteract the scene dynamics, thereby presenting the dynamic scene as if it were static to the rendering network. The proposed method reduces per-scene optimization time by an order of magnitude, achieving results comparable to existing methods, all on a single consumer-grade GPU.

Method

a) An image at an arbitrary position (orange camera) is synthesised from existing observations (black cameras) collected at different times. Problem: due to the skater's motion, the skater does not lie on the epipolar lines of the camera ray, which image-based rendering requires. b) We model scene motion with a per-scene learned scene flow field. c) The scene flow is used to compensate for the motion by adjusting the camera ray. d) GNT, a pre-trained neural IBR method for static scenes, is used for image synthesis.
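The ray-bending step in c) can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the function name `displace_ray_samples` and the `flow_field` callable are hypothetical stand-ins for the per-scene optimized scene flow network, which in practice is a learned model queried per sample point.

```python
import numpy as np

def displace_ray_samples(samples, flow_field, t_render, t_obs):
    """Bend a camera ray by offsetting its sample points with scene flow.

    samples:    (N, 3) points sampled along the ray at render time t_render.
    flow_field: hypothetical callable (points, t0, t1) -> (N, 3) displacements,
                standing in for the per-scene learned scene flow field.
    Returns the sample points warped to observation time t_obs, so that a
    static-scene IBR backbone (GNT in the paper) can aggregate features
    from the source views as if the scene were static.
    """
    displacement = flow_field(samples, t_render, t_obs)
    return samples + displacement

# Toy example: a rigid constant-velocity flow moving everything along +x.
velocity = np.array([1.0, 0.0, 0.0])
toy_flow = lambda p, t0, t1: np.broadcast_to(velocity * (t1 - t0), p.shape)

ray_samples = np.zeros((4, 3))                       # points on one ray
warped = displace_ray_samples(ray_samples, toy_flow, 0.0, 0.5)
```

With this warp applied per source view, each observation is queried at positions consistent with a single static geometry, which is what lets a renderer pre-trained only on static scenes be reused for dynamic content.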

BibTeX

@misc{buesching2023flowibr,
  title         = {FlowIBR: Leveraging Pre-Training for Efficient Neural Image-Based Rendering of Dynamic Scenes},
  author        = {Marcel Büsching and Josef Bengtson and David Nilsson and Mårten Björkman},
  year          = {2023},
  eprint        = {2309.05418},
  archivePrefix = {arXiv},
}