A new frame interpolation algorithm leverages implicit flow encoding and a hyper-network to synthesize intermediate frames in dynamic scenes, specifically addressing the limitations of existing methods under varying illumination. While traditional flow-based approaches degrade when source images exhibit inconsistent brightness, this method utilizes pre-trained optical flow models robust to lighting changes. It employs a coordinate-based neural network to estimate intermediate flows by taking spatial and temporal coordinates as input. To ensure these flows are interpolatable, a hyper-network simultaneously encodes them within a constrained high-dimensional space. Comparative tests against state-of-the-art models like ABME and FILM demonstrate superior performance in both indoor and outdoor settings, effectively eliminating blurry estimations and structural artifacts in scenes with significant natural light shifts or complex object motion. This approach enables the generation of high-quality, captivating videos from near-duplicate images even when captured under inconsistent environmental conditions.
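The core idea above — a hyper-network emitting the weights of a coordinate-based network that maps spatial and temporal coordinates to an intermediate flow — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the latent code, layer sizes, and function names (`hyper_network`, `coordinate_mlp`) are assumptions, and the random projections stand in for learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def hyper_network(latent, hidden=32):
    """Hypothetical hyper-network: maps a per-scene latent code to the
    weights of a small coordinate MLP (sizes are illustrative)."""
    n_w1, n_b1 = 3 * hidden, hidden   # (x, y, t) -> hidden layer
    n_w2, n_b2 = hidden * 2, 2        # hidden layer -> flow (dx, dy)
    total = n_w1 + n_b1 + n_w2 + n_b2
    # Random projection stands in for the learned hyper-network weights.
    proj = rng.standard_normal((latent.size, total)) * 0.1
    params = latent @ proj
    w1 = params[:n_w1].reshape(3, hidden)
    b1 = params[n_w1:n_w1 + n_b1]
    w2 = params[n_w1 + n_b1:n_w1 + n_b1 + n_w2].reshape(hidden, 2)
    b2 = params[-n_b2:]
    return w1, b1, w2, b2

def coordinate_mlp(coords, w1, b1, w2, b2):
    """Coordinate-based network: query (x, y, t) -> intermediate flow."""
    h = np.tanh(coords @ w1 + b1)
    return h @ w2 + b2

latent = rng.standard_normal(16)        # assumed per-scene code
weights = hyper_network(latent)

# Query the flow at every pixel of a 4x4 grid at intermediate time t = 0.5.
xs, ys = np.meshgrid(np.linspace(0, 1, 4), np.linspace(0, 1, 4))
t = np.full_like(xs, 0.5)
coords = np.stack([xs, ys, t], axis=-1).reshape(-1, 3)
flow = coordinate_mlp(coords, *weights)
print(flow.shape)  # one (dx, dy) flow vector per queried coordinate
```

Because the temporal coordinate `t` is a continuous input, the same network can be queried at any intermediate time, which is what makes the encoded flows interpolatable rather than tied to fixed frame positions.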