Multi-Layer Depth of Field Rendering with Tiled Splatting

In this paper we present a scattering-based method to compute high-quality depth of field in real time. Relying on multiple layers of scene data, our method naturally supports settings with partial occlusion, an important effect that is often disregarded by real-time approaches. Using well-founded layer-reduction techniques and an efficient mapping to the GPU, our approach outperforms established approaches with a similarly high-quality feature set.

Our proposed algorithm first collects a multi-layer image, which is immediately reduced so that hidden fragments are kept only near depth discontinuities. The remaining fragments are further reduced by merging and then splatted to screen-space tiles. The per-tile information is sorted by depth and accumulated in order, yielding an overall approach that supports partial occlusion as well as properly ordered blending of the out-of-focus fragments.
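
To make the tiling and per-tile accumulation steps concrete, the following is a minimal CPU-side sketch of splatting fragments to tiles and resolving a pixel by sorted, ordered blending. It is an illustration under stated assumptions, not the paper's GPU implementation; the types and names (`Fragment`, `Tile`, `splatToTiles`, `resolvePixel`, `kTileSize`) are hypothetical.

```cpp
// Sketch of tiled splatting and ordered per-tile accumulation.
// Hypothetical data layout; the paper's actual GPU pipeline differs.
#include <algorithm>
#include <vector>

struct Fragment {
    float x, y;      // screen-space position
    float depth;     // view-space depth
    float coc;       // circle-of-confusion radius in pixels
    float rgba[4];   // premultiplied color and coverage
};

struct Tile {
    std::vector<Fragment> frags;  // fragments whose CoC disc overlaps this tile
};

constexpr int kTileSize = 16;

// Splat: bin each fragment into every screen-space tile its CoC disc touches.
void splatToTiles(const std::vector<Fragment>& frags,
                  std::vector<Tile>& tiles, int tilesX, int tilesY) {
    for (const Fragment& f : frags) {
        int tx0 = std::max(0, int((f.x - f.coc) / kTileSize));
        int tx1 = std::min(tilesX - 1, int((f.x + f.coc) / kTileSize));
        int ty0 = std::max(0, int((f.y - f.coc) / kTileSize));
        int ty1 = std::min(tilesY - 1, int((f.y + f.coc) / kTileSize));
        for (int ty = ty0; ty <= ty1; ++ty)
            for (int tx = tx0; tx <= tx1; ++tx)
                tiles[ty * tilesX + tx].frags.push_back(f);
    }
}

// Resolve one pixel: gather the tile's fragments that cover it, sort them
// front-to-back by depth, and composite with "under" blending so nearer
// out-of-focus fragments correctly occlude farther ones.
void resolvePixel(const Tile& tile, float px, float py, float out[4]) {
    std::vector<const Fragment*> hits;
    for (const Fragment& f : tile.frags) {
        float dx = px - f.x, dy = py - f.y;
        if (dx * dx + dy * dy <= f.coc * f.coc) hits.push_back(&f);
    }
    std::sort(hits.begin(), hits.end(),
              [](const Fragment* a, const Fragment* b) { return a->depth < b->depth; });
    float accum[4] = {0.0f, 0.0f, 0.0f, 0.0f};  // premultiplied RGBA
    for (const Fragment* f : hits) {
        float t = 1.0f - accum[3];  // remaining transmittance
        for (int c = 0; c < 4; ++c) accum[c] += t * f->rgba[c];
        if (accum[3] > 0.999f) break;  // early out once effectively opaque
    }
    for (int c = 0; c < 4; ++c) out[c] = accum[c];
}
```

Because fragments are accumulated in depth order after the per-tile sort, coverage from hidden layers contributes only where nearer fragments do not fully occlude it, which is what enables partial occlusion at silhouettes.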