On top of this, reflections are very hard to reproject. Since they are view-dependent, simply fetching the motion vector at the current pixel tends to make the reprojection "smudge" under camera motion. Here's a small video grab I took while playing Uncharted 4 (notice how the reflections trail under camera motion).

Last year I spent some time trying to understand this problem a little better. I first drew a ray diagram describing how a reflection could be reprojected in theory. Consider the goal of reprojecting the reflection that occurs at the incidence point v0 (see diagram below). To reproject the reflection which occurred at that point, you would need to:

- Retrieve the surface motion vector (ms) corresponding to the reflection incidence point (v0)
- Reproject the incidence point using (ms)
- Using the depth buffer history, reconstruct the reflection incidence point (v1)
- Retrieve the motion vector (mr) corresponding to the reflected point (p0)
- Reproject the reflection point using (mr)
- Using the depth buffer history, reconstruct the previous reflection point (p1)
- Using the previous view matrix transform, reconstruct the previous surface normal of the incidence point (n1)
- Project the camera position (deye) and the reconstructed reflection point (dp1) onto the previous plane (defined by surface normal = n1, and surface point = v1)
- Solve for the position of the previous reflection point (r) knowing (deye) and (dp1)
- Finally, using the previous view-projection matrix, evaluate (r) in the previous reflection buffer
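The last solve step can be sketched in isolation. Given the previous-frame reflector plane (normal n1, surface point v1), the previous eye position, and the reconstructed previous reflected point p1, the previous reflection point r follows from similar triangles: the incidence and reflection angles are equal, so r divides the segment between the two plane projections in the ratio of their heights above the plane. This is a minimal Python sketch with hypothetical names (the actual shader code would of course use float3 vector types), not the original implementation:

```python
# Tiny vector helpers (3-component tuples) so the sketch is self-contained.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

def solve_previous_reflection_point(eye, p1, n1, v1):
    """Point r on the plane (n1, v1) where a mirror reflection
    connects the eye to the reflected point p1."""
    # Signed heights of the eye and of p1 above the plane.
    d_eye = dot(sub(eye, v1), n1)
    d_p1 = dot(sub(p1, v1), n1)
    # Projections of both points onto the plane.
    eye_proj = sub(eye, scale(n1, d_eye))
    p1_proj = sub(p1, scale(n1, d_p1))
    # Equal incidence/reflection angles => r splits the in-plane
    # segment eye_proj -> p1_proj in the ratio d_eye : d_p1.
    t = d_eye / (d_eye + d_p1)
    return add(eye_proj, scale(sub(p1_proj, eye_proj), t))
```

Once r is known, transforming it by the previous view-projection matrix gives the UV at which to sample the previous reflection buffer.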

By adding a history depth buffer to Stingray and using the previous view-projection matrix, I was able to confirm that this approach can successfully reproject reflections. You can see in these videos that most of the reprojection distortion in the reflections is addressed:

Ghosting was definitely minimized under camera motion. The video below compares the two reprojection methods side by side.

LEFT: Simple Reprojection, RIGHT: Correct Reprojection

(note that I disabled neighborhood clamping in this video to visualize the reprojection better)

Unfortunately, keeping a copy of the depth buffer (and sampling it multiple times per pixel) is not really a feasible/appealing solution. But it was a good exercise to understand the problem.

So instead I tried a different approach. The new idea was to pick a few reprojection vectors that are likely to be meaningful in the context of a reflection. Originally I looked into:

- Motion vector at ray incidence
- Motion vector at ray intersection
- Parallax corrected motion vector at ray incidence
- Parallax corrected motion vector at ray intersection
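With a handful of candidate vectors like these, the remaining question is how to choose between them per pixel. One plausible heuristic (my assumption, not necessarily what shipped) is to reproject the history with each candidate and keep the sample that deviates least from the current frame's local neighborhood statistics, for example the neighborhood mean:

```python
# Hypothetical per-pixel selection among candidate reprojections:
# keep the history sample closest (in squared RGB distance) to the
# current frame's neighborhood mean. A real implementation might
# instead test each candidate against a neighborhood min/max clamp.
def pick_reprojection(current_neighborhood_mean, history_samples):
    # history_samples: one history color per candidate motion vector,
    # e.g. [incidence, intersection, parallax incidence, parallax hit].
    def score(color):
        return sum((c - m) ** 2
                   for c, m in zip(color, current_neighborhood_mean))
    return min(history_samples, key=score)
```

The appeal of this family of approaches is that it needs no extra depth history: all candidates are cheap to compute from data already available in the frame.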

Screen-space reflections are one of the most difficult screen-space effects I've had to deal with. They are plagued with artifacts that can often be difficult to explain or understand. In the last couple of years I've seen people propose really creative ways to minimize some of the artifacts that are inherent to SSR. I hope this continues!