Motion Blur


I note that we still need to render an animation to get motion blur, and that the first frame doesn't show it. Could you please add a workaround (e.g. automatically render a waste frame offscreen at lower SPP before the start of a motion-blurred sequence, and/or support motion blur on a still frame)?
 

That would be great! I second that, especially for still renders.
 
+ 1

- shouldn't this go in the wish list :wink:

- a previous (waste) frame would not be sufficient; all the f-curves need to be extended backwards smoothly. (I encountered this in Blender, where first and still frames are blurred but still look odd if the tangents weren't extrapolated flatly. That's why I learned to always start animating from the second frame and end one frame before the last.)

- a smooth shutter curve and exposure times >1 would be nice too :smile:
 
For the first frame I understand a simple extrapolation wouldn't suffice. But sometimes I would like to preview the motion blur in a frame in the middle of an animation, and this could be done easily if there was a way to prerender the previous frame with an SPP of, say, 1 :lol: and then render the frame I'm interested in at a higher SPP. For me that would be enough. I can always discard the first frame (by planning for it in the animation setup beforehand).
 

I’m not sure you are correct, since the time should be sampled stochastically. Was the motion blur problem in Cycles or the old renderer?
 
From what I found, Cheetah and Blender both do vector blurring, which would be the standard method for path tracers.
That means a moving point is treated as a line and the samples are spread along that line.
Where is that line derived from?
In Cheetah it is simply the connection from the position at the previous frame (n-1) to the current one.
For the first frame no previous position can be found, so no blurring happens.

In Blender the position goes from n-0.5 to n+0.5 (shutter time = 1), so the previous as well as the following frame are concerned (even more frames if the shutter time is > 2).

That results in the first frame getting only half the blur because the nonexistent previous frame doesn't contribute.
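
As a rough Python sketch of that difference (the intervals and the made-up f-curve are only for illustration, not the engines' actual code):

```python
import random

def sample_blur_positions(position_at, frame, shutter_interval, n=4, rng=random):
    """Draw a few blur samples: pick times inside the shutter interval
    around `frame` and evaluate the object's position at each time."""
    lo, hi = shutter_interval
    return [position_at(frame + rng.uniform(lo, hi)) for _ in range(n)]

def position_at(t):
    """Made-up f-curve: the object moves one unit per frame along x."""
    return (t, 0.0, 0.0)

# "Cheetah-style" interval: from the previous frame to the current one.
print(sample_blur_positions(position_at, frame=5, shutter_interval=(-1.0, 0.0)))
# "Blender-style" interval at shutter time 1: half a frame to either side.
print(sample_blur_positions(position_at, frame=5, shutter_interval=(-0.5, 0.5)))
# At frame 0 the Cheetah-style interval lies entirely before the animation
# starts, so there is no previous position to derive a blur vector from,
# and the Blender-style interval only has its second half available.
```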

I can't test Blender now because it's on an animation render right now (with motion blur of course :wink: ) but can provide a Cheetah animation.

It looks like Cheetah stores vector data only in animation mode, from one frame to the next, while in Blender the f-curve data are considered regardless of rendering mode.

The good thing with shutter times > 1 (= blur overlap) is that you can do slow motion effects after rendering by blending additional frames in between without the motion looking jerky.
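
As a minimal sketch of that retiming trick (the arrays and the plain cross-fade are only illustrative):

```python
import numpy as np

def inbetween(frame_a, frame_b, t):
    """Synthesize an extra frame between two rendered frames by cross-fading.
    With shutter times > 1 the motion trails of neighbouring frames overlap,
    so the blend doesn't leave visible gaps in the motion."""
    return (1.0 - t) * frame_a + t * frame_b

# 2x slow motion: insert one blended frame between every rendered pair.
a = np.zeros((4, 4, 3))      # stand-ins for two consecutive rendered frames
b = np.ones((4, 4, 3))
slowmo_sequence = [a, inbetween(a, b, 0.5), b]
```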

And from what I see in the test gif, Cheetah does vector blur smoothing by default.

But there is no way to have the first frame blurred without defining a previous one with different position data to derive a vector from.
 

Attachments

  • moblu.gif (199.6 KB)

Based on this, having a single previous frame would do the trick, contrary to what you said before, since each motion blurred frame relies only on the frame before it. C3D doesn't have any problems with f-curves on the first frames of a normal animation, so why would it here? The only real problem might be the first frame of an animation (frame 0), where no keys exist at negative frames, but I don't care about that. When rendering motion blur I'm usually picking a frame somewhere in the middle of an animation.

I'm not sure how treating each point as a vector (as you've described it) would actually work for a path tracer, since we're following photons, not geometry, and the photon either bumps into something or it doesn't. Vector blur is usually a post effect, implemented by rendering the velocity vector of a given pixel into a channel and driving motion blur with that data after the frame is rendered. This is decidedly not how an unbiased renderer works.
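
To illustrate what I mean by a post effect, here is a hedged Python sketch (the buffer names and the simple gather loop are made up, not any renderer's actual implementation):

```python
import numpy as np

def vector_blur(color, velocity, steps=8):
    """Post-process motion blur: for each pixel, gather colors along its
    screen-space velocity vector (stored in a separate render channel)
    and average them.
    color:    (H, W, 3) float image
    velocity: (H, W, 2) per-pixel motion in pixels per frame
    """
    h, w, _ = color.shape
    out = np.zeros_like(color)
    for y in range(h):
        for x in range(w):
            vx, vy = float(velocity[y, x, 0]), float(velocity[y, x, 1])
            acc = np.zeros(3)
            for i in range(steps):
                t = i / (steps - 1) - 0.5          # samples span -v/2 .. +v/2
                sx = min(max(int(round(x + t * vx)), 0), w - 1)
                sy = min(max(int(round(y + t * vy)), 0), h - 1)
                acc += color[sy, sx]
            out[y, x] = acc / steps
    return out

# Tiny usage example with made-up buffers.
img = np.random.rand(8, 8, 3)
vel = np.zeros((8, 8, 2))
vel[..., 0] = 4.0                                  # everything moves 4 px in x
blurred = vector_blur(img, vel)
```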

I assumed that the path tracer samples time stochastically (presumably using optimization tricks), but this might be very costly. A simple optimization would be to quantize time, build one scene graph per time interval, and pick one for each photon using a weighted random number. In fact you could pick real time values and bounce between the nearest quantum intervals each time a photon bounces, diffusing the approximation error so you wouldn't get any visible time "posterization".
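
Here's a rough Python sketch of that quantized-time idea (the names and numbers are made up, this is just to illustrate the weighting, not how any real renderer does it):

```python
import bisect
import random

def pick_time_interval(quantized_times, shutter_open, shutter_close, rng=random):
    """Pick a continuous time inside the shutter, find the two nearest
    pre-built scene graphs, and choose one of them with a probability
    given by the fractional position, which spreads the quantization
    error over many photons instead of banding it."""
    t = rng.uniform(shutter_open, shutter_close)
    i = bisect.bisect_right(quantized_times, t) - 1
    i = max(0, min(i, len(quantized_times) - 2))
    t0, t1 = quantized_times[i], quantized_times[i + 1]
    frac = (t - t0) / (t1 - t0)
    chosen = i + 1 if rng.random() < frac else i   # index of the scene graph to trace against
    return t, chosen

# Hypothetical setup: five scene graphs built at evenly spaced times.
times = [0.0, 0.25, 0.5, 0.75, 1.0]
for _ in range(3):
    print(pick_time_interval(times, shutter_open=0.0, shutter_close=1.0))
```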
 
Having not found any information about path tracing motion blur on the web, I'm just guessing here, but the behavior of the engines points towards geometry-based algorithms.

Firstly, neither Falcon nor Cycles can get it right on the first frame; and secondly, and more importantly, there is not much cost in additional render time. All temporal approaches that I have heard about slow things down a lot. Your suggestion of building scene graphs per time interval doesn't sound like something that works as easily and quickly as Falcon's motion blur.

...we’re following photons not geometry...

How could this be a contradiction?
Every path of a photon is of course a geometric construct; then a bounce happens and a stochastic calculation determines a new angle, the next leg of calculated geometry.
"Time" gets sampled as a bunch of paths, which in the end is an averaging over a distribution of geometric alternatives (which have resulted in RGB values).
I don't understand how you think you could evade geometry in path tracing algorithms; that seems impossible to me.
The stochastic choices being made may of course simulate temporal distributions, but these are always geometric distributions as well, because photons travel different paths at different times.

In my opinion the thing works similarly to DOF or reflection blurring.
In an unblurred image there are defined points of intersection.
With blurring, these points get replaced by defined areas which require a different sampling but not more rays; basically the new ray angle is calculated over a different space of possibilities.

With motion blur the static intersection points don't turn into defined areas but into defined lines (not even vectors, because the direction doesn't count). Instead of changing from 0-dimensional (defined point) to 2-dimensional (defined area), that's a step into 1-dimensional (defined line) math to calculate the sample space.
It adds a little calculation time, but similar to the other blurring algorithms, and that's what I see during rendering.
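
As a hedged sketch of what I mean (a made-up moving sphere and a simple hit test, not Falcon's actual code): the intersection target is no longer a fixed point but a point picked on the motion line, with the same number of rays as the unblurred case.

```python
import random

def ray_hits_moving_sphere(ray_org, ray_dir, center0, center1, radius, rng=random):
    """One blur sample: the sphere center is not a fixed point but a line
    segment from center0 to center1; each ray picks a position on that
    line and intersects as usual.  Same ray count as the unblurred case,
    only the sample space changes (a 1-D line instead of a 0-D point)."""
    u = rng.random()                                     # position along the motion line
    c = [a + (b - a) * u for a, b in zip(center0, center1)]
    oc = [o - ci for o, ci in zip(ray_org, c)]
    b = sum(d * o for d, o in zip(ray_dir, oc))
    disc = b * b - (sum(o * o for o in oc) - radius * radius)
    return disc >= 0.0                                   # did this sample hit?

# Example: a unit sphere that moves 2 units in x during the shutter.
print(ray_hits_moving_sphere((0, 0, -5), (0, 0, 1), (0, 0, 0), (2, 0, 0), 1.0))
```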

Maybe Martin can drop a hint how he did it :smile:
 
If Cycles and Falcon do temporal sampling, it's not curves, it's whatever is correct (which is as it should be for an unbiased renderer).

I'm not talking about "avoiding" geometry, by the way. I'm saying that the photon picks paths stochastically in space and time, and the geometry (as in surfaces of meshes) is what is found at the given point in space and time. A "point" (in geometry) isn't converted to a line or a curve; it's just a point at some specific place and time. Hence we follow photons, not geometry (i.e. meshes, points, etc.).
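
To put my point into a hedged sketch (everything here is made up for illustration): each photon path carries one stochastically chosen time, and the "geometry" is simply whatever each object's f-curve evaluates to at that instant.

```python
import random
from dataclasses import dataclass

@dataclass
class Ray:
    origin: tuple
    direction: tuple
    time: float                      # one stochastic time per photon path

def scene_at(scene_objects, time):
    """Evaluate every object's pose at one instant: nothing gets turned
    into a line or a curve; the photon simply sees the scene as it is
    at the time it happened to sample."""
    return [obj["position_at"](time) for obj in scene_objects]

# Made-up object whose position is an f-curve evaluated at the ray's time.
cube = {"position_at": lambda t: (2.0 * t, 0.0, 0.0)}
ray = Ray(origin=(0, 0, -5), direction=(0, 0, 1), time=random.uniform(0.0, 1.0))
print(ray.time, scene_at([cube], ray.time))
```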
 
The temporal-vs-geometry talk is just misleading; it all converges to the same pixel image :smile:

I made a new test: rotating a decentered cube by 360° each frame.
So at each frame the cube is at the same position, but in the time between frames it is on a circular path.

The weird thing is that playing the animation in the 3D view already shows a rotation!

But then the Falcon animation doesn't, which is expected if Falcon stores the position data of the last frame and then samples along the line from the last to the current position.

If there were real temporal sampling between frames (in my terms, sampling along the f-curve paths), we should see a full-circle blur.
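
To make the difference concrete, here is a small Python sketch of the two behaviors (the angles are made up; this is not Falcon's or Cycles' actual code):

```python
import math
import random

TWO_PI = 2.0 * math.pi

def angle_endpoint_lerp(t):
    """Blur derived only from the last and the current frame's pose:
    the cube rotates exactly 360 degrees per frame, so both endpoints
    are the same pose and the interpolated motion collapses to nothing."""
    angle_prev = 0.0
    angle_curr = TWO_PI % TWO_PI        # same pose as the previous frame
    return angle_prev + (angle_curr - angle_prev) * t

def angle_fcurve(t):
    """True temporal sampling along the f-curve: the angle keeps changing
    inside the frame, so the samples cover the full circle."""
    return TWO_PI * t

for sampler in (angle_endpoint_lerp, angle_fcurve):
    degrees = sorted(round(math.degrees(sampler(random.random()))) for _ in range(5))
    print(sampler.__name__, degrees)    # endpoint lerp: all zeros; f-curve: full circle
```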
 

Attachments

  • motionblur.jas.zip (5.8 KB)