Better documentation or even real-world units for camera properties

I think it would be awesome to use real-world units in camera properties such as field of view, shutter time, and aperture size.

An easy but super helpful first step would be to document what the values entered for these properties are actually based on, i.e. what units are being used. I couldn't find any information on that for the values mentioned above.

I'm using Cheetah3D once again to add CG elements to a live action music video. Thanks to HDRI light sources and backgroundless rendering this works quite beautifully. But when it comes to matching perspective and other aspects of the optical setup to the live action footage, to give the CG elements an appropriate sense of realism, I currently have to resort to trial and error. For that it would be veeeery helpful to know what units all these values are in, or better yet to let users enter them in common units (such as shutter speed in fractions of a second or of the frame length, field of view in degrees, or ideally the focal length of an emulated lens).
 
Hi

I remember having this discussion before ... Field of view is a real-world value (it's in degrees), while the focal length of the lens alone wouldn't help, because the resulting view depends on the size of the camera's backplate (i.e. it would only match if you are using a full-frame camera). You have to find out what the field of view of the real lens is. In theory you should be able to get the FOV from the lens manufacturer (usually on their website; the number is quoted for full-frame cameras) and then multiply it by the crop factor of your camera. The problem is that the crop factor is rounded, so you'd get something roughly right but not nearly accurate enough to be of any use.

If you get the correct FOV, you still need the correct distance from the camera backplate to the object; without those two it's guesswork. In the thread I mentioned there were some techniques for matching the Cheetah camera to a photo. What you could do is compute the FOV of the lens roughly and then fit it by eye to the picture.
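For what it's worth, the relation behind all of this is just trigonometry. Here is a minimal sketch (Python; the function names are mine) assuming the field of view is the horizontal angle and the backplate width is given in millimetres. Whether Cheetah's field-of-view value is actually the horizontal, vertical, or diagonal angle is exactly what the documentation request above is asking to clear up:

```python
import math

def field_of_view_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view in degrees for a lens of the given focal
    length on a sensor/backplate of the given width (36 mm = full frame)."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))

def cropped_width_mm(crop_factor, full_frame_width_mm=36.0):
    """Approximate sensor width derived from a (rounded) crop factor."""
    return full_frame_width_mm / crop_factor

# Example: a 50 mm lens on a 1.5x crop body gives roughly a 27 degree horizontal FOV
print(round(field_of_view_deg(50.0, cropped_width_mm(1.5)), 1))
```

As noted above, the rounded crop factor is the weak point: a small error in the assumed sensor width shifts the computed angle enough to spoil a perspective match.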

There was once an iOS app called LensLab. With it you could compute the real FOV for different common backplates (I never knew how accurate it was, or whether the sensor you actually use was among the supported formats). If it's not around anymore, you could probably find something similar.

I'm less sure about shutter time, i.e. whether it really is the value in seconds. Other than for motion blur it doesn't help you at all (and if that's what you need it for, you can find out quickly by testing). It won't help you with the lighting, because in Cheetah the shutter time has no effect on it (for that Martin would have to recreate a real-world camera and add a way to enter the ISO value).
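If the shutter value does turn out to be seconds, the conversion that matters for matching live-action motion blur is the shutter-angle relation, sketched below (the function name is mine; whether Cheetah's field really is in seconds is the open question):

```python
def shutter_time_s(frame_rate_fps, shutter_angle_deg=180.0):
    """Exposure time per frame, in seconds, from frame rate and shutter angle.
    A 180 degree shutter at 25 fps exposes each frame for 1/50 s."""
    return (shutter_angle_deg / 360.0) / frame_rate_fps

print(shutter_time_s(25.0))         # 0.02 s, i.e. 1/50 s
print(shutter_time_s(24.0, 172.8))  # also 0.02 s, the classic film look at 24 fps
```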

Aperture, on the other hand, is used for depth of field (DOF) only; it has no influence on the lighting in Cheetah. An f-stop number probably wouldn't help you much either, because in reality its effect depends on the backplate (and on the correct position of the camera). For animation, DOF means longer rendering times, so I'm not sure whether you use it for that, or whether you are trying to recreate the correct lighting with aperture and shutter time in Cheetah, which wouldn't work at all. Even if Cheetah could do this, you would still have to correct the exposure manually by eye, because with an HDR and Cheetah's lights you won't get exactly the same amount of light as in reality. If you do use the aperture for DOF, you have to set it by eye (or render a depth map and make the changes in post).
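To illustrate why an f-stop on its own isn't enough: in the standard thin-lens depth-of-field formulas the circle of confusion depends on the sensor size, so the same f-number gives a different DOF on a different backplate. A sketch under those textbook assumptions (it says nothing about what Cheetah's aperture value actually means):

```python
def dof_limits_m(focal_length_mm, f_number, subject_distance_m, coc_mm=0.03):
    """Near and far limits of acceptable sharpness for an ideal thin lens.
    coc_mm is the circle of confusion; ~0.03 mm is the value usually quoted
    for full frame, and it shrinks with smaller sensors."""
    f = focal_length_mm
    s = subject_distance_m * 1000.0          # work in millimetres
    hyperfocal = f * f / (f_number * coc_mm) + f
    near = s * (hyperfocal - f) / (hyperfocal + s - 2.0 * f)
    far = s * (hyperfocal - f) / (hyperfocal - s) if s < hyperfocal else float("inf")
    return near / 1000.0, far / 1000.0

# Example: 135 mm at f/2, subject 5 m away -> roughly 4.9 m to 5.1 m in focus
print(dof_limits_m(135.0, 2.0, 5.0))
```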

Please don't get me wrong, but it would help you a lot to read a book about photography to really understand what you are trying to recreate (especially the role of the sensor). Anyway, keep in mind that Cheetah's camera is an idealized full-frame camera with an ideal lens (a physically perfect lens that couldn't be built in reality). So there are no vignettes, no chromatic aberrations, etc. For example, in theory the DOF of a real camera should get deeper the smaller the aperture (the bigger the f-number), so f/40 should give you a larger portion of the picture in focus. In reality you get less DOF beyond a certain point, for example f/16 or so, and this can even differ visibly from one individual lens to another within a model series.
 
Just a note: because of COVID-19, the Professional Photographers of America site has made their catalog of online courses free for the next two weeks. I'm not sure if there are detailed mechanical camera courses or not, but understanding any aspect of photography can help 3D work.

https://www.ppa.com/education-unlocked
 
Being able to pick 35mm-equivalent focal lengths and apertures off a popup menu would be super convenient. I'm a photography nut, and figuring out how to replicate, say, a 135mm f/2 lens takes several visits to Google every time.
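(For reference, under a full-frame assumption the numbers for a 135mm f/2 work out roughly as below; whether Cheetah's aperture field wants a pupil diameter, a radius, or something else entirely is again undocumented.)

```python
import math

focal_length_mm = 135.0
f_number = 2.0
sensor_width_mm = 36.0  # full-frame width; use 24.0 for the vertical angle

# Horizontal field of view and physical entrance-pupil diameter of the lens
horizontal_fov_deg = math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_length_mm)))
pupil_diameter_mm = focal_length_mm / f_number

print(round(horizontal_fov_deg, 1), round(pupil_diameter_mm, 1))  # ~15.2 degrees, 67.5 mm
```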
 
how to replicate say a 135mm f2 lens is several.

The app is called LensLab and it's still available. There are others around that could help you (maybe even more).

With the aperture I usually don't try to recreate a certain lens; I just try to get a visually attractive DOF, sometimes thinking that it would be nice to have a lens that could do that in reality.
 
I don’t want to look up stuff or use an app, and I’m not looking to recreate the behavior of specific lenses; I just want the app we use to be more usable.
 
Pod, I hope you stamped your foot while writing this :)

Of course you're right. It would be great to have this (and a few hundred other things). But as long as it's not there, some app or some information about the lens could help you or anyone else achieve this now. For a full-fledged virtual camera in Cheetah you'll probably have to wait a few more years.
 
It seems that everything is already there in Cheetah; it's just a matter of linking some properties and doing the calculations internally. You have to click the camera object to get the field of view property, but then you have to click the Renderer to find the format dimensions of the image. From those it's easy enough to figure everything else out; the trouble is that Cheetah doesn't link those properties automatically. If you know the format, then the diagonal dimension is approximately the focal length of a normal lens for that image size. So if we had a couple of extra property boxes, and Cheetah calculated and linked everything internally, we could get an easy correlation between focal length, angle of view, and image size.

We could even have presets for various formats such as 35mm full frame, 16mm movie, maybe 35mm Academy, 16:9 HD, etc. I've even seen software offer presets for 4x5 and 8x10 view cameras for architectural work.
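A rough sketch of what that internal linking could look like; the preset names and dimensions below are only illustrative values, not anything Cheetah actually exposes:

```python
import math

# Illustrative preset table: format name -> (width, height) in millimetres
FORMATS = {
    "35mm full frame": (36.0, 24.0),
    "16mm movie": (10.26, 7.49),
}

def normal_focal_length_mm(width_mm, height_mm):
    """A 'normal' lens has a focal length roughly equal to the format diagonal."""
    return math.hypot(width_mm, height_mm)

def focal_length_from_fov_mm(fov_deg, sensor_dim_mm):
    """Invert the field-of-view relation: f = d / (2 * tan(fov / 2))."""
    return sensor_dim_mm / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

w, h = FORMATS["35mm full frame"]
print(round(normal_focal_length_mm(w, h), 1))       # ~43.3 mm
print(round(focal_length_from_fov_mm(40.0, w), 1))  # ~49.5 mm for a 40 degree horizontal FOV
```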
 