
Scale Model Photography

Depth of Field

One of the challenges of photographing scale models is keeping the entire model in focus.  Small objects like our models require the camera to be close to the subject relative to the focal length of the lens, and this can exceed the optical capability of the lens and camera to render a satisfactory depth of field.

Depth of field is defined as the “range of distance that appears acceptably sharp.”  There’s nothing precise about “range” ... “appears” ... or “acceptably.”  The reason depth of field can be confusing is that we’re talking about perception.

Some of the variables that can affect DOF are: focus distance; lens aperture; diffraction; depth of focus; sensor size; post-processing; and print size.

How these factors relate to each other can also be affected by the hardware.  Not all cameras, lenses and displays are the same, so while I’ll be talking about the individual factors that affect DOF, the information is basically a guideline.  Ultimately you have to do some playing around to maximize the results with the equipment you have available.

Depth of field doesn’t change abruptly from sharp to unsharp; the transition is gradual.  In fact, everything in front of or behind the actual focusing distance begins to lose sharpness, even if the loss isn’t perceived by our eyes.  The term circle of confusion (CoC) describes how much a point needs to be blurred in order to be perceived as unsharp.  When the CoC becomes perceptible to our eyes, the area is said to be outside the depth of field.  One rule of thumb is that a circle is considered acceptably sharp when it would go unnoticed in a standard 8x10 print viewed from 1 foot.  Camera manufacturers assume a CoC is negligible if it isn’t larger than 10 thousandths of an inch in the final print.  This is the standard they use for the depth of field markers you see on lenses.
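That print-based rule of thumb can be converted into a sensor-level CoC with a little arithmetic: divide the acceptable print blur by the enlargement factor from sensor to print.  A rough sketch (the full-frame sensor size is my assumption; the article doesn’t specify a format):

```python
# Estimate the sensor-level circle of confusion (CoC) implied by the
# 8x10-print viewing standard described above.
PRINT_COC_IN = 0.010        # acceptable blur circle on the print, in inches
PRINT_LONG_EDGE_IN = 10.0   # long edge of an 8x10 print
SENSOR_WIDTH_MM = 36.0      # full-frame sensor width (assumed hardware)

MM_PER_IN = 25.4
sensor_long_edge_in = SENSOR_WIDTH_MM / MM_PER_IN
enlargement = PRINT_LONG_EDGE_IN / sensor_long_edge_in   # roughly 7x

sensor_coc_mm = PRINT_COC_IN * MM_PER_IN / enlargement
print(f"enlargement: {enlargement:.1f}x")
print(f"sensor CoC: {sensor_coc_mm:.3f} mm")
```

The result lands in the same ballpark as the 0.030 mm figure commonly quoted for full-frame cameras.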

The two most common variables you can use to control depth of field are focus distance and the aperture of the lens.  Simply watching the lens depth of field markers will demonstrate how depth of field increases as the focus distance increases.  So rather than filling the viewfinder with the subject, you can easily increase the DOF by backing off from the model, or by zooming out a bit if you’re using a zoom lens.  However, you don’t want to move back so far that you end up with a low-resolution image.

This effect has nothing to do with the focal length of the lens.  If the size of the subject in the frame is kept constant, the DOF stays basically the same with any focal length.  What does change is how the DOF is distributed in front of and behind the subject.
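Both behaviors fall out of the standard thin-lens DOF formulas.  A sketch, assuming a full-frame CoC of 0.030 mm (the focal lengths and distances below are examples of mine, not from the article):

```python
def dof(focal_mm, f_number, dist_mm, coc_mm=0.030):
    """Return (near, far, total) depth of field in mm using the
    standard hyperfocal-distance formulas."""
    hyper = focal_mm**2 / (f_number * coc_mm) + focal_mm
    near = dist_mm * (hyper - focal_mm) / (hyper + dist_mm - 2 * focal_mm)
    far = (dist_mm * (hyper - focal_mm) / (hyper - dist_mm)
           if dist_mm < hyper else float("inf"))
    return near, far, far - near

# Backing off increases DOF: the same 100 mm lens at f/8.
for d_m in (0.5, 1.0, 2.0):
    total = dof(100, 8, d_m * 1000)[2]
    print(f"{d_m} m: total DOF {total:.1f} mm")

# Constant subject size in the frame: 50 mm at 0.5 m and 100 mm at 1.0 m
# give nearly identical total DOF -- only its distribution differs.
print(dof(50, 8, 500)[2], dof(100, 8, 1000)[2])
```

Doubling the focus distance roughly quadruples the total DOF, which is why stepping back a little goes such a long way.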

The second common variable is the aperture of the lens.  The smaller the aperture, the greater the depth of field.  There are two considerations when using small f-stops ... diffraction and depth of focus.  Diffraction is an optical effect that occurs when light bends as it passes through a small opening.  All lenses exhibit some amount of diffraction, and it increases with smaller f-stops like f/22 or f/32.  Not all lenses handle diffraction the same, and every lens has its sweet spot, which is generally two stops down from its maximum aperture.

The second consideration is depth of focus.  A rudimentary way to think about depth of focus is to think of it as depth of field, but between the lens and the sensor.  The angle of the light and how well it’s focused on the sensor plane creates its own sort of circle of confusion, and how this relates to the individual pixels on the sensor can affect image sharpness.  Large apertures create wide cones of light with a narrower depth of focus, while smaller apertures produce a narrower cone with a greater depth of focus.

Another way to affect how light is focused on the sensor is by manipulating the lens plane.  The most common example is a large format view camera; however, the same mechanics can be adapted to a small format camera with a bellows and rails, or to a lesser degree with a specialized lens known as a tilt-shift lens.

Even though we do our best to maximize DOF optically ... how the image is manipulated in post-processing can mitigate many of our limitations.  For instance, simply reducing the original image size has the effect of sharpening the entire image.  An image that looks soft at a width of 5500 pixels will appear sharper when reduced to 1200 pixels ... and if the image is destined for the web, it may look just fine.
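The sharpening effect of downsizing is easy to demonstrate on a one-dimensional edge: a blur that spans many pixels at full size spans only a few after reduction.  A toy sketch (the edge widths and reduction factor are illustrative numbers of mine):

```python
import numpy as np

# A soft edge that ramps from black to white over 24 pixels at full size.
full = np.concatenate([np.zeros(100), np.linspace(0, 1, 24), np.ones(100)])

def downsample(signal, factor):
    """Naive box downsample: average non-overlapping blocks of pixels."""
    n = len(signal) // factor * factor
    return signal[:n].reshape(-1, factor).mean(axis=1)

small = downsample(full, 4)

def transition_width(signal, lo=0.1, hi=0.9):
    """Count pixels spent between 10% and 90% of the edge."""
    return int(np.count_nonzero((signal > lo) & (signal < hi)))

print(transition_width(full), transition_width(small))  # -> 18 4
```

The same blur that occupied 18 pixels now occupies 4, so to the eye the reduced image reads as sharp.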

Another method for improving depth of field is to use a series of images merged together in a process called focus stacking.  I cover this technique at the end of the video on depth of field.
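The core idea behind focus stacking can be sketched in a few lines: measure local sharpness in each frame, then keep each pixel from whichever frame is sharpest there.  Real stacking tools also align the frames and smooth the selection map; this toy version, written from the general principle rather than the video, skips both:

```python
import numpy as np

def local_sharpness(img):
    """Per-pixel sharpness: absolute response of a 4-neighbour Laplacian."""
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def focus_stack(images):
    """For each pixel, take the value from the frame that is sharpest there."""
    stack = np.stack(images)                              # (n, h, w)
    sharpness = np.stack([local_sharpness(im) for im in images])
    best = np.argmax(sharpness, axis=0)                   # winning frame per pixel
    rows, cols = np.mgrid[0:stack.shape[1], 0:stack.shape[2]]
    return stack[best, rows, cols]
```

Given one frame focused on the front of the model and one on the rear, the merged result keeps the in-focus regions of both.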

Color Balance

The goal here is to develop a workflow that will capture the maximum amount of information with the most accurate color rendition.  Because there are so many variables in how an image is viewed, we need to be tied to a reliable control.  Proper white balance at the time of exposure is the best way to capture all the information and maintain the correct relationship between the three color channels.  Rather than relying on auto white balance or a manufacturer preset, the most accurate approach is the camera’s custom white balance function.  This lets the camera analyze the color information from a neutral gray card exposed under the actual lighting and adjust the color balance to yield a neutral gray image.  The camera stores this setting and uses it to adjust any further exposures.
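Numerically, the gray-card correction amounts to computing per-channel gains that make the card read neutral, then applying those gains to every pixel.  The camera does this on raw data; the numpy sketch below (with made-up patch values) only illustrates the arithmetic:

```python
import numpy as np

def white_balance_gains(gray_patch):
    """Per-channel gains that map the gray card's average RGB to neutral."""
    avg = gray_patch.reshape(-1, 3).mean(axis=0)   # mean R, G, B of the card
    return avg.mean() / avg                        # equalize the three channels

def apply_gains(img, gains):
    return np.clip(img * gains, 0.0, 1.0)

# A gray card photographed under warm light: red reads high, blue reads low.
card = np.ones((8, 8, 3)) * np.array([0.6, 0.5, 0.4])
gains = white_balance_gains(card)
balanced = apply_gains(card, gains)                # card now renders neutral gray
```

Once the gains are known, applying them to the full frame restores the correct relationship between the channels for everything shot under that light.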

The video below goes into the details of what’s happening in the camera and how you can handle color correction in your video editing.