I’ve been putting off covering this topic because there are so many parts to it that I worried I’d end up just spouting technical jargon and confusing everyone. With that in mind, I focussed on probably the most confusing and important part of the whole crop vs full frame sensor debate for portrait photographers in this week’s video, but I want to go deeper in this post. The video is here:

Let’s start at the beginning. The sensor is the light-sensitive surface inside your camera that reads the image coming through the lens and records it as pixel data. When we talk about full frame or cropped cameras, we’re actually talking about the sensor – the sensors are different sizes.

The ratio between the dimensions of each sensor, with full frame being the industry standard (the same dimensions as a frame of 35mm film – no coincidence), is the “crop factor” – a term you may know well. Here’s a rough explanation of this in practice (apologies for my not-straight arrows):

As you can see, 36 divided by 23.6 is roughly 1.53, which is usually rounded further to 1.5. So this camera has a crop factor of 1.5x. Knowing this number is sometimes useful, but usually it’s just a number that exists and there it is.
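If you like, that maths fits in two lines of Python – a quick sketch, using the 36mm full frame width and the 23.6mm APS-C width from the example above:

```python
# Crop factor is simply the ratio of the full frame sensor width
# to the smaller sensor's width (diagonals give almost the same number).
FULL_FRAME_WIDTH_MM = 36.0  # full frame / 35mm film width
APSC_WIDTH_MM = 23.6        # the APS-C width used in the example above

crop_factor = FULL_FRAME_WIDTH_MM / APSC_WIDTH_MM
print(round(crop_factor, 2))  # → 1.53
```

Swap in your own sensor’s dimensions (they’re in your camera’s spec sheet) to get its exact factor.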

Now, where this becomes relevant is when you use full frame lenses on crop sensor cameras – or really almost any lens on a cropped sensor camera. Stick with me here; there are exceptions and inventions to help us out, but I’ll get to those in a second.

Lenses have circular openings but sensors are rectangular. The image comes into the camera as a circle, and the sensor only sees the parts that fall within its four straight sides – the extra curvy bits are discarded, never read. With a full frame lens on a full frame sensor, the image circle ends up just about perfectly sized, with as little “chopped” off as possible.

However, when you pop that lens onto a cropped sensor, the image circle reaching the sensor is exactly the same size – that doesn’t change. All you end up with is a smaller rectangle picking out the middle section, while everything outside it is discarded. Again, a diagram from me to explain:

There are a number of impacts that this ^ has on you and your photography. One is that with lots of the image circle spilling outside of the sensor, you’re wasting some light – the smaller sensor collects less of it in total. That doesn’t actually make your images darker at the same exposure values (exposure is measured per unit of sensor area), but it does tend to mean noisier images from the same scene once you view them at the same size.

This is a consideration, and one you should keep in mind, but for me and the questions I get asked – which are almost exclusively to do with bokeh and backgrounds – it has another implication: the literal crop.

So we’ve established that the sensor is smaller but the image gets thrown at it exactly the same – this means that if you (with a crop sensor) want to get the entire dog in the frame with the same set-up and composition as me (on a full-frame camera), you need to be further away. Using the screengrabs from the video at the top, this crop in is about this much:

Full frame left, cropped sensor right.

Your focal length isn’t changing – it is not changing. Yes, the reach is similar to 1.53x whatever focal length the lens is, but the depth of field and bokeh rendition still behave like the lens you’re actually using, not like a genuinely longer one. It’s not the focal length that is changing, it’s the amount of the image that is being discarded by the sensor (see the second diagram earlier).

OK, so we get that – but what does this actually mean? Well, you know that getting blurry backgrounds is a mixture of things, my favourite and most important being relative distances. The distance between you and the subject has a dramatic effect on the background blur, along with the distance between the subject and the background.

So using this knowledge and applying the crop sensor knowledge, we can already understand that when you move further away to take the same shot in the same location, you change those relative distances and, therefore, the blurriness of the background. Like so:
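You can put rough numbers on this with the standard thin-lens background-blur approximation. This is just my sketch, not anything from the video – the 85mm f1.8 lens and the distances below are made-up example values:

```python
def background_blur_mm(focal_mm, f_number, subject_m, background_m):
    """Approximate blur-disc diameter on the sensor (in mm) for a point
    in the background, using the thin-lens approximation."""
    f = focal_mm / 1000.0  # focal length in metres
    # blur disc = f^2 / (N * (s - f)) * (D - s) / D, all in metres,
    # where s = subject distance and D = background distance
    blur_m = (f * f / (f_number * (subject_m - f))) * (
        (background_m - subject_m) / background_m
    )
    return blur_m * 1000.0

# Same lens and scene: subject 2m away, background 2m behind them.
near = background_blur_mm(85, 1.8, subject_m=2.0, background_m=4.0)
# Step back to 3m (roughly a 1.5x crop's framing); background stays
# 2m behind the subject, so it's now 5m from the camera.
far = background_blur_mm(85, 1.8, subject_m=3.0, background_m=5.0)
print(near, far)  # stepping back shrinks the blur disc
```

The exact numbers don’t matter; the point is that `far` comes out roughly half of `near` – stepping back to re-frame on a crop body genuinely costs you background blur.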

So what can you do? Well, 3 things actually:

  • Move the subject further away from the background as you move further away from them
  • Use a Speed Booster to shrink the image circle down to fit the smaller sensor
  • Divide both the focal length and the aperture by the crop factor to find the equivalent lens

That last one is probably a bit “whaaaaaa?!” – let’s break it down.

If I’m shooting full frame at 200mm, f2.8, then to get the “same” image, you just need to do the maths:

  • 200mm divided by a 1.53 crop factor is about 130mm.
  • 2.8 divided by 1.53 is about 1.8.

So for us to have the same image in this scenario, you need to be using a 135mm f1.8 lens (the closest real option) shot at f1.8 to match up.

This crop and aperture factor calculation is pretty reliable – except when it isn’t possible. Say I’m on the 105mm f1.4 at f1.4 with my full frame camera:

  • 105mm divided by 1.53 is about 68mm
  • 1.4 divided by 1.53 is about 0.9

68mm f0.9 lenses do not exist and if they did, they’d be CRAZY MONEY.

So yeah, for me, this is the real impact of crop vs full frame cameras in portraiture. There’s a lot more to the physics and the tech but really, is that needed? If you have a crop sensor camera, no problem! Just be hyper-aware of your distances 🙂