
How do you auto-focus?



As camera manufacturers keep adding more and more focus points to their cameras, I'm starting to wonder how reliable these multi-point focus systems really are. I find that they often yield unexpected results, even when all the focus points are activated. Consequently, I find myself using good old "central area" AF and selective spot focusing (where you move the focus point around) more often. The results are more predictable.

 

So, how do you like to auto-focus? Are multi-point AF systems with dozens of points a real improvement or just another marketing gimmick?


Using my Canon DSLR I generally use centre-point focus and lock the focus. It is possible to set the camera up so that the focus is independent of the shutter release, assigning a button on the back of the camera for the purpose. I find this is the best method for static subjects, or those with a predictable trajectory. Relying on multi-point focus places the focus point in the lap of the gods. For randomly moving subjects, the grandchild for example, I reset the camera to operate the focus from the shutter release and hope for the best. :rolleyes:

 

It is regrettable that the NEX range does not appear to allow the user to program a "back button", so you have to hold the shutter release half down to hold the focus. However, I mostly use manual focus lenses on the NEX, where the combination of contrast detection and 10x magnification enables accurate focus most of the time.

 

I still get some out-of-focus shots, for example if the light is difficult or the sun is strong and shining into the viewfinder (it could do with an eyecup). Despite this, I find that I tend to get more keepers with the manual lenses, as I know precisely where the point of focus is.


The NEX-6 is supposed to have something like 99 focus points. Is this a good thing? The gods are usually capricious to say the least.


My knowledge of optics is pretty skimpy, but how -- in a three-dimensional world -- is it possible to focus simultaneously on several areas at different distances from the camera lens? I can't figure out how these automatic multiple point systems are supposed to work. Obviously, I'm missing something.


I used to use the centre point and then frame the image, but with moving subjects, using new Nikon DSLRs, I am using AF and moving the focus point in a predictive way. To my surprise it is working really well, and after getting used to doing it this way I'm getting many more sharp images of moving subjects.


Phase Detection Autofocus – a method to approximately put the plane of focus somewhere near an object approximately selected by a point in the viewfinder that approximates the location of a dedicated sensor in the camera which is approximately calibrated to the camera’s image sensor. See also, Depth of Field.

 

Roger Cicala

    in The Cynic’s Devil's Photography Dictionary

 

Read more here

;-)

 

wim


exactly (approximately speaking)


Thanks a lot, John. Something else for me to worry about in my workflow.  :(

 

With my Nikons, I mostly used Spot metering, but with my newer NEX cameras I tend to use Multi for both focusing and my exposure metering. I could not find the thread, but I believe David K suggested that I use Multi for my exposure when I was complaining about working with contrasty light up in Central Park . . . if I've got this wrong, forgive me, David. As I've gotten older, getting things wrong is what I do best.  :wacko:


On the Nikons I use Spot Metering, and if I am looking for a lot of DOF I will try to focus a third of the way up the frame. Or, if the camera is on a tripod, I have a table on an iPad for manual focusing that gives more accurate DOF results: hyperfocal distances.

 

CAUTION: You can't use normal lens tables or the lens barrel markings for hyperfocal focusing with a D800 and use the images at full size. The depth of field is significantly reduced in comparison to a D700, for example. If you do use the standard tables, then you need to downsize the image to around 5,000 pixels on the long side at most, or better still to around D700 size (4,250 I think).

 

Check it out. Take a shot of a subject where everything is beyond the hyperfocal distance (essentially infinity), on a tripod, using hyperfocal focusing, and then the same shot focused at infinity. Examine the two images together at 100%. The infinity shot will be sharp, the other not.

 

I learnt this the hard way, assuming that what I had always done would work with a D800. It led to my first and only QC fail in several years. I was able to rescue my images by downsizing but I did some serious testing before I did anything else.
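
For anyone curious about the arithmetic behind those tables, here is a rough sketch in Python. The hyperfocal formula is the standard one; the 0.030 mm circle of confusion is the conventional full-frame value most tables assume, and the roughly 0.010 mm alternative (about two D800 pixel pitches) is only an illustrative "judged at 100%" criterion, not an official figure.

```python
# Rough sketch of the arithmetic behind hyperfocal tables.
# H = f^2 / (N * c) + f, where f = focal length, N = aperture, c = circle of confusion.
# 0.030 mm is the conventional full-frame CoC; ~0.010 mm (about two D800 pixel
# pitches) is an illustrative "viewed at 100%" criterion -- both values are assumptions.

def hyperfocal_distance_m(focal_length_mm: float, aperture: float, coc_mm: float) -> float:
    """Return the hyperfocal distance in metres."""
    h_mm = (focal_length_mm ** 2) / (aperture * coc_mm) + focal_length_mm
    return h_mm / 1000.0

# Example: a 24 mm lens at f/8
print(hyperfocal_distance_m(24, 8, 0.030))  # ~2.4 m -- what a standard table gives
print(hyperfocal_distance_m(24, 8, 0.010))  # ~7.2 m -- if sharpness is judged at 100% on a D800
```

Focusing at the distance the standard table gives and then inspecting the D800 file at 100% is essentially the failure mode described above; downsizing relaxes the effective criterion back towards the table's assumption.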


Didn't mean to add an extra burden to your workflow, Ed. I just find that with the NEX-3's auto multi-point focus, I tend to get some unwanted surprises -- i.e. subjects on a plane either behind (usually) or in front of the main subject end up being in better focus. This can happen even when all of the camera's focus points light up on the main subject. David might have been referring to the "multi" metering mode as opposed to the focusing mode. I can't remember either. However, I do recall David mentioning that contrast detect AF has a tendency to "back focus."


 

 

"The depth of field is significantly reduced in comparison to a D700, for example."

 

 

My understanding of the consistent nature of the laws of physics makes it impossible for me to understand how two cameras with the same size sensor and the same lens, at the same camera and lens settings, aren't going to have, projected onto the sensor surface, an image with identical depth of field. How does the image projected onto the sensor surface change from that point to the image shown by one of these cameras over the other?

 

dd


 

 

They don't have the same size sensor. The D800 has a 36MP sensor whereas the D700 is 12MP. The differences in practice are extremely obvious when examined at 100%.


 

 

I meant physical size, the one that would be of interest to the laws of physics. And they DO have the same size sensor to 0.1mm tolerance: 35.9 x 24mm versus 36.0 × 23.9mm.

 

I fail to see how the image, at the point of falling on the surface of the sensor (or any surface for that matter) is going to change because under the surface there are differences. IF this is true, there is an explanation that fits the laws of physics . . . the number of photo-sites on a sensor is not relevant, surely.

 

dd


 

 

It's because the blur circle lies across more pixels on a 36MP sensor compared to a 12MP one.

 

When viewed at 100%, the blur is more obvious.

 

And that's why downsampling works as a remedy.
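
To put rough numbers on that, here is a small sketch. The sensor widths and horizontal pixel counts are the published D700/D800 figures; the 0.030 mm blur circle is purely illustrative.

```python
# Same optical blur, different pixel counts.
# Sensor widths and horizontal pixel counts are the published D700/D800 figures;
# the 0.030 mm blur circle is just an illustrative amount of defocus.

def pixel_pitch_um(sensor_width_mm: float, pixels_across: int) -> float:
    """Width of one photosite in micrometres."""
    return sensor_width_mm * 1000.0 / pixels_across

def blur_span_pixels(blur_mm: float, sensor_width_mm: float, pixels_across: int) -> float:
    """How many pixels a blur circle of the given diameter spans."""
    return blur_mm * 1000.0 / pixel_pitch_um(sensor_width_mm, pixels_across)

blur = 0.030  # mm, identical optical blur on both sensors
print(blur_span_pixels(blur, 36.0, 4256))  # D700 (12 MP): ~3.5 pixels
print(blur_span_pixels(blur, 35.9, 7360))  # D800 (36 MP): ~6.2 pixels -- looks softer at 100%
```

Downsampling the D800 file to roughly D700 dimensions spreads that same blur over about the same number of pixels again, which is why it works as a remedy.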


 

Russell's explanation is qualitatively correct, I'm sure. It's to do with circles of confusion and viewing distance, I expect. Too busy right now taking pictures and planning how I'm going to get off this mountain to investigate the maths, but it is a matter of simple observation. So if the laws of physics fail to explain it, then there is a problem with our understanding of the laws of physics (not again!).

 

Another thread hijacked. Sorry John.


If you want quantitatively correct too ;) - all things being equal, and for a blurred vertical or horizontal edge, the blur will span about 1.73x as many pixels on a 36.2MP sensor as on a 12.1MP one.

 

The same applies to CA.

 

This is one of the reasons why the D800/E is such a cruel mistress when it comes to lens quality and technique.

 

Pixel peeping's bad, m'kay?
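
A quick back-of-the-envelope check of that 1.73x figure, using the pixel counts quoted above: for the same sensor area, linear pixel density scales with the square root of the total pixel count.

```python
# Where the 1.73x comes from: for the same sensor area, linear resolution
# scales with the square root of the total pixel count.
ratio = (36.2e6 / 12.1e6) ** 0.5
print(round(ratio, 2))  # 1.73 -- a blurred edge spans ~1.73x as many pixels on the D800
```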


There ya go, the laws of physics still stand :-) I knew someone could and would explain it thus :-) Thanks Russell.

 

I've done a lot of reading on this, and you're right of course, the circle of confusion has an effect . . . as does viewing distance (once printed, especially) . . . but for us simple folk, there seems to be something approaching consensus that when viewed at "normal" size and at "normal" viewing distance the DOF is the same (to the eyes of the observer). This does not argue against the principle explained above, but it does suggest, for us simple folk who might want to avoid the harsh mistress's hand-maiden (pixel peeping) and instead view at full-screen and no bigger, that advice to totally ignore DOF indicators on lenses is a tad extreme. It seems that for normal human vision, prints at or around 10" x 8" (or their decimal equivalents) from either a D700 or a D800 have little, if any, noticeable DOF difference.

 

My reading bears out your description of the harsh mistress too, especially as regards accuracy of focusing. If focusing was critical with the D700, it's hyper-critical with the D800. I'm glad my D700 and occasionally hired D4 are adequate for my photographic needs.

 

dd
