This is another of those concepts I keep seeing presented in videos. Videos by respected and knowledgeable photographers and retouchers, who otherwise demonstrate a complete and thorough understanding of the concepts they are explaining. For some reason though, they often have a blind spot when it comes to this one. It’s one of those where, for most images, it probably doesn’t matter too much if you don’t understand what is truly happening. However, it makes my eyes twitch (in a sort of Herbert Lom in the Pink Panther sort of way) whenever I hear this being said, so here’s my $0.5 worth on the subject 🙂
It concerns making adjustments to colour in an image, usually using the curves adjustment tool (though this is true of any tool that changes the brightness levels of individual colour channels). I’m going to demonstrate using curves. The claim is that by lowering the contribution of one colour channel you increase the contribution of the opposite colour channel. The curves tool in Photoshop allows adjustment of the Red, Green and Blue (RGB) channels separately, and of all three together (for an overall adjustment in brightness). This leads to questions about adjusting other colours such as Yellow. Red, Green and Blue are the opposites of Cyan, Magenta and Yellow: RGB is the opposite of CMY, and C, M and Y are made by mixing R, G and B. This is all good, however every explanation I’ve seen online then goes off the rails. Let’s take Yellow as the colour you need to increase. There is no Yellow channel in the curves tool (or levels etc.). All the tutorials I’ve watched on colour tell you to do this by reducing blue. This will not work, and here’s why:-
Here’s the test bed we’ll be using to explore this: I have a black background (so no brightness at all in any of red, green or blue). We have a Yellow patch, a white one and a blue one. I’m going to use a curves adjustment to increase or decrease the contribution of each colour channel, and in this control image you can see it’s completely linear in all channels (and so having no effect).
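To see why reducing blue can’t add yellow, here’s a tiny sketch of what zeroing the blue channel actually does to each of those patches (hypothetical 8-bit values, nothing to do with Photoshop’s internals):

```python
def reduce_blue(pixel, factor=0.0):
    """Scale the blue channel of an (R, G, B) pixel; factor=0.0 removes it entirely."""
    r, g, b = pixel
    return (r, g, int(b * factor))

patches = {
    "black":  (0, 0, 0),        # no brightness in any channel
    "yellow": (255, 255, 0),    # full red + green, no blue to take away
    "white":  (255, 255, 255),
    "blue":   (0, 0, 255),
}

for name, rgb in patches.items():
    print(f"{name:>6}: {rgb} -> {reduce_blue(rgb)}")
```

The black patch stays black, the blue patch goes black, the yellow patch is untouched, and only the white patch turns yellow – because it already contained full red and green. Removing blue can only reveal yellow that is already there; it never adds anything.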
Continue reading “No, taking away blue does not add yellow”
I started to write this one up as the next “photography myths” post and had planned some demo shots; however, Neil Van Niekerk explained it all brilliantly on his blog, Tangents, back in 2009. Go there now and read it! Neil’s blog is full of great articles on using flash, and a whole boatload of other photographic technique. Here’s my summary, but for more detail – read Neil’s blog!
The next “myth and misdirection” you will come across as a photographer learning how to use flash, especially with natural/ambient light is this phrase:-
“Shutter speed controls ambient, aperture controls flash only”
Which is nonsense. If you stop down your aperture, it affects all light in the scene – there’s no special “back door” for the ambient light. The whole picture will get darker.
For manual controlled flash power within normal x-sync range of shutter speeds:-
- Shutter speed controls ambient light
- Aperture controls flash and ambient light
- ISO controls flash and ambient light
- The flash power adjustment buttons on your light control… flash!
For TTL automatic flash exposure systems (camera on manual, flash on TTL) the flash exposure is constant, as long as it is within the bounds of the flashgun to supply enough light, and the camera and subject remain reasonably still in relation to each other. Close down the aperture? The flash puts out more light, so the flash exposure is the same. Raise the ISO? The flashgun reduces its power, so the flash exposure remains the same. Move the light? The flashgun adjusts, and the flash exposure remains constant. There is no automatic system compensating the ambient light though, so the ambient exposure continues to change with shutter speed, aperture and ISO as before. Stop down the aperture? The ambient exposure reduces – flash exposure stays the same.
So for TTL flash with normal x-sync:-
- Shutter speed controls ambient light
- Aperture controls ambient light
- ISO controls ambient light
- Flash exposure compensation controls – flash.
and finally, for “high speed sync” flash where the flash pulses to cover the entire shutter operation, the flash effectively becomes a continuous light so:-
- Shutter speed controls ambient light and “flash”
- Aperture controls ambient light and “flash”
- ISO controls ambient light and “flash”
- Flash power controls – “flash”.
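The manual-flash case above can be sketched as a toy exposure model (my own back-of-the-envelope arithmetic, not anyone’s published formula): ambient light accumulates for the whole shutter time, while a manual flash pulse within x-sync lands entirely inside the opening, so the shutter term simply drops out of the flash exposure.

```python
def ambient_exposure(shutter_s, f_number, iso):
    # Ambient light accumulates for the whole shutter time, and scales with
    # aperture area (proportional to 1/N^2) and sensor gain (ISO).
    return shutter_s * iso / f_number ** 2

def manual_flash_exposure(flash_power, f_number, iso):
    # Within x-sync the whole pulse fires while the shutter is open, so
    # shutter speed does not appear in the flash term at all.
    return flash_power * iso / f_number ** 2

base_ambient = ambient_exposure(1 / 200, 8, 400)
base_flash = manual_flash_exposure(1.0, 8, 400)

# Halve the shutter time: ambient drops a stop, flash is untouched.
print(ambient_exposure(1 / 400, 8, 400) / base_ambient)    # 0.5
print(manual_flash_exposure(1.0, 8, 400) / base_flash)     # 1.0

# Stop down from f/8 to f/11: both drop together by about a stop.
print(ambient_exposure(1 / 200, 11, 400) / base_ambient)   # ~0.53
print(manual_flash_exposure(1.0, 11, 400) / base_flash)    # ~0.53
```

For TTL, imagine the flashgun re-solving `flash_power` after every settings change so the flash term stays constant; for high speed sync, the pulse train spans the whole shutter travel, so `shutter_s` re-enters the flash term too.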
It’s no surprise newcomers to photography get confused when so much of the received wisdom they hear is not quite right. Alongside “aperture controls flash, shutter speed controls ambient”, this is one of the most ubiquitous myths doing the rounds. The converse is also untrue: wide-angle lenses do not distort (at least not because of the focal length; they may of course have defects in their design and manufacture).
“But” I hear you say “if I take a picture of someone with a wide angle, their features look all distorted, they have a big nose and their entire head looks like a football! If I use a telephoto lens, they look much more natural.”
Well, before I tell you what’s really going on, here are two images of my friend Gary, who bravely volunteered to have his ruggedly handsome features pulled around for the sake of science:-
Continue reading ““Longer lenses compress the scene””
I watched Tony Northrup’s video on how to interpret the test scores on DxOMark (click here to watch it). Tony and Chelsea’s videos are always well researched and I recommend them as a source of objective information. Moreover, Tony takes a scientific approach to the research. What does this mean? Well, science is all about fact and evidence, and if you find evidence that contradicts your hypothesis, you need a new, or at least a modified, hypothesis. Tony does this – if he finds evidence that what he thought before was wrong, he changes what he says. Not everybody can grasp this, as can be seen in the comments below his videos 😛
In this video he observed from DxO’s test charts that, especially for the sensors in Nikon bodies, there was almost a 1:1 trade-off in dynamic range for every stop you gained in sensitivity – so in theory you were not really gaining anything. E.g., if you shoot at ISO 100, and then the same thing again at ISO 200, you gained a stop of exposure but lost a stop of dynamic range. The extra dynamic range in the ISO 100 shot allows you to bring it up a stop in post to match the exposure of the ISO 200 shot, with pretty much the same results.
I have a bunch of Nikon bodies so I thought I’d test this. I used my D810 for these test shots. I took three shots in manual exposure. One at 1/320th of a second, f/8 and ISO 3200. Then I took the same shot at ISO 100 and added 5 stops of exposure to it in Lightroom. As a reference I then kept it at ISO 100 and dialled in 5 stops more exposure time to get a clean shot at 1/10th of a second, f/8 and ISO 100. Here they are:-
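As a quick sanity check on that arithmetic (both ISO and exposure time double per stop):

```python
from math import log2

def stops(a, b):
    """Number of stops between two settings on a scale that doubles per stop."""
    return log2(b / a)

print(stops(100, 3200))        # 5.0 stops of ISO gain
print(stops(1 / 320, 1 / 10))  # ~5 stops of extra exposure time
```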
Continue reading “ISO: does it really matter?”
The website for some very interesting new flashguns from MagneFlash mentions more than once that they output more light at faster shutter speeds than regular studio lights. Now, conventional wisdom is that a run-of-the-mill studio strobe takes about 1/900th of a second to output its light, which is a lot quicker than the length of time the shutter is open below max sync speed (let’s say this is 1/200th of a second, which will work for most modern SLR cameras), and so as long as the shutter speed is longer than this, you should get all the light, right? I posted a comment to this effect on LightingRumours. That 1/900th measurement is to “t0.1”, the point at which the output has fallen to 1/10th of the peak – you need some sort of agreed point to make the measurement to compare lights. Continue reading “Shutter speed effect on slow studio lights”
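To illustrate why the tail of the pulse matters, here’s a toy model (my assumption, not a measurement – real flash tubes only loosely approximate an exponential decay) of how much of the pulse’s total energy lands while the shutter is open:

```python
from math import exp, log

def energy_fraction(shutter_s, t01_s):
    """Fraction of a flash pulse's energy captured during the shutter opening,
    modelling the pulse as an exponential decay whose intensity has fallen
    to 1/10 of peak at t0.1 (an idealization; real tubes differ)."""
    tau = t01_s / log(10)  # decay constant implied by the t0.1 figure
    return 1 - exp(-shutter_s / tau)

print(f"{energy_fraction(1 / 200, 1 / 900):.4f}")   # essentially all of it
print(f"{energy_fraction(1 / 1000, 1 / 900):.4f}")  # a visible chunk lost to the tail
```

Under this idealized model a 1/200th exposure catches virtually everything, while shutter times shorter than the pulse start clipping a noticeable fraction of the light.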
What is the difference between raw and jpeg shooting?
Photography print magazines especially like to create whole articles on this subject – as if there is a choice. There really isn’t – unless you’re a press shooter, you’d have to be mad to shoot jpeg if you care about your images. They like to go on and “compare” jpeg and raw images, as if you can actually view a raw file. If you open a raw file, it is converted there and then to fit into the colour space and range of tones that your display device can support. In effect, then, you are just comparing one jpeg to another – just developed using a different set of values.
OK, before we get into this, let’s be clear that file formats are not the issue here – yes, jpeg is compressed: well, guess what? Nikon cameras can do compressed raw files too (lossless and lossy) and I’ll be surprised if others don’t also. Compression is another debate, and does not really affect the image quality at all. However, as a side effect of making a jpeg, the camera *must* process the image – with no input from you; and that’s what is really at stake here – do you want default processing, or would you like some input to the process?
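To make that concrete, here’s a toy “development” of a single linear raw pixel – every number here is invented for illustration, and real raw converters do far more. The point is that with raw, each of these choices is still yours to make; with jpeg, the camera has already made them and thrown the original values away:

```python
def develop(raw_rgb, wb_gains=(2.0, 1.0, 1.5), gamma=2.2):
    """Turn linear sensor values (0..1) into display-ready 8-bit values."""
    out = []
    for value, gain in zip(raw_rgb, wb_gains):
        v = min(value * gain, 1.0)                 # white balance, clipped at max
        out.append(round(v ** (1 / gamma) * 255))  # gamma-encode down to 8 bits
    return tuple(out)

pixel = (0.20, 0.35, 0.18)  # made-up linear values straight off the sensor

print(develop(pixel))                            # one interpretation...
print(develop(pixel, wb_gains=(1.6, 1.0, 2.1)))  # ...another, from the same data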
Continue reading “Why you’d be daft to shoot jpeg only.”
(or longer focal lengths compress features)
This is a classic example of correlation being mistaken for cause and effect – that is, two things being caused by a third thing, rather than one causing the other. When we shoot portraits, you’ll often hear people saying “don’t shoot below 50mm or you’ll get distortion of the subject’s features”, or “shoot a longer lens to compress the background”. It isn’t the focal length that is causing these effects, it’s the distance between the camera and your subject. The closer you get, the more distortion of features you get, as the nose (for example) is now proportionally a lot closer to your camera than the ears, compared with shooting from further away. The shorter focal length is also caused by you being closer – you need a wider view to fit the whole face in.
Try this – take the same shot at 200mm and 50mm with the camera and subject in the same position, and then crop the wider shot so the subject is the same size – they will look the same. In the same way, the background is compressed when you move further away (i.e. the distance between subject and background becomes proportionally smaller relative to the distance between you and the background). The longer focal length is caused by the increase in distance, to keep the subject filling the frame.
So, rather than focal length causing the distortion, the focal length and the distortion are both caused by the close working distance.
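You can put numbers on this with simple pinhole projection (apparent size scales as 1/distance); the 12cm nose-to-ear depth here is a made-up but plausible figure, purely for illustration:

```python
def nose_to_ear_ratio(camera_to_nose_m, head_depth_m=0.12):
    """Relative apparent size of nose vs ears, by pinhole projection
    (apparent size ~ 1/distance). head_depth_m is an assumed 12 cm."""
    nose_size = 1 / camera_to_nose_m
    ear_size = 1 / (camera_to_nose_m + head_depth_m)
    return nose_size / ear_size

print(round(nose_to_ear_ratio(0.3), 2))  # close up: nose looks 40% bigger
print(round(nose_to_ear_ratio(3.0), 2))  # step back: only 4% bigger
```

Note that focal length never appears in the calculation – only distance does.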
I always wondered – “who died and made 35mm the reference size?” It’s popular for sure, but it seems a bit strange to call 35mm sensors “full frame” when there are plenty of bigger ones, and even bigger slabs of film, available. So I did a bit of research. “Full frame” does not mean a 35mm-film-sized sensor. In fact it has nothing to do with the size of the sensor. “Full frame” refers to a type of CCD sensor that does not have “shift registers” interleaved between each photo-site to shift the values off the photo-diode. The space this frees up can be occupied by a larger photo-diode (around 70% of the surface area is light sensitive in a typical full frame sensor), making them more sensitive to light. They are called “full frame” because they shift the full frame out of the sensor to the storage array in one go. This also makes them subject to light smear, as they are still collecting light while the data clocks out line by line to the storage array. Without interline shift registers, full frame sensors are cheap to make, but require a mechanical shutter to stop and start light collection. Because of this they cannot do video, or live view.
Almost all digital cameras today are not full frame. They have interline transfer CCDs, equipped with shift registers interleaved between the lines of photo-diodes to transfer the data off the photo-diode and onto an accumulation register. That register can then be clocked out to the storage array while the light-sensitive area is collecting photons for the next frame, allowing electronic control of the image start and stop and doing away with the need for a mechanical shutter. (Which makes you wonder why manufacturers add mechanical shutters to their cameras, given all the problems they cause with vibration and high speed flash; I don’t know why they do this and can’t find any explanation.) Because they have shift registers next to the photo-diode, only around 30% of the photo-site is light-sensing material. To compensate, these sensors have an array of “micro-lenses” on them to collect light from the full area of the photo-site and concentrate it down into that 30% area.
So, by this definition, if your camera, like mine, does video or live-view: it ain’t “full frame” – no matter how big the sensor is.
Another meaning of “full frame” originates in the movie industry. 35mm movie cameras using the full-size film gate were called “full frame” or “full gate” (https://en.wikipedia.org/wiki/Full_frame). Smaller gates could be used to save film (which would run through the smaller gate slower).
This is also meaningless for 35mm digital cameras unless (as on some Nikon 35mm cameras) you have the option to shoot with less than the full frame. Even then, this makes no assumption about the actual size of the “full frame” – it could be anything.
This one is being trotted out with ever increasing frequency these days as manufacturers release new cameras with more pixels per square inch than ever before. Especially Nikon, who seem to have decided everyone needs to print at A3, with the D800 at 36Mpixel and even the new entry level Nikon DSLR the D3200 packing 24Mpixels.
I currently use a D700 most of the time, which “only” has 12Mp spread across a 36mm sensor (“35mm” sensors and film are actually 36mm wide – go measure it if you don’t believe me). It performs really well at high gain (the so-called “ISO”). I was concerned about the performance of the D800 at high ISO ratings – if I switched, would I still be able to take pictures in relative darkness, handheld and so on?
Now, sure, if you took the same image at a high ISO rating with a D700 and a D800 and zoomed both to 100%, the D800 image would have more noise. But! You zoomed the D800 image more. A lot more. Zoom the D800 image to the *same* magnification as the D700 image at 100%, and what you’ll find is that the noise is about the same. It makes sense – if you divide the same physical area of sensor into three times the pixels, but then squash the image down by a factor of three, the extra noise is lost in the resizing. There will be some loss due to inefficiencies and the borders around the photo-sites, sure, but on this comparison the D800 shot looks better than the D700 (D3 on the test) to me (and about the same as the Canon 5D Mk III):
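Here’s a toy simulation of that argument, modelling shot noise only (it ignores read noise and the fill-factor losses mentioned above): give each smaller photosite √3 times the noise, since each collects roughly a third of the light, then average groups of three back down to the bigger-pixel resolution.

```python
import random
import statistics

random.seed(42)

NOISE_RATIO = 3 ** 0.5  # each small photosite gets ~1/3 of the photons, so
                        # its shot noise is roughly sqrt(3) higher

# Noise-only samples from 36,000 small "D800-style" photosites...
fine = [random.gauss(0, NOISE_RATIO) for _ in range(36_000)]

# ...downsampled by averaging every 3 pixels to "D700-style" resolution.
coarse = [sum(fine[i:i + 3]) / 3 for i in range(0, len(fine), 3)]

print(round(statistics.stdev(fine), 2))    # ~1.73 – noisier per pixel at 100%
print(round(statistics.stdev(coarse), 2))  # ~1.0 – back to big-pixel levels
```

The averaging pulls the noise back down by √3, cancelling the per-pixel penalty – which is exactly the “extra noise is lost in the resizing” effect.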