WHERE DO YOU SPEND MOST OF YOUR TIME IN THE PHOTOGRAPHIC LIGHT SPECTRUM?
I found the following quote on the website of View Camera Australia, to which I subscribe:
“Photography is a medium much practiced but seldom explored. In the mainstream, the photographer concerns his/herself (sic) with recording the intersection of visible space and time, which together add up to a ‘moment’. They curate and present their world as it is experienced at the surface.
Photography, however, has the potential to offer a vision into the incredible world beyond the visible spectrum. It can function outside the limitations of comfort and understanding. Its jurisdiction spans from gamma rays to the infrared, it can image the invisible and is not constrained by our narrow conception of time.
This exciting realm is that of photographic alchemy.”
It’s a thought-provoking quote. In effect, it states that the vast majority of photographs are taken of subjects we see with our eyes – in other words, within the “normal” or “visible” spectrum – but it goes on to say there is far more to photography than just capturing the “moment” or, as Henri Cartier-Bresson put it, “the decisive moment”.
There is also the “invisible spectrum”, which offers us a world of photography outside that which we can “see” or that which allows the “decisive moment” to be captured. This invisible-to-sight world provides the opportunity for each of us to expand our photography into areas that present an incredible opportunity to capture something different.
There are some reasonably well-practiced techniques that take our photography away from the “moment”. An example is the veiled-water effect of a waterfall created with a time exposure. The technique of using long exposure times effectively compresses “multiple moments” into a single image. This is just one example of moving away from Cartier-Bresson’s “decisive moment”, though still within the visible spectrum.
Stitching several images into a panorama is another example of capturing multiple moments and combining them into a single image. I don’t consider stitching to be much of a departure from the “moment”, as each frame is itself the capture of a “moment”, and together they represent the expanded view that existed when the images were taken. This is not a criticism: stitching is a legitimate form of photographic expression, and before digital imaging came on the scene there were plenty of examples of printed photos being joined, usually with tape, to give a panoramic representation of a scene. I recently saw such a technique used by someone who, many years ago, had taken several images of the Blue Lake at Mount Gambier. Together the prints gave an excellent image of the entire lake that would have been impossible without an extreme wide-angle lens. Such a lens would have had to be a fisheye, whose inherent distortion could never match the viewpoint of a “normal” 50mm lens, quite apart from the expense of such lenses and the limited opportunities for their use.
A technique for exploring outside the visible spectrum entails the use of special filters that change the “natural” image we see with our eyes. Polarising filters do this to some extent, but special films such as infrared (B&W as well as colour), combined with the appropriate filters, give us the opportunity of “seeing” something that doesn’t exist in our visible spectrum but does exist, to some extent, for birds and insects such as bees. Just imagine the images bees could present if they were able to “photograph” the scenes and flowers they “see”. I understand that “normal” photographic scenes can be manipulated to look infrared using image-editing software, but I don’t think this achieves the same results as special films. (I stand to be corrected.)
The following is an example of a film-captured infrared image.
It can be seen that the sky is very dark and the foliage of the tree almost snow white. The sky is so dark because of the “extreme red” 720nm filter, which prevents light with a wavelength shorter than 720nm from passing through. Blue skylight has a wavelength well below 720nm, so the film is left almost unexposed where the sky appears, and the resulting “positive” print is almost black there. On the other hand, infrared light beyond 720nm, such as that strongly reflected by sunlit foliage, registers markedly on the film, printing a snow-like white. It stands to reason that the best infrared images result from bright sunlit days, just the time when “normal” images can suffer from the “flatness” of direct bright sunshine. How often have we heard a judge comment that a photo was taken at the “wrong time of day”, or that the “lighting was not kind to you”, particularly in relation to landscapes?
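The blocking behaviour described above can be sketched as a toy model in Python. This is an idealised step-function filter for illustration only; real 720nm filters roll off gradually around the cut-on wavelength rather than switching sharply.

```python
# Idealised sketch of a 720nm long-pass ("extreme red") filter.
# Real filters transmit gradually around the cut-on point; the hard
# step here is a simplification to show why sky goes dark and
# sunlit foliage goes white on infrared film.

CUT_ON_NM = 720  # cut-on wavelength of the filter

def transmits(wavelength_nm: float) -> bool:
    """Return True if the idealised filter passes this wavelength."""
    return wavelength_nm >= CUT_ON_NM

# Blue skylight (~460nm) is blocked, so the sky records as near-black;
# foliage reflects strongly in the near infrared (~850nm), which passes
# the filter and renders near-white on IR-sensitive film.
for label, nm in [("blue sky", 460), ("visible red", 680), ("near-IR foliage", 850)]:
    print(f"{label} ({nm}nm): {'passes' if transmits(nm) else 'blocked'}")
```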
So that’s one form of photo imaging that “offers a vision into the incredible world beyond the visible spectrum”.
Note that a nanometre is a unit of length in the metric system, equal to one billionth (one thousand millionth) of a metre.
One aspect of using a 720nm red filter is that exposure times are quite lengthy compared to usual shutter speeds; exposures such as 10 seconds are not unusual. The following image shows just how dark a 720nm extreme red filter is compared to a normal red filter. The normal red filter darkens the sky and brings out cloud detail, but it still renders foliage much as an unfiltered image would, and certainly does not produce the “snow-white” foliage effect of a dedicated IR filter on a bright sunlit day.
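As a rough illustration of why exposures stretch to several seconds, the standard exposure arithmetic can be sketched as follows. The ten-stop figure used here is an assumption for illustration only; actual filter factors vary with the film, the filter and the light, so bracketing and testing is essential.

```python
# Hedged sketch: extending exposure for a dense IR filter.
# The "10 stops" light loss is an assumed figure for illustration;
# real 720nm filters and IR films vary, so always bracket exposures.

def compensated_exposure(base_seconds: float, stops: float) -> float:
    """Each stop of light lost doubles the required exposure time."""
    return base_seconds * (2 ** stops)

# An unfiltered meter reading of 1/100s with a ~10-stop filter becomes
# roughly 10 seconds, in line with the shutter speeds mentioned above.
print(compensated_exposure(1 / 100, 10))  # → 10.24 seconds
```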
The foregoing is an introduction to imaging the invisible, and I hope it encourages members to explore that area of photography hidden from normal view.
Nanometres and human spectral sensitivity:
Humans have the ability to see the colour spectrum from violet to red. Beyond those extremes are ultraviolet and infrared. This range can be seen in a vivid rainbow. Expressed in nanometres, it is the light range from about 380nm to 740nm. Below 380nm is ultraviolet and above 740nm is infrared, both of these extending outside the capacity of human sight.
The humanly visible range of colours is:
Violet: 380-450nm
Blue: 450-485nm
Cyan: 485-500nm
Green: 500-565nm
Yellow: 565-590nm
Orange: 590-625nm
Red: 625-740nm
There are no distinct boundaries between the colours, which merge between the ranges indicated in the above chart; thus a colour in the area of 500nm will be “cyan-green”, and so on. The chart also makes clear why an extreme red 720nm filter has its greatest effect on the range furthest from it: the violets and blues are all but stopped from passing through the infrared filter.
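The band boundaries in the chart can be expressed as a simple lookup. This is a toy sketch only: as noted above, real colour perception blends continuously between bands rather than switching at a fixed nanometre value.

```python
# Mapping a wavelength to its colour name, using the band boundaries
# from the chart above. Boundaries are indicative, not sharp.

BANDS = [
    (380, 450, "violet"),
    (450, 485, "blue"),
    (485, 500, "cyan"),
    (500, 565, "green"),
    (565, 590, "yellow"),
    (590, 625, "orange"),
    (625, 740, "red"),
]

def colour_name(nm: float) -> str:
    """Name the visible band containing nm, or the invisible region beyond."""
    for low, high, name in BANDS:
        if low <= nm < high:
            return name
    return "ultraviolet" if nm < 380 else "infrared"

print(colour_name(500))  # boundary region: the chart puts 500nm in green
print(colour_name(720))  # deep red, near the IR filter's cut-on
print(colour_name(850))  # outside human vision → "infrared"
```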
Using the chart as a guide, it can be seen that a yellow filter prevents much of the spectrum below it from registering on normal panchromatic B&W film (which is sensitive to the full human colour spectrum), whilst allowing the wavelengths above it to register. This is why a yellow filter is referred to as a “cloud” filter for B&W photography: it “darkens” blue sky (though not dramatically) so that clouds are more pronounced, whilst still allowing the colours above it, up to and including red, to expose the film. Orthochromatic B&W films are not sensitive to red, which allows them to be developed in the darkroom under a red safelight without fogging, whereas panchromatic film, sensitive to the entire visible spectrum, will fog unless kept and developed in total darkness.
The data in the chart above is taken from the Wikipedia entry for “Visible Spectrum”. It’s well worth reading this and related links regarding colour, as “normal” photography is totally dependent on this visible range. That includes digital and film. There is an extremely interesting section on the colour spectrum sensitivity of insects, birds, dogs, goldfish and snakes!
Modern digital sensors are able to detect radiation well outside the visible range; however, they are restricted to the visible spectrum by strong IR and UV blocking filters fitted inside the camera. It is possible (at some expense) to have this filter replaced with plain glass, making the camera IR- and UV-capable (but no longer usable for normal photography). The alternative is to mount a strong visible-light-blocking filter externally and overcome the internal IR and UV filter by using long exposure times. See the article on infrared photography by Jack Dascombe.