[ Beneath the Waves ]

A Detailed Introduction

article by Ben Lincoln

 

This article is intended to give a detailed and (more-or-less) accurate introduction to the world of multispectral photography. If you find yourself getting bored, A Brief Introduction will give you the executive summary version in a few paragraphs.

The Electromagnetic Spectrum is a Big Place

Most people have seen something like the following image[1] at some point in their schooling:

The Electromagnetic Spectrum (Logarithmic Scale)
[ EM Spectrum ]

Based on the definitions in ISO 21348.

 

It represents all of the forms that photons can come in, from radio waves (which extend indefinitely to the left of the chart), to high-energy gamma rays (which extend indefinitely to the right). It's easy to miss the little green vertical stripe that represents the light that we can see with our eyes. Think about that for a moment - every colour you've known since the day you were born - every sunrise, flower, exotic animal, painting, photograph, or film you've ever seen - has been represented by the thin sliver of the spectrum that we call "visible light". Most of these major bands represent a whole world of "colours" of their own, completely invisible to unaided human sight.

Physicists have devised ways to detect, measure, and image every major type of radiation[2] within this vast domain. Most of them require increasingly-exotic hardware the further away they are from visible light. Radio astronomers use tremendous parabolic dishes to collect very long waves from distant galaxies. Thermal imaging systems focus heat onto specialized detectors using lenses whose glass is completely opaque to human vision. Many of these bands are partially or completely blocked by Earth's atmosphere - even if your eyes were sensitive to X-rays, you would need to be right next to an object to see it in that part of the spectrum, because at farther distances the air itself would appear as an impenetrable fog[3].

While much of the spectrum is (at least today) beyond the ability of hobbyists and other non-professionals to explore directly, the bands immediately adjacent to visible light have enough similarity that they can be photographed using (sometimes repurposed) consumer equipment. The higher-frequency section of the near infrared[4] can be imaged with virtually any digital camera after a trivial modification[5]. Ultraviolet-A is harder to capture, but the sensors in Nikon's digital SLRs are sensitive enough to it that this region too becomes accessible.

However, capturing the raw imagery is only the first step. One of the many challenges faced by scientists and amateurs alike is how best to represent the results given the limitations of our senses.

Flatland for Colours

Flatland is a metaphor used in physics and mathematics[6] to help explain the concept of hyperdimensional structures (that is, geometry with four or more spatial dimensions[7]). I will avoid a detailed description here, but I highly recommend Dr. Brian Greene's excellent The Elegant Universe, which (whether you are skeptical of String Theory or not) includes, among other things, a large number of analogies and metaphors useful for understanding some of the less-intuitive aspects of physics[8].

The general idea is to imagine how the inhabitants of a two-dimensional world would experience an encounter with a three-dimensional shape or being, because the same sort of bizarre effects would apply to a four-dimensional object or being interacting with our own three-dimensional world. For example, just as you can see what's inside a circle drawn on a piece of paper by looking down at that two-dimensional plane from above, a four-dimensional being could see everything that's inside your body by looking at you from the corresponding vantage point "above" you in a higher dimension. Such a being could also "pick you up", rotate you 180 degrees around a hyperdimensional axis, then "set you down", with the result that every atom in your body would have its position flipped as if in a mirror image.

However, for the purposes of this discussion, the most important aspect of Flatland is its treatment of the perception of higher-dimensional shapes. Just as a two-dimensional creature can only ever perceive a two-dimensional slice of a cube, three-dimensional creatures like us cannot see the entirety of a four-dimensional object such as a tesseract (a 4D "hypercube"). The best we can do is to represent it using three-dimensional slices or shadows cast by the more-complex original.

This is important because a similar limitation exists when we move beyond the familiar red, green, and blue primary colours of light in the visible spectrum[9].

Here is a photo I took of the artificial rainbow created by the DIY punk version of Sir Isaac Newton's famous prism experiment[10]. The black electrical tape Xs are registration marks (used to align multiple elements in a composite image), and will come in handy a little later. The spectrum is a bit messy vertically because instead of a precise vertical slit, the white source light (a 500W halogen worklight) was passed through a gash cut in aluminum foil before being focused onto the prism by the magnifying glass of a "helping hands" apparatus from my electronics workbench. The result is functionally the same as Isaac Newton's for what is being described here.

Prism Rainbow 1
[ R-G-B ]
[ Red ]
[ Green ]
[ Blue ]

An artificial rainbow created using the method discovered by Sir Isaac Newton (and made famous by Pink Floyd). The three greyscale images individually represent the brightness of the red, green, and blue components of the image.

Date Shot: 2009-10-04
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 50mm
Filters: LDP CC1
Date Processed: 2009-10-04
Version: 1.0

 

In addition to the familiar colour image, I've included the three individual primary colour components (red, green, and blue), represented as greyscale images.
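For anyone who would like to reproduce this kind of channel separation, a few lines of Python with the Pillow library are enough. This is a minimal sketch; the file names are placeholders rather than anything from my actual workflow:

```python
# Minimal sketch: separate an RGB photo into its three channel images.
# "petunias.jpg" is a placeholder file name.
from PIL import Image

image = Image.open("petunias.jpg").convert("RGB")

# split() returns one greyscale image per channel, in red, green, blue order.
red, green, blue = image.split()

red.save("petunias_red.png")
green.save("petunias_green.png")
blue.save("petunias_blue.png")
```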

As in the Flatland metaphor, imagine a person who sees the world in "black and white". They can tell how much visible light something is emitting or reflecting, but cannot distinguish between the colours of the rainbow. As long as they have a theoretical knowledge of concepts like "yellow" (an equal mixture of red and green light) and "cyan" (an equal mixture of green and blue light), they can compare these versions to determine that the image represents a rainbow, even if they can't see the rainbow in its entirety as a single photograph.

Here's a real-world example of the same concept:

Petunias 1
[ R-G-B ]
[ Red ]
[ Green ]
[ Blue ]

This flower bed full of petunias was one of the first subjects I used to test my photography equipment. Like the previous example, the three greyscale images represent the individual brightness levels of the red, green, and blue light parts of the image.

Date Shot: 2007-06-17
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 28mm
Filters: LDP CC1, B&W 093, B&W 403 and LDP CC1 stacked
Date Processed: 2009-09-14
Version: 2.0

 

By examining the three greyscale images, someone without colour vision can note that the flowers on the right equally reflect red, green, and blue light, so they have the property of being "white". The flowers on the left reflect medium amounts of red and blue light, but do not reflect green light at all (they are completely dark in the Green image), so they have the property of being "dark purple" (again, even though for this person "purple" is an abstract concept).

Now imagine a person who is red-green colourblind. Like about 6% of men, this person sees only two primary colours: they cannot distinguish between red and green light, even though neither is "invisible" to them in the sense that radio waves are invisible to all human eyes. They can, however, perceive the difference between blue light and red and/or green light. In most situations, this is not as severe a limitation as those of us without colourblindness might first expect. The first image in this next sequence depicts the petunias in a way that simulates red-green colourblindness. Don't worry about the three-component naming scheme ("RG-RG-B") yet if it doesn't make sense; it will be explained later in this article.

Petunias 2
[ RG-RG-B ]
[ R-R-B ]
[ G-G-B ]
[ R-R-G ]

The first of these photos simulates how the petunia image appears to someone who is red-green colourblind. The second and third are false colour images that would allow someone with that sort of colourblindness to distinguish between red, green, and blue brightness using only two pictures instead of three. The fourth discards blue entirely so that red and green can be compared directly.

Date Shot: 2007-06-17
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 28mm
Filters: LDP CC1, B&W 093, B&W 403 and LDP CC1 stacked
Date Processed: 2009-09-14
Version: 2.0

 

The second and third images in the above sequence illustrate how, for someone who sees two primary colours, two images are sufficient to distinguish between the red, green, and blue components of a photograph, instead of the three needed by someone who sees only in greyscale. The "R-R-B" image allows such a person to determine that the flowers on the left reflect both red and blue light. The "G-G-B" image shows them that the flowers on the left do not reflect any green light (as well as reinforcing the blue-light reflectivity), so the flowers must be purple, even though to their unaided eyes they appear as a sort of greyish-blue. There is also a third possible variation ("R-R-G"), which discards blue entirely to allow a direct comparison between red and green levels; in this case, this final variation provides the most obvious visual contrast.
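For the curious, all of these variations can be produced with simple channel remapping. Below is a minimal sketch in Python (NumPy and Pillow); note that averaging the red and green channels is only a rough stand-in for a physiologically accurate colourblindness simulation, and the file name is a placeholder:

```python
import numpy as np
from PIL import Image

rgb = np.asarray(Image.open("petunias.jpg").convert("RGB"), dtype=np.float32)
r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]

def compose(red_ch, green_ch, blue_ch):
    """Stack three single-band arrays into an RGB image."""
    stacked = np.stack([red_ch, green_ch, blue_ch], axis=-1)
    return Image.fromarray(stacked.clip(0, 255).astype(np.uint8))

rg = (r + g) / 2.0                       # red and green merged into one band
compose(rg, rg, b).save("RG-RG-B.png")   # rough red-green colourblindness simulation
compose(r, r, b).save("R-R-B.png")
compose(g, g, b).save("G-G-B.png")
compose(r, r, g).save("R-R-G.png")
```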

A Wider Spectrum and False Colour

Consider now a camera that can photograph not only red, green, and blue, but also near infrared and ultraviolet-A, giving a total of five primary colours to work with[11]. The camera I use can't capture all five bands at once, so the registration marks mentioned previously are there to allow multiple exposures shot from a tripod to be lined up correctly in post-processing. In less-contrived photographs the subjects in the picture would serve the same purpose.
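I line the exposures up by hand using the registration marks, but for readers who prefer automation, a translation-only alignment (all that a tripod-mounted camera should need) can be estimated with phase correlation. This is a sketch using scikit-image and SciPy, with placeholder file names, not a description of my actual process:

```python
import numpy as np
from PIL import Image
from scipy.ndimage import shift as translate
from skimage.registration import phase_cross_correlation

# Placeholder file names for two exposures of the same tripod-mounted scene.
reference = np.asarray(Image.open("visible.png").convert("L"), dtype=np.float32)
moving = np.asarray(Image.open("nir.png").convert("L"), dtype=np.float32)

# Estimate the (row, column) shift that best aligns `moving` to `reference`,
# then apply it.
offset, error, _ = phase_cross_correlation(reference, moving)
aligned = translate(moving, offset)
Image.fromarray(aligned.clip(0, 255).astype(np.uint8)).save("nir_aligned.png")
```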

Prism Rainbow 2
[ Near Infrared ]
[ Red ]
[ Green ]
[ Blue ]
[ Ultraviolet-A ]
[ Full Spectrum ]

How can five primary colours be represented using the human-eye colourspace based on three? The "full spectrum" image displays near infrared and ultraviolet-A as white light, but this makes it impossible to distinguish between them, or the section of rainbow where red, green, and blue combine to form white light.

Date Shot: 2009-10-04
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 50mm
Filters: LDP CC1, LDP 1KB, B&W 093, Hoya R72, Baader U-Filter
Date Processed: 2009-10-04
Version: 1.0

 

The "full spectrum" image in this set is not very useful, because (as noted above), it doesn't provide the viewer with enough information to determine if the white areas represent near infrared, ultraviolet-A, or an equal mixture of red, green, and blue light. Our eyes only have receptors for three primary colours (at most)[12], so there is nothing left to assign to represent the two additional bands.

Two main approaches are taken to this problem. Many scientists prefer the purity of the discrete greyscale images. There is certainly no ambiguity when viewing them - if an object is bright in the image that represents red light, there can be no confusion as to whether it reflects (or produces) red light.

On the other hand, greyscale images do not make use of the much greater "information bandwidth" that the three primary colour "channels" of our eyes can provide. This is where "false colour" comes into play. "False colour" is generally accepted to mean any image which uses colour to represent something other than what would be perceived by an unaided human eye, and comes in many forms. One of the most common is to pick three bands of the spectrum, and "map" them to red, green, and blue, in order of ascending frequency (corresponding to the relative positions of actual red, green, and blue light on the electromagnetic spectrum). If astronomers at NASA have obtained images of a stellar object using radio waves, far infrared, and X-rays, they will typically use red to represent the radio waves, green to represent far infrared, and blue to represent X-rays. If another object is imaged using microwaves, X-rays, and gamma rays, then (again, typically) red will represent microwaves, green will represent X-rays, and blue will represent gamma rays. There is no "right" or "wrong" way to assign these colours; it is simply that this is considered the most intuitive way to represent the data while maximizing the amount of information conveyed by a single image[13].

This is the main system I use in my own work, and just as everyone typically refers to "RGB colour", I will indicate which bands are represented using a three-component system where the first is used as the red channel, the second is used as the green channel, and the third is used as the blue channel. For example, "NIR-R-G" indicates that the red channel represents the near infrared version of an image, the green channel represents what humans would see as red, and the blue channel represents what we would see as green. "G-B-UVA" indicates that the red channel represents green light, the green channel represents blue light, and the blue channel represents ultraviolet-A light.
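In software, composing any of these variations amounts to assigning aligned greyscale band images to the three output channels. Here is a minimal sketch in Python using NumPy and Pillow; the file names are placeholders rather than anything from my actual workflow:

```python
import numpy as np
from PIL import Image

def load_band(path):
    """Load one aligned exposure as a single greyscale band."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32)

# Placeholder file names for the five aligned exposures.
bands = {name: load_band(name.lower() + ".png")
         for name in ("NIR", "R", "G", "B", "UVA")}

def false_colour(red_band, green_band, blue_band):
    """Map three named spectral bands onto the red, green, and blue channels."""
    stacked = np.stack([bands[red_band], bands[green_band], bands[blue_band]],
                       axis=-1)
    return Image.fromarray(stacked.clip(0, 255).astype(np.uint8))

false_colour("NIR", "R", "G").save("NIR-R-G.png")
false_colour("G", "B", "UVA").save("G-B-UVA.png")
```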

At this point, you may be thinking "that makes sense, but what happened to the blue light in the first image, and the red light in the second?" Just as with the colourblind people in the examples above, bringing new spectral bands into the image makes it necessary to discard other information when translating multispectral data into a three-primary-colour image - the result is zero-sum. In the two preceding examples, the image was "shifted" in one direction or the other along the spectrum, with e.g. near infrared being shifted in but blue light being shifted out in exchange.

This sort of "shift" is not the only variation on this particular method. For example, the green and blue channels can be left alone but the red channel used for near infrared, resulting in an "NIR-G-B" image (using my naming convention). Spectral bands can be combined as well - I will commonly provide a variation in which the green channel represents the average of the entire human-visible part of the spectrum, with the red channel used for near infrared and the blue channel used for ultraviolet-A ("NIR-RGB-UVA"). However, this still represents a tradeoff, in that such an image alone cannot be used to distinguish between what humans would perceive as red, green, and blue light, only that one or more of them were present in the areas which are bright in the green channel.
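The combined-channel variation works the same way. Here is a self-contained sketch of the NIR-RGB-UVA composite, again with placeholder file names, using a straight average for the visible bands (weighted combinations are equally possible):

```python
import numpy as np
from PIL import Image

def band(path):
    """Load one aligned exposure as a greyscale array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32)

# Placeholder file names for the aligned exposures.
nir, uva = band("nir.png"), band("uva.png")
visible = (band("r.png") + band("g.png") + band("b.png")) / 3.0  # averaged RGB

composite = np.stack([nir, visible, uva], axis=-1)
Image.fromarray(composite.clip(0, 255).astype(np.uint8)).save("NIR-RGB-UVA.png")
```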

Here is a set of false colour images which depict the artificial rainbow using all of the variations I will commonly include for multispectral photos[14], as well as one that I usually don't (1000nm-880nm-720nm[15]):

Prism Rainbow 3
[ R-G-B ]
[ NIR-R-G ]
[ NIR-R-B ]
[ NIR-R-UVA ]
[ NIR-G-B ]
[ NIR-G-UVA ]
[ NIR-B-UVA ]
[ R-G-UVA ]
[ R-B-UVA ]
[ G-B-UVA ]
[ NIR-RGB-UVA ]
[ 1000nm-880nm-720nm ]

A variety of false colour representations of the five major colour components produced by the prism which a modified Nikon D70 can detect. The 1000nm-880nm-720nm version is a "miniature rainbow" that exists entirely in the near infrared, which I've included for those who are curious if there is a functional difference between the various types of near infrared bandpass filter.

Date Shot: 2009-10-04
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 50mm
Filters: LDP CC1, LDP 1KB, B&W 093, Hoya R72, Baader U-Filter
Date Processed: 2009-10-04
Version: 1.0

 

To make comparisons easier, I've combined the same "stripe" from each variation into a single image:

Spectrum False Colour Comparison
[ False Colour Comparison-Stripe ]

The same slice of the prism-generated rainbow as represented using a variety of false colour methods. The 1000nm-880nm-720nm stripe is a "miniature rainbow" that exists entirely in the near infrared.

Date Shot: 2009-10-04
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 50mm
Filters: LDP CC1, LDP 1KB, B&W 093, Hoya R72, Baader U-Filter
Date Processed: 2009-10-04
Version: 1.0

 

As you can see, each of these variations reveals differences that the others cannot. Typically only a handful of them will be artistically pleasing (which handful depends greatly on the subject matter), but I will generally include all of the variations for those who are curious.

How does this play out in a real-world image?

Petunias 3
[ R-G-B ]
[ NIR-R-G ]
[ NIR-R-B ]
[ NIR-R-UVA ]
[ NIR-G-B ]
[ NIR-G-UVA ]
[ NIR-B-UVA ]
[ R-G-UVA ]
[ R-B-UVA ]
[ G-B-UVA ]
[ NIR-RGB-UVA ]

The petunias as represented using the same variety of false colour methods.

Date Shot: 2007-06-17
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 28mm
Filters: LDP CC1, B&W 093, B&W 403 and LDP CC1 stacked
Date Processed: 2009-09-14
Version: 2.0

 

As you can see, even for this unremarkable shot of someone's flower bed, false colour multispectral photography throws open a wide window to a world beyond the one we see directly. Note in particular the patterns that become visible on the flowers when peeking into the ultraviolet - most flowers exhibit this phenomenon to one degree or another (Bjørn Rørslett's site has a huge section dedicated to this topic), because bees and other pollinators see green, blue, and ultraviolet light, and so flowering plants have evolved patterns to attract them.

Five distinct colour channels require a minimum of only two false colour images of this type to provide all of the "raw data" (for example, NIR-R-G and G-B-UVA - which even leaves room for a sixth channel, due to the redundant representation of green), but as with the colourblindness simulation above, it soon becomes clear that certain variations provide better visual contrast depending on what is being photographed.

Beyond these variations, I will usually include at least tinted greyscale versions of the near infrared and ultraviolet-A versions of an image. If the result is interesting enough, I may also create a tinted greyscale version of the visible-light image. I tend to prefer tinted greyscale to straight greyscale for three main reasons: first, while I love photographic prints of greyscale images, they usually seem flat and lifeless to me on a computer monitor (although there are certainly exceptions). Second, I find that a tinted image provides better contrast to my eyes. Finally, it provides a way to quickly distinguish which band the image represents, even in thumbnail form. In order to retain this last benefit, I consistently use the same general tinting tones for each major spectral band. Near infrared is almost always a very faint purplish-grey (based on the way it appears uncorrected when taken with my camera using a longer-wavelength filter such as the B&W 093) or reddish-orange (based on the way it appears uncorrected when shot with a shorter-wavelength filter such as the Hoya R72). Ultraviolet-A is generally a more intense shade of purple or pink (based on the way it appears regardless of which filters are used), although some of the older UVA images are a Venusian yellow-orange due to the flawed processing method I used initially. On the rare occasions that I include a tinted visible-light image, it is green. As I allude to in the Brief Introduction, these colours have no real relation to the parts of the spectrum that humans can't see, other than that e.g. blue or purple is usually used to represent ultraviolet because it's the closest colour that we can see[16]. Just because red is next to green on the spectrum doesn't mean those two colours look anything like each other to someone with normal colour vision.
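The tinting itself can be approximated by multiplying the normalized greyscale brightness by a single tone. Here is a sketch in Python; the tone values below are only rough guesses at the shades described above (not my exact values), and the file names are placeholders:

```python
import numpy as np
from PIL import Image

def tint(grey_path, tone):
    """Scale an RGB tone by the normalized brightness of a greyscale image."""
    grey = np.asarray(Image.open(grey_path).convert("L"), dtype=np.float32) / 255.0
    tinted = grey[..., np.newaxis] * np.asarray(tone, dtype=np.float32)
    return Image.fromarray(tinted.clip(0, 255).astype(np.uint8))

# Illustrative approximations of the tinting conventions described above.
tint("nir.png", (215, 205, 225)).save("nir_tinted.png")  # faint purplish-grey
tint("uva.png", (235, 130, 215)).save("uva_tinted.png")  # more intense pink
tint("rgb_grey.png", (170, 215, 170)).save("visible_tinted.png")  # green
```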

These tinted images (as well as the original visible-light RGB variation) provide the basis for a more abstract false-colour method: using a greyscale image of part of the spectrum as the luminance component of an image, and a tinted (or colour) version of the same image as the colour component. For more on this type of false colour, see the Luminance/Colour Images article, which is in the Technical Information sub-section.
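As a taste of what that article covers, here is a sketch of one way to build the "NIR Luma / RGB Colour" variation in Python: convert the visible-light photo to a luma/chroma colourspace such as YCbCr and substitute the near infrared exposure for the luminance channel. This illustrates the general idea rather than my exact process, and the file names are placeholders:

```python
from PIL import Image

# Colour component: the ordinary visible-light photo (placeholder file name).
colour = Image.open("visible.png").convert("YCbCr")
y, cb, cr = colour.split()

# Luminance component: the aligned near infrared exposure of the same scene,
# resized in case the frames differ slightly in size.
nir_luma = Image.open("nir.png").convert("L").resize(colour.size)

# Recombine: near infrared brightness carrying visible-light colour.
composite = Image.merge("YCbCr", (nir_luma, cb, cr)).convert("RGB")
composite.save("nir_luma_rgb_colour.png")
```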

Here is one last set of petunia images, depicting the tinted and more-abstract versions:

Petunias 4
[ Tinted RGB ]
[ Tinted NIR ]
[ Tinted NIR ]
[ Tinted UVA ]
[ NIR Luma / RGB Colour ]
[ NIR Luma / Tinted RGB Colour ]
[ NIR Luma / Tinted UVA Colour ]
[ UVA Luma / RGB Colour ]
[ UVA Luma / Tinted RGB Colour ]
[ UVA Luma / Tinted NIR Colour ]

Some more art-centric false colour representations of the petunias.

Date Shot: 2007-06-17
Camera Body: Nikon D70 (Modified)
Lens: Nikon Series E 28mm
Filters: LDP CC1, B&W 093, B&W 403 and LDP CC1 stacked
Date Processed: 2009-09-14
Version: 2.0

 

There are many other possibilities for making false colour composite images. In the world of synthetic aperture radar, it is very common to make use of polarimetry - comparing the reflectivity of an area not in terms of different wavelengths, but of different polarizations of the same wavelength. This can be done in (and around) the human-visible spectrum using a camera with a polarizing filter, but I have yet to take any example pictures that show the sort of dramatic results seen in radar applications.
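The compositing step would be identical to the wavelength-based false colour described earlier - only the source exposures change. As a sketch (with placeholder file names), three exposures taken through a linear polarizer at different rotation angles could be mapped to the three channels:

```python
import numpy as np
from PIL import Image

def band(path):
    """Load one aligned exposure as a greyscale array."""
    return np.asarray(Image.open(path).convert("L"), dtype=np.float32)

# Placeholder file names: exposures through a linear polarizer at 0, 45, and
# 90 degrees, mapped to the red, green, and blue channels respectively.
composite = np.stack([band("pol_000.png"),
                      band("pol_045.png"),
                      band("pol_090.png")], axis=-1)
Image.fromarray(composite.clip(0, 255).astype(np.uint8)).save("polarimetric.png")
```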

If you've made it to the end of this article, congratulations! You know more about multispectral photography than virtually anyone else on Earth. If you'd like to read more, I highly recommend Bjørn Rørslett's site. Bjørn is an incredibly talented professional photographer, and pioneered most of what I do as part of this hobby. None of this would exist without the vital information he posted on his site in terms of cameras, affordable lenses, and filters. In addition, because he has worked with the technology for so long, he has experience with and examples of things like the false colour multispectral film that Kodak used to produce.

 
Footnotes
1. Probably the most visually impressive version I've seen is this Westinghouse Research Laboratories poster, hosted over at Michael Munroe's site. Unfortunately, it seems to be long out-of-print (if it was ever mass-produced for the public in the first place). I have ThinkGeek's EM spectrum poster, which is also excellent.
2. "Radiation" is used here in the literal sense; that is, an emission of electromagnetic energy. For example, Earth's sun radiates visible light, which is reflected off of objects into our eyes. Dangerous radiation (e.g. from uranium or plutonium) is only one sub-type of the whole.
3. This is one of several reasons that astronomers like to deploy space-based telescopes. By putting their equipment in the vacuum of space, they can obtain images that would be difficult or impossible to capture with telescopes on Earth.
4. Not to be confused with mid or far infrared - see Thermal versus Near Infrared.
5. See Cameras.
6. The term and concept are from Edwin A. Abbott's late-19th-century short story Flatland: A Romance of Many Dimensions. As the copyright is long-expired, free PDF versions can easily be obtained by the reader who is interested in pursuing this topic further.
7. Not to be confused with Einstein's four-dimensional theory of spacetime. In Einstein's model there are three spatial dimensions and one time dimension. Hyperdimensional geometry involves at least four spatial dimensions, a concept that can be very difficult to wrap one's three-dimensional head around. While some branches of physics (most notably String Theory) theorize additional spatial dimensions beyond the familiar three, the concept itself is useful mathematically regardless of whether or not those additional spatial dimensions actually exist in our universe. One commonly-used example is the "quaternion", a four-dimensional value that can be used in 3D computer software to represent the rotation of an object and thereby avoid the problem of "gimbal lock" that results from the traditional 3-axis Euler Angle system.
8. The later novels in Rudy Rucker's 'ware cyberpunk series (Software, Wetware, Freeware, Realware) also include a Flatland-inspired take on hyperdimensional beings.
9. If you are more familiar with paint than light, you may be asking at this time "I thought the three primary colours were red, yellow, and blue!" This is the distinction between the "additive" colour model (which describes how light behaves), and "subtractive" colour (which describes how, for example, the perceived colour of paint will change when several colours are mixed together).
10. This rainbow had to be generated artificially, rather than from sunlight, because the apparent motion of the sun through the sky would cause the rainbow to shift its position over time. A consistent position was necessary for creating multiple exposures, which will be explained shortly.
11. Photographers (especially multispectral photographers) will note that the ultraviolet-A exposure is significantly noisier than the others, and includes a faint "ghost" to the left of the bright band. Unlike sunlight, the worklight I was using for this experiment does not put out much UVA. The only way I was able to obtain an image was to set my modified D70 to its highest sensitivity (ISO 1600, three stops higher than the standard). Even at that setting, it still required an 8-minute exposure to capture enough UVA light. This type of scenario (which is essentially nonexistent in most multispectral shooting) also exposes a weakness of UVA bandpass filters. It is difficult or impossible using current technology to create a single filter which will pass ultraviolet light but completely block infrared and human-visible light. The Baader U-Filter I typically use is worlds better than anything else available today, but still shows a hint of near infrared light leakage in extremely low-contrast situations like this one. The possibility of this impacting field photography is low to nonexistent - see Thermal versus Near Infrared for evidence of what a stunning success the U-Filter is in this respect.
12. Well-read readers may be aware that the human eye does have a small amount of sensitivity to the near infrared and ultraviolet-A. However, our eyes do not perceive either of these as distinct colours, but only as "dark" or "bright" (much like the greyscale-visioned example person from earlier in the article). In addition, our sensitivity to them is generally so low that they only come into play in extremely low light or very contrived conditions (such as wearing IR-bandpass filters as goggles). There are a few animals known to possess receptors for four or more primary colours - most notably some species of tropical bird and the mantis shrimp. The latter is the only creature known to have "hyperspectral vision", far surpassing the eyesight of any other animal on Earth in terms of colour perception. The mantis shrimp is an especially odd case, because one of the main reasons theorized for the range of human eyesight is that red, green, and blue light are the only parts of the spectrum effectively transmitted by water, where our distant ancestors first evolved vision. Although they can sense a huge range of spectral variations, mantis shrimp must be very close to their subject to make use of much of that range.
13. Cynical scientists will remark that the primary purpose of false colour images (especially those produced by NASA) is to provide "pretty pictures" in order to attract funding. I can't argue that this is not at all a factor, but I would dispute the idea that this is the main goal. We have receptors for three primary colours; why not use them to their full potential?
14. Expert and clever readers will have noticed that while I've included the "squashed visible-light" NIR-RGB-UVA variation, I haven't included others that involve combining channels (NIRR-G-B, NIRRG-B-UVA, R-G-BUVA, etc.). This is mainly because using my current manual processes, the number of variations to produce would become overwhelming, and that's only considering methods where the combined channels are averaged, as opposed to being weighted on a curve or by other means. One of my professional skills is in process automation and tool development, and eventually I would like to create a toolset that would make a larger range of variations easier to produce.
15. Although there is a noticeable "miniature rainbow" created by the three near infrared sub-bands in this example, I have never found a significant real-world difference in reflectivity between them for a given object, so I don't invest the time in capturing more than one of them.
16. A potentially-interesting piece of trivia is that despite popular perception (including the ISO standards body, as evidenced by the content of ISO 21348), the spectrum produced by a rainbow (or prism) does not actually include "purple" light. Although "purple" is often used interchangeably with "violet", the reality is that violet is the colour beyond blue, whereas purple is the result of mixing blue light with red light. On a colour wheel, these are the same thing (because blue wraps back around to red), but this is not true of the electromagnetic spectrum (which is linear). Colour wheels are designed in this manner because the red receptors in human eyes are somewhat sensitive to violet light, so to us it can seem as if "purple" and "violet" are the same thing (or at least very similar), but this is only due to the limitations of our senses. Current consumer-level RGB display systems cannot accurately depict violet, so I only feel slightly guilty using purple and pink to represent ultraviolet.
 