Multi-Illuminant Dataset

Evaluating the performance of color constancy algorithms is a challenging task. Traditional datasets have been created under the assumption of a single, globally uniform illuminant. This assumption can be directly enforced under laboratory conditions, and it also holds for natural images to some extent. However, a large number of real-world images contain at least two dominant illuminants.

To address this issue, we recently proposed two approaches to recover the illuminant color locally (see also our color and reflectance page). Implicitly, a local estimation allows multiple illuminants to be compensated more accurately. However, the evaluation of such methods becomes much more difficult: ground truth for multi-illuminant scenes is generally a tradeoff between scene realism and precision.

For our work, we chose an approach that yields almost pixel-wise ground truth on laboratory data.

  • We set up a scene and fixed the camera position and the positions of the illuminants.
  • We took pictures of the scene under different combinations of illuminants and different shutter speeds. All other camera settings were kept fixed.
  • We painted the scene gray.
  • We took a second series of pictures with the same parameters as before, i.e. the same illuminants, same shutter speeds, and with all remaining settings fixed.
  • From the gray scenes, we computed the ground truth.

The gray paint reflects almost neutrally; in separate experiments, we evaluated the color impression of the paint. Note that the ground truth obtained this way discards all effects of interreflections. However, for the task at hand, we assumed that interreflections play a subordinate role.
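Since the paint reflects almost neutrally, the observed RGB at each pixel of a gray-scene image corresponds to the local illuminant color up to an intensity factor. The following is a minimal sketch of this idea in Python, assuming linear (gamma-free) PNG input as provided in the dataset; it is not the exact processing used to produce the published ground truth.

import numpy as np
import imageio.v3 as iio

def groundtruth_from_gray(gray_png_path, eps=1e-6):
    # Per-pixel illuminant chromaticity from a gray-painted scene.
    # Assumption: the gray paint reflects (almost) neutrally, so each
    # pixel's RGB equals the local illuminant color up to intensity.
    img = iio.imread(gray_png_path).astype(np.float64)  # linear, no gamma
    intensity = img.sum(axis=2, keepdims=True)
    # L1-normalize each pixel so that r + g + b = 1
    return img / np.maximum(intensity, eps)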

Data Collection

This data was captured with a Canon EOS 550D and a Sigma 17-70 lens. The aperture and ISO settings are the same for all images. The scenes were lit using two Reuter's lamps and ambient light. The data is available as PNG images without gamma and without automatic white balance; only a simple debayering was applied by averaging the green channels. Upon request, we are also happy to provide the RAW files (*.cr2), which are omitted from the download link to reduce the download size.
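For illustration, the green-averaging debayering can be sketched as a half-resolution demosaicing step. The RGGB layout below is an assumption made for the example; the camera's actual CFA layout, black-level subtraction, and any further RAW handling are not reproduced here.

import numpy as np

def simple_debayer_rggb(raw):
    # Half-resolution demosaicing of a Bayer mosaic, assumed RGGB:
    # each 2x2 block [[R, G1], [G2, B]] becomes one RGB pixel with
    # G = (G1 + G2) / 2, mirroring the green-channel averaging above.
    r  = raw[0::2, 0::2].astype(np.float64)
    g1 = raw[0::2, 1::2].astype(np.float64)
    g2 = raw[1::2, 0::2].astype(np.float64)
    b  = raw[1::2, 1::2].astype(np.float64)
    return np.dstack([r, (g1 + g2) / 2.0, b])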

Filters used:

  • lee204  - LEE Filter 204 Full C.T. Orange
  • lee285  - LEE Filter 285 Three Quarter C.T. Orange
  • lee205  - LEE Filter 205 Half C.T. Orange
  • lee201  - LEE Filter 201 Full C.T. Blue
  • lee281  - LEE Filter 281 Three Quarter C.T. Blue
  • lee202  - LEE Filter 202 Half C.T. Blue

Colors used:

  • RAL 7001
  • RAL 7012
  • RAL 7035
  • RAL 7047

Scenes:

  • colorchecker (15*3 images)
  • reference (9*3 images, RAL 7001, RAL 7012, RAL 7035, RAL 7047)
  • chalk (17*3 + 17*3 images, RAL 7047)
  • fruits (17*3 + 17*3 images, RAL 7035)
  • figures (17*3 + 17*3 images, RAL 7047)
  • rabbit (17*3 + 17*3 images, RAL 7035)

"colorchecker" and "reference" have been captured for reference purposes. The benchmark dataset consists of the four remaining scenes "chalk", "fruits", "figures" and "rabbit".

Filename of the original scene:
<scene name>_<left lamp: off/on/filter>_<right lamp: off/on/filter>_<ambient light: off/on>_<exposure compensation: -1/0/+1>.png

Filename of the gray painted scene:
<scene name>_<left lamp: off/on/filter>_<right lamp: off/on/filter>_<ambient light: off/on>_<exposure compensation: -1/0/+1>_gray.png

Filename of the mask:
<scene>_mask.pbm
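A small helper for decoding these names may be convenient. The sketch below assumes the lamp fields use the filter tags listed above (e.g. lee204) when a filter is mounted, and that the exposure field is written literally as -1, 0, or +1; the function and field names are ours, not part of the dataset.

import re
from pathlib import Path

NAME_RE = re.compile(
    r"(?P<scene>[a-z]+)_"
    r"(?P<left>off|on|lee\d{3})_"
    r"(?P<right>off|on|lee\d{3})_"
    r"(?P<ambient>off|on)_"
    r"(?P<exposure>-1|0|\+1)"
    r"(?P<gray>_gray)?\.png$"
)

def parse_name(path):
    # Split a dataset filename into its components, e.g.
    # parse_name("fruits_lee204_on_off_+1_gray.png")
    m = NAME_RE.match(Path(path).name)
    if m is None:
        raise ValueError("unexpected filename: %s" % path)
    fields = m.groupdict()
    fields["gray"] = fields["gray"] is not None
    return fields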

Download

The dataset is available here (MD5: f5e32c5f3943e5ca26a0e6f5dd22d046).
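To verify the download, the MD5 sum can be computed locally and compared against the value above; the archive filename in the example is a placeholder.

import hashlib

def md5sum(path, chunk_size=1 << 20):
    # Compute the MD5 checksum of a (possibly large) file in chunks.
    h = hashlib.md5()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk_size), b""):
            h.update(block)
    return h.hexdigest()

# Expected: f5e32c5f3943e5ca26a0e6f5dd22d046
# md5sum("multi_illuminant_dataset.zip")  # placeholder filename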

The project page with evaluation results and the code to obtain these results is here.

The data was captured by Michael Bleier. If you use the dataset, please cite
Bleier, Michael; Riess, Christian; Beigpour, Shida; Eibenberger, Eva; Angelopoulou, Elli; Tröger, Tobias; Kaup, André: Color Constancy and Non-Uniform Illumination: Can Existing Algorithms Work? In: IEEE Color and Photometry in Computer Vision Workshop 2011.