Color constancy

Introduction

Color constancy is the ability of the human visual system to keep the perceived color of objects relatively constant under varying illumination conditions. While human vision color constancy (HVCC) and computer vision, i.e. computational color constancy (CVCC), share some similarities, they also differ significantly. This page contains material and resources related to computational color constancy. Achieving computational color constancy requires first estimating the scene illumination and then performing chromatic adaptation, which adjusts the scene colors so that they look as they would under a desired illumination (usually daylight). If the scene illumination is assumed to be uniform, a single global illumination can be estimated for the whole scene; otherwise the illumination has to be estimated locally.
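
As an illustration of the chromatic adaptation step, once a global illuminant estimate has been obtained by any method, the scene colors are commonly corrected with a simple von Kries-like diagonal transform. The following MATLAB sketch assumes a hypothetical 16-bit linear PNG and a made-up illuminant estimate; it is only meant to show the idea and is not taken from the code offered below.

    % Von Kries-style diagonal correction (illustrative sketch only).
    I = double(imread('1.png')) / 65535;     % hypothetical 16-bit linear RGB image in [0, 1]
    e = [0.55 0.35 0.10];                    % hypothetical global illuminant estimate (R, G, B)
    d = e(2) ./ e;                           % diagonal gains that preserve the green channel
    C = zeros(size(I));
    for c = 1:3
        C(:,:,c) = min(I(:,:,c) * d(c), 1);  % apply the gains and clip to [0, 1]
    end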

Here we offer the source code of our illumination estimation, tone mapping, and brightness adjustment methods. The code available here is research code and is therefore only of prototype quality.

Links to the papers that describe these methods are given together with the brief descriptions that follow. Pre-print versions of the papers are also available at the bottom of this page.

Additionally, here you can download the Cube and Cube+ benchmark datasets for illumination estimation.

The Cube+ dataset


The Cube+ dataset is an extension of the Cube dataset described below. It contains all 1365 images of the Cube dataset and 342 additional images. The new images include indoor images as well as nighttime outdoor images. The main reason for extending the dataset was to add more diversity to the ground-truth illuminations and thus make the extended dataset more challenging than the original Cube dataset. The number of new illuminations was chosen so that the overall distribution of illuminations in the extended dataset resembles the one found in the NUS datasets.

The calibration object placed in the image scenes and the camera used to take the images are the same as those used to create the Cube dataset. The new images were taken and their ground-truth illuminations determined by following the same methodology that was used when creating the Cube dataset. For black level subtraction and clipped pixel removal, the same instructions that were given for the Cube dataset apply here.

The Cube+ dataset includes raw CR2 images, minimally processed PNG images, and JPG images. The main part consists of the PNG images, which were obtained from the raw images by using dcraw with the options -D -4 -T and then applying only simple subsampling for debayering.
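
For clarity, simple subsampling debayering here means that every 2x2 block of the Bayer mosaic is collapsed into a single RGB pixel by directly reading its red, blue, and one of its green values, halving the resolution instead of interpolating. The following MATLAB sketch assumes an RGGB pattern and a hypothetical file name; the actual sensor layout and processing may differ.

    % Simple subsampling debayering (sketch); assumes an RGGB Bayer pattern.
    M = double(imread('mosaic.tiff'));          % hypothetical dcraw -D -4 -T output
    M = M(1:2*floor(end/2), 1:2*floor(end/2));  % trim to even dimensions
    R = M(1:2:end, 1:2:end);                    % red sites
    G = M(1:2:end, 2:2:end);                    % one green site per block
    B = M(2:2:end, 2:2:end);                    % blue sites
    I = cat(3, R, G, B);                        % half-resolution RGB image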

You can download the Cube+ dataset here. The file names are not zero-padded, which may cause confusion with the image ordering. The first line in the ground-truth file(s) corresponds to the image stored in 1.png, the second line corresponds to the image in 2.png, etc. For any questions do not hesitate to send an e-mail to nikola.banic@fer.hr.

The Cube dataset


The Cube dataset contains 1365 exclusively outdoor images taken with a Canon EOS 550D camera in parts of Croatia, Slovenia, and Austria during various seasons. The ordering of the images with respect to their creation time has been shuffled. In the lower right corner of each image the SpyderCube calibration object is placed. Its two neutral 18% gray faces were used to determine the ground-truth illumination for each image. Due to the angle between these two faces, for images with two illuminations, e.g. one in the shadow and one under direct sunlight, it was possible to recover both of them simultaneously and both are provided for each image. In all dataset images with two distinct illuminations, one of them is always dominant, so that the uniform illumination assumption effectively remains valid. To correctly identify the dominant illumination, the two possible chromatically adapted versions of each image were checked manually and, after this had been done for all images, the final ground-truth illumination was created.

The black level, i.e. the intensity that has to be subtracted from all images in order to use them properly, equals 2048. To determine the maximum allowed intensity values of non-clipped pixels in the dataset images, histograms of intensities for various images were inspected. If m is the maximum intensity for a given dataset image in any of its channels, then the best practice is to discard all image pixels that have a channel intensity greater than or equal to m-2. Finally, before an image from the dataset is used to test the accuracy of an illumination estimation method, the calibration object has to be masked out to prevent a biased influence. A simple way to do this is to mask out the lower right rectangle starting at row 1050 and column 2050. A MATLAB script for all these operations is available at the bottom of this page.
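
The MATLAB script mentioned above is the reference implementation; purely as an illustration of the three described operations, a minimal sketch could look as follows (the file name and the way the invalid pixels are handled afterwards are up to the user).

    % Cube dataset preprocessing sketch (see the provided MATLAB script for the full version).
    I = double(imread('1.png'));              % image from the dataset
    m = max(I(:));                            % maximum intensity in any of the channels
    clipped = any(I >= m - 2, 3);             % pixels to be discarded as clipped
    I = max(I - 2048, 0);                     % subtract the black level
    mask = false(size(I, 1), size(I, 2));
    mask(1050:end, 2050:end) = true;          % lower right rectangle with the calibration object
    invalid = clipped | mask;                 % pixels that must not influence the estimation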

The Cube dataset includes raw CR2 images, minimally processed PNG images, and JPG images. The main part consists of the PNG images, which were obtained from the raw images by using dcraw with the options -D -4 -T and then applying only simple subsampling for debayering.

You can download the Cube dataset here. The file names are not zero-padded, which may cause confusion with the image ordering. The first line in the ground-truth file(s) corresponds to the image stored in 1.png, the second line corresponds to the image in 2.png, etc. For any questions do not hesitate to send an e-mail to nikola.banic@fer.hr.

CroP: Color Constancy Benchmark Dataset Generator

The Croatian Paper (CroP) is a simulator that, for a given camera sensor, enables the generation of any number of realistic raw images taken in a subset of the real world, namely images of printed photographs. Datasets with such images share many positive features with other existing real-world datasets, while some of the negative features are completely eliminated. CroP is described in the paper "CroP: Color Constancy Benchmark Dataset Generator", which is currently available at arXiv. The source code and the data for generation are available here. For any questions do not hesitate to send an e-mail to nikola.banic@fer.hr.

Misuse of the Color Checker dataset

The Color Checker dataset has a long history of technically incorrect usage, which has resulted in several versions of its experimental and ground-truth illumination data being put in circulation. While there are publications that allegedly solve this problem, they actually only introduce new problems and cause additional confusion. Other problems with this dataset include the usage of multiple sensors instead of only one, violation of the uniform illumination assumption on many occasions, and unclear data splitting. Despite these technical shortcomings, the Color Checker dataset is still being used in various forms and many reviewers pressure authors to nevertheless use this dataset in their experiments. To highlight all these justified grievances and raise awareness of them, they were all put in one place in the paper "The Past and the Present of the Color Checker Dataset Misuse", which is currently available at arXiv.

Color Beaver

The Color Beaver is a simple but efficient extension of any illumination estimation method that, under certain conditions, alters the results of the underlying method. It is based on the observation that Canon's cameras limit their illumination estimations so that they do not fall outside of a polygon similar to a parallelogram. The Color Beaver uses a simple genetic algorithm to learn a new bounding polygon for a specific camera sensor and limits the illumination estimations of any underlying method to a certain area in the chromaticity space, which produces more accurate estimations than the camera's white balancing system. It is explained in more detail in the VISAPP 2019 paper "Color Beaver: Bounding Illumination Estimations for Higher Accuracy", which is available here.
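
As a rough illustration of the bounding step only (the learning of the polygon is omitted), an estimation that falls outside a given polygon in the rg-chromaticity plane can be pulled back into it, here simply by snapping to the closest polygon vertex. This is a simplified sketch with a made-up polygon, not the exact procedure from the paper.

    % Simplified bounding sketch (not the exact Color Beaver procedure).
    e = [0.40 0.30 0.30];                              % hypothetical illumination estimation (R, G, B)
    rg = e(1:2) / sum(e);                              % its rg-chromaticity
    P = [0.25 0.35; 0.45 0.35; 0.50 0.30; 0.30 0.28];  % made-up bounding polygon vertices
    if ~inpolygon(rg(1), rg(2), P(:, 1), P(:, 2))
        [~, k] = min(sum((P - rg).^2, 2));             % closest polygon vertex
        rg = P(k, :);                                  % clamp the estimation to the polygon
    end
    e = [rg, 1 - sum(rg)];                             % back to full chromaticity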

Blue shift assumption

When a statistics-based illumination estimation method has to be used on a single given image with no other images from the same sensor available, it can be hard to automatically determine which parameter values could result in the highest accuracy. The blue shift assumption is an empirical assumption stating that, out of the illumination estimations obtained by the method for several parameter values, the one with the second smallest red chromaticity should be chosen. It is explained in more detail in the VISAPP 2019 paper "Blue Shift Assumption: Improving Illumination Estimation Accuracy for Single Image from Unknown Source", which is available here.
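
A small sketch of the idea, with a Minkowski-norm (shades-of-gray-like) estimator standing in for the underlying statistics-based method; the estimator, the parameter values, and the file name are only examples.

    % Blue shift assumption sketch with a Minkowski-norm estimator as the example method.
    I = double(imread('image.png'));          % hypothetical linear RGB image
    v = reshape(I, [], 3);                    % pixels as rows, channels as columns
    ps = [1 2 3 4 5 6];                       % example parameter (norm) values
    ests = zeros(numel(ps), 3);
    for i = 1:numel(ps)
        e = mean(v .^ ps(i)) .^ (1 / ps(i));  % Minkowski-norm illumination estimation
        ests(i, :) = e / sum(e);              % store as chromaticity
    end
    [~, order] = sort(ests(:, 1));            % sort by the red chromaticity
    chosen = ests(order(2), :);               % keep the second smallest red chromaticity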

Green stability assumption

Although in most illumination estimation research statistics-based methods are treated as if they require no learning, they do have parameters whose tuning affects the methods' accuracy. If no calibrated dataset with known ground-truth illuminations is available, the tuning cannot be done by minimizing the angular error. The green stability assumption is a heuristic that enables the tuning of parameter values in cases where only a non-calibrated dataset is available. It chooses the parameter values that, for a given dataset, minimize the standard deviation of the green chromaticity components of the illumination estimations on the dataset's images. The experimental results show that this gives high accuracy. The code for recreating the numerical results from the arXiv pre-print is available at the bottom of the page.
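
A sketch of the selection criterion, again with a Minkowski-norm estimator standing in for the method being tuned and with hypothetical file paths; only the criterion itself, the standard deviation of the green chromaticities, is taken from the assumption.

    % Green stability assumption sketch: pick the parameter value whose estimations
    % have the most stable green chromaticity over the non-calibrated dataset.
    files = dir('dataset/*.png');             % hypothetical non-calibrated dataset
    ps = [1 2 3 4 5 6];                       % candidate parameter values
    g = zeros(numel(files), numel(ps));
    for i = 1:numel(files)
        I = double(imread(fullfile(files(i).folder, files(i).name)));
        v = reshape(I, [], 3);
        for j = 1:numel(ps)
            e = mean(v .^ ps(j)) .^ (1 / ps(j));
            g(i, j) = e(2) / sum(e);          % green chromaticity of the estimation
        end
    end
    [~, best] = min(std(g));                  % parameter with the smallest deviation
    p = ps(best);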

Unsupervised learning for color constancy

Creating calibrated datasets for illumination estimation methods is a time-consuming process that involves a significant amount of manual work and is preferably, but not necessarily, done for each sensor individually. This allows state-of-the-art learning-based illumination estimation methods to fine-tune their parameter values and to achieve high accuracy. However, if the ground truth is not available, the learning-based methods cannot be easily trained. A solution is to use some kind of unsupervised learning, one example being the Color Tiger method proposed in our VISAPP 2018 paper. Its extended version is publicly available as an arXiv pre-print and it also describes the Color Bengal Tiger method for unsupervised learning for inter-camera color constancy. The paper also describes the Cube dataset. Additionally, at the bottom of this page, in the part with the source code, you can find the MATLAB code for the reproduction of all results from the paper.

Flash and Storm

Flash and Storm are tone mapping operators based on an adjusted version of the Naka-Rushton equation; Flash is a global and Storm is a local tone mapping operator. Both of them are designed to be hardware-friendly and to have a low complexity while simultaneously producing high-quality results. They are explained in detail in the VISAPP 2018 paper "Flash and Storm: Fast and Highly Practical Tone Mapping based on Naka-Rushton Equation", which is available here.
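
For reference, the basic (unadjusted) Naka-Rushton equation maps a luminance L to L / (L + sigma), where sigma acts as the adaptation level. The following global MATLAB sketch uses the mean luminance as sigma and a hypothetical input file; it illustrates the underlying equation only and is not the adjusted version used by Flash or Storm.

    % Global Naka-Rushton-style tone mapping sketch (not the adjusted Flash/Storm version).
    H = double(imread('input.png'));          % hypothetical linear input image
    L = mean(H, 3);                           % simple luminance approximation
    sigma = mean(L(:));                       % adaptation level, here the mean luminance
    Lt = L ./ (L + sigma);                    % basic Naka-Rushton compression
    T = H .* (Lt ./ max(L, eps));             % rescale the color channels accordingly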

Puma

The Puma algorithm is a tone mapping operator based on an adjusted version of the Naka-Rushton equation and the Smart Light Random Sprays Retinex algorithm. It is explained in detail in the EUSIPCO 2016 paper "Puma: A High-Quality Retinex-Based Tone Mapping Operator", which is available here.

Smart Light Random Memory Sprays Retinex

Smart Light Random Memory Sprays Retinex is an image enhancement method for local brightness adjustment and color correction. It is relatively fast and produces images of high quality. It is based on the Light Random Sprays Retinex algorithm, but many of its flaws and weaknesses are fixed. The paper which describes it has been accepted for publication in the Journal of the Optical Society of America A. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Firefly

The Firefly is a brightness adjustment algorithm designed to be very fast, suitable for hardware implementation, and to produce high-quality results. The paper which describes it has been accepted for publication as part of ICIP 2015. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Color Ant

The Color Ant algorithm is a relatively simple learning-based algorithm that uses the k-NN algorithm at its core to perform illumination estimation. The paper "Using the red chromaticity for illumination estimation" which describes it has been accepted for publication as part of ISPA 2015.

Smart Color Cat

The Smart Color Cat algorithm is a learning-based algorithm that represents an upgrade of the Color Cat algorithm. It uses simpler features and can be trained and tested significantly faster. The paper "Using the red chromaticity for illumination estimation" which describes it has been accepted for publication as part of ISPA 2015.

Color Dog

The Color Dog algorithm is a learning-based algorithm that alters the illumination estimations of other methods by using the information available from the illumination chromaticity distribution. Even though it is relatively simple, the method outperforms most other methods on any kind of images. It is explained in detail in the VISAPP 2015 paper "Color Dog: Guiding the Global Illumination Estimation to Better Accuracy", which is about to be published.
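
A heavily simplified reading of such a correction: a few illumination centers are learned from the chromaticity distribution of training data and the underlying method's estimation is mapped onto one of them. This sketch uses made-up data and plain nearest-center snapping; the actual learning and selection steps of Color Dog differ.

    % Simplified center-snapping sketch (not the exact Color Dog procedure).
    G = rand(200, 2);                          % placeholder training rg-chromaticities
    k = 2;                                     % example number of illumination centers
    [~, centers] = kmeans(G, k);               % learn the centers (Statistics Toolbox)
    e = [0.45 0.32];                           % rg-chromaticity of some method's estimation
    [~, idx] = min(sum((centers - e).^2, 2));  % closest learned center
    e = centers(idx, :);                       % use it instead of the original estimation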

Color Cat

The Color Cat algorithm is a learning-based algorithm that uses the color distribution of an image to perform illumination estimation. It is explained in detail in the IEEE Signal Processing Letters paper "Color Cat: Remembering Colors for Illumination Estimation", which can be downloaded here.

Color Badger

The Color Badger algorithm is an extension and improvement of the Light Random Sprays Retinex (LRSR) algorithm intended to overcome LRSR's weaknesses in tone mapping. Additionally, it can also be used as a local white balancing algorithm. It is explained in detail in the ICISP 2014 paper "Color Badger: A Novel Retinex-based Local Tone Mapping Operator", which can be downloaded here. The OpenCV C++ implementation of the TMQI that was used in the testing is also available.

Color Rabbit

The Color Rabbit algorithm is essentially a modification and a more accurate upgrade of the Color Sparrow algorithm. It is explained in detail in the DSP 2014 paper "Color Rabbit: Guiding the Distance of Local Maximums in Illumination Estimation", which can be downloaded here.

Improved White Patch

The Improved White Patch algorithm is an improvement of the White Patch algorithm by means of image pixel sampling. It achieves greater accuracy while retaining the speed of the White Patch algorithm. The paper in which it is explained in detail was presented at ICIP 2014 and can be downloaded here.
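
One plausible reading of the sampling idea, shown only as a sketch (the exact sampling scheme in the paper may differ): instead of taking the per-channel maximum over all pixels at once, per-channel maxima of several random pixel samples are computed and averaged.

    % White Patch with random pixel sampling (sketch; details may differ from the paper).
    I = double(imread('image.png'));          % hypothetical linear RGB image
    v = reshape(I, [], 3);                    % pixels as rows, channels as columns
    N = 10;                                   % example number of random samples
    n = 1000;                                 % example number of pixels per sample
    maxima = zeros(N, 3);
    for i = 1:N
        idx = randi(size(v, 1), n, 1);        % random pixel indices (with repetition)
        maxima(i, :) = max(v(idx, :));        % per-channel maximum of the sample
    end
    e = mean(maxima);                         % averaged maxima as the illumination estimation
    e = e / norm(e);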

Light Random Sprays Retinex

The Light Random Sprays Retinex algorithm is an extension and improvement of the Random Sprays Retinex algorithm, which allows a much lower execution time and higher resulting image quality. It is explained in detail in the IEEE Signal Processing Letters paper "Light Random Sprays Retinex: Exploiting the Noisy Illumination Estimation", which can be downloaded here. Both the C++ source code and an HTML + JavaScript demonstration are available at the bottom of the page.

Color Sparrow

The Color Sparrow algorithm is essentially a derivative of the Random Sprays Retinex algorithm, but it is nevertheless as fast as other well-known global illumination estimation algorithms. It is explained in detail in the 2nd Croatian Computer Vision Workshop paper "Using the Random Sprays Retinex algorithm for global illumination estimation", which can be downloaded here.


Publications

Source code