Histograms are kind of a big deal when it comes to digital image processing. By definition, a histogram is a bar graph that counts the occurrences of values within certain ranges. It is widely used in statistics as well, one example being the so-called Population Pyramid, a back-to-back histogram that displays the distribution of a population across age groups.
In digital image processing, histograms take on a different role: they count the number of pixels at each tonal value or within each tonal range. For color images, histograms are usually plotted separately for each color channel – red, green, and blue. For each of these channels, the tone intensity varies from 0 to 255. Consider the picture below:
Its histogram is:
Histogram Equalization
Histogram equalization is a method that improves the contrast of an image by spreading out the most frequent intensity values across the histogram. Through equalization, areas of lower local contrast can gain a higher contrast and be better seen.
This is especially useful in medical imaging, where X-ray images can be better viewed with a higher overall contrast, and in photos that are over- or under-exposed.
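As a quick taste of what that looks like in OpenCV (we will come back to this function in the first application below), here is a minimal sketch that equalizes a still grayscale image; photo.jpg is just a placeholder file name:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    // "photo.jpg" is a placeholder - point it at any image you have at hand
    Mat image = imread("photo.jpg", IMREAD_GRAYSCALE);
    if (image.empty()) return -1;
    Mat equalized;
    equalizeHist(image, equalized);   // spread the most frequent intensities across the tonal range
    imshow("Original", image);
    imshow("Equalized", equalized);
    waitKey(0);
    return 0;
}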
Histograms in OpenCV
OpenCV has its own histogram calculation function, called calcHist. In our examples, we’ll use it to calculate the histograms of both grayscale and color images. The calcHist function signature is:
calcHist( const Mat* images,
int nimages,
const int* channels,
InputArray mask,
OutputArray hist,
int dims,
const int* histSize,
const float** ranges,
bool uniform = true,
bool accumulate = false )
Where:
- images is the source arrays, which should all have the same depth and size; each of them can have an arbitrary number of channels.
- nimages is the number of source images.
- channels is the list of channels used to compute the histogram.
- mask is an optional mask.
- hist is the output array where the histogram will be stored.
- dims is the histogram dimensionality, which should be positive and not greater than CV_MAX_DIMS.
- histSize is the array of histogram sizes in each dimension.
- ranges is the array of tonality ranges for the histogram in each dimension.
- uniform specifies whether the histogram is uniform or not.
- accumulate specifies whether the histogram is cleared at the beginning when it is allocated; if set, it is not cleared, which lets you accumulate a histogram over several batches.
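To make that concrete, here is a minimal sketched call that computes a 256-bin histogram of a single grayscale image; the file and variable names are just for illustration:
Mat grayscale = imread("photo.jpg", IMREAD_GRAYSCALE);  // placeholder file name
Mat hist;
int histSize = 256;                    // one bucket per tonal value
float range[] = { 0, 256 };            // upper boundary is exclusive
const float* histRange = { range };
calcHist(&grayscale, 1, 0, Mat(), hist, 1, &histSize, &histRange, true, false);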
Application 1: Equalizing Histograms
Our first example will be equalizing the histogram of the image captured by the computer’s camera. For this one, we will use, for the first time, the VideoCapture class in OpenCV, which allows us to capture images from our webcam.
So let’s get to it. First of all, we will introduce the variables our problem requires.
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap;
Mat hist, histeq;
int nbins = 64;
float range[] = { 0, 256 };
const float* histrange = { range };
bool uniform = true;
bool accumulate = false;
...
}
hist and histeq will store the histogram and the equalized histogram of our captured image, respectively. nbins is the number of buckets in our histogram. In our case, the histogram will contain 64 buckets, and each bucket will count the pixels with tonalities n through n + 3. For example, the first histogram bar will count pixels with the tonalities 0, 1, 2, and 3, the second bar will count pixels with the tonalities 4, 5, 6, and 7, and so on. histrange stores the tonality range we want to capture for a dimension – 0 through 256 in our case, keeping in mind that the upper boundary is exclusive, so 0 through 255 in practice. uniform and accumulate are just flags that we set to their respective default values.
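Just to make the bucketing explicit, here is how a pixel value maps to a bucket with these settings (purely illustrative; calcHist does this for us):
int binWidth = 256 / nbins;        // 64 buckets over the range [0, 256) gives 4 tonalities per bucket
int tonality = 150;                // any pixel value from 0 to 255
int bucket = tonality / binWidth;  // 150 falls into bucket 37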
Then, we initialize our capture element and some of the variables we declared:
...
cap.open(0);
if (!cap.isOpened())
{
cout << "[ERROR] Camera Unavailable...";
return -1;
}
cap.set(CAP_PROP_FRAME_WIDTH, 640);
cap.set(CAP_PROP_FRAME_HEIGHT, 480);
int width = cap.get(CAP_PROP_FRAME_WIDTH);
int height = cap.get(CAP_PROP_FRAME_HEIGHT);
cout << "width = " << width << endl;
cout << "height = " << height << endl;
int histw = nbins, histh = nbins / 2;
Mat histDisplayImg(histh, histw, CV_8UC1, Scalar(0, 0, 0));
Mat histeqDisplayImg(histh, histw, CV_8UC1, Scalar(0, 0, 0));
...
The open method in VideoCapture requires an index that points to the camera hardware we want to capture from. Since my computer only has one camera attached to it, I will go with the first one (0). If you have several cameras, you may want to figure out which one to use and what index it has.
If it is not possible to open a camera at that position, we display an error message and exit the program.
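If you are not sure which index is which, a quick way to find out is to probe the first few indices and see which ones open. This is just a sketch; it arbitrarily assumes you have at most four cameras:
for (int i = 0; i < 4; i++)   // assumption: probe indices 0 through 3
{
    VideoCapture test(i);
    if (test.isOpened())
        cout << "Camera found at index " << i << endl;
}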
Then we set a width and height for our capture frame – 640 by 480, in our case – and display that information to the user. We will also display the histogram to the user on a little embedded graph on top of our captured image and, for that, we will use histDisplayImg and histeqDisplayImg, which will store the displayed histogram for the original capture and the equalized capture, respectively. These images will be 64 by 32 pixels. CV_8UC1 defines that the matrix will store 8-bit unsigned values in a single channel.
Then we start our loop to capture images from our camera.
The image and equalized Mat objects will store the actual images: image holds the frame captured by the camera and equalized holds its equalized version. key will capture key presses just to allow us to exit our infinite loop.
Inside the loop, there are lots of things happening. First, we capture the image and store it in image. The cool thing about the VideoCapture class is that it implements the >> operator, so grabbing a frame is really quick and easy.
Then we convert the color image that comes from the camera into a grayscale one with the cvtColor function. The COLOR_BGR2GRAY flag explicitly specifies that we want to convert from BGR (OpenCV standardized on BGR instead of RGB, which is no big deal, just keep in mind that the channel order is reversed) to grayscale. Then we equalize the histogram by calling the function equalizeHist, which only takes the source and the destination images as parameters.
...
Mat image, equalized;
int key;
while (1)
{
cap >> image;
cvtColor(image, image, COLOR_BGR2GRAY);
equalizeHist(image, equalized);
...
Then we calculate the histograms of both the original and the equalized images using calcHist, and normalize both histograms with the normalize function so that they fit inside the little box we will display them on, which is 32 pixels high.
...
calcHist(&image, 1, 0, Mat(), hist, 1, &nbins, &histrange, uniform, accumulate);
calcHist(&equalized, 1, 0, Mat(), histeq, 1, &nbins, &histrange, uniform, accumulate);
normalize(hist, hist, 0, histDisplayImg.rows, NORM_MINMAX, -1, Mat());
normalize(histeq, histeq, 0, histeqDisplayImg.rows, NORM_MINMAX, -1, Mat());
...
We will then set all of the values inside our little histogram display box to 0 with the setTo function. By doing that, we make the background of the graph effectively black.
...
histDisplayImg.setTo(Scalar(0));
histeqDisplayImg.setTo(Scalar(0));
...
At this point, we can create our graphs by iterating over each bar of the histogram. Keep in mind we are not actually calculating the histogram here, since we have already done that; we are simply taking the values we calculated and plotting a bar graph with them. So, for each of the 64 bars, we draw a vertical line in each of the histogram displays – the original and the equalized – that starts at position (i, histh), the bottom of column i (remember that the point (0, 0) is the upper-left corner), and ends at height histh minus the histogram value at i. We use the function cvRound to convert the float value returned by at into an int.
...
for (int i = 0; i < nbins; i++)
{
line(histDisplayImg,
Point(i, histh),
Point(i, histh - cvRound(hist.at<float>(i))),
Scalar(255, 255, 255), 1, 8, 0);
line(histeqDisplayImg,
Point(i, histh),
Point(i, histh - cvRound(histeq.at<float>(i))),
Scalar(255, 255, 255), 1, 8, 0);
}
...
After building our histogram displays, we copy them into both the original and the equalized images so they are displayed “on top” of them, in the upper-left corner. To achieve that, we use the copyTo function and explicitly define the rectangle they will be copied to, which, in this case, sits at position (0, 0) with width nbins and height histh.
...
histDisplayImg.copyTo(image(Rect(0, 0, nbins, histh)));
histeqDisplayImg.copyTo(equalized(Rect(0, 0, nbins, histh)));
...
Then we show both images in two separate windows and listen for a keypress. If we press the escape key at any moment, we quit the program.
...
imshow("Not Equalized", image);
imshow("Equalized", equalized);
key = waitKey(30);
if (key == 27) break;
}
return 0;
}
Notice how the second histogram is much more evenly distributed than the first one. That is what histogram equalization achieves in practice.
Application 2: Detecting Movement
Another cool application of histograms is movement detection. From our first example, you can notice how the histogram changes whenever a new element is added to the picture. If only there were a way to calculate the difference between the histograms of two versions of an image, just to see how much it changed.
But there is! And there is even an OpenCV function just for that. It’s called compareHist. We won’t get into the deep details of this function; just know that it takes two histograms and a comparison method as parameters. OpenCV, as of version 4.4.0, supports several comparison methods. You can find their details and mathematical models in the OpenCV documentation on the available comparison methods.
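Just to illustrate the call shape, the sketch below assumes histA and histB are hypothetical names for two histograms you have already computed with calcHist, and compares them with two of the available methods:
// histA and histB: histograms computed earlier with calcHist (hypothetical names)
double correlation = compareHist(histA, histB, HISTCMP_CORREL);      // 1 means identical histograms
double chiSquareAlt = compareHist(histA, histB, HISTCMP_CHISQR_ALT); // 0 means identical histograms
cout << correlation << " " << chiSquareAlt << endl;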
The code itself changes very little compared to the first application. In this second application, though, we will deal with color images instead of grayscale. Because of that, we need a way to separate the three color channels – red, green, and blue – so that we can calculate their histograms separately. Since our application will be pretty simple, we will only calculate the histogram for the red channel. You can calculate all three of them, though; the process is the same, as sketched below.
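Here is roughly what all three channels would look like; split and the surrounding variables are explained in the walkthrough that follows, and histB, histG, and histR are hypothetical names:
// histB, histG, histR are hypothetical names for the per-channel histograms
vector<Mat> planes;
split(image, planes);  // BGR order: planes[0] = blue, planes[1] = green, planes[2] = red
Mat histB, histG, histR;
calcHist(&planes[0], 1, 0, Mat(), histB, 1, &nbins, &histrange, uniform, accumulate);
calcHist(&planes[1], 1, 0, Mat(), histG, 1, &nbins, &histrange, uniform, accumulate);
calcHist(&planes[2], 1, 0, Mat(), histR, 1, &nbins, &histrange, uniform, accumulate);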
Also, since we are comparing two histograms, we will do two captures per loop cycle. So, whenever there is movement, we will be able to detect it just by comparing these two frames.
This is what our code looks like before we enter the loop:
#include <iostream>
#include <opencv2/opencv.hpp>
using namespace std;
using namespace cv;
int main()
{
VideoCapture cap;
vector<Mat> planes;
Mat histR1, histR2;
int nbins = 64;
float range[] = { 0, 256 };
const float* histrange = { range };
bool uniform = true;
bool accumulate = false;
cap.open(0);
if (!cap.isOpened())
{
cout << "[ERROR] Camera Unavailable...";
return -1;
}
cap.set(CAP_PROP_FRAME_WIDTH, 640);
cap.set(CAP_PROP_FRAME_HEIGHT, 480);
int width = cap.get(CAP_PROP_FRAME_WIDTH);
int height = cap.get(CAP_PROP_FRAME_HEIGHT);
cout << "width = " << width << endl;
cout << "height = " << height << endl;
int histw = nbins, histh = nbins / 2;
Mat histImgR(histh, histw, CV_8UC3, Scalar(0, 0, 0));
Mat image;
int key;
...
}
The only new thing we see so far is the planes variable, which is where we will store the three color channels separately. The rest is pretty much like the first example.
Once inside the loop, we will capture the first image, split its channels into the planes variable using split, and calculate and normalize its histogram using calcHist and normalize. Then we do the same with the second capture.
...
while (1)
{
cap >> image;
split(image, planes);
calcHist(&planes[2], 1, 0, Mat(), histR1, 1, &nbins, &histrange, uniform, accumulate);
normalize(histR1, histR1, 0, histImgR.rows, NORM_MINMAX, -1, Mat());
cap >> image;
split(image, planes);
calcHist(&planes[2], 1, 0, Mat(), histR2, 1, &nbins, &histrange, uniform, accumulate);
normalize(histR2, histR2, 0, histImgR.rows, NORM_MINMAX, -1, Mat());
...
We then build the histogram display object and embed it into the final image using the same strategy as in the previous example. Only this time, we draw our lines in red to match the channel color we’re displaying. Then we copy that into the final image.
...
histImgR.setTo(Scalar(0));
for (int i = 0; i < nbins; i++)
{
line(histImgR,
Point(i, histh),
Point(i, histh - cvRound(histR2.at<float>(i))),
Scalar(0, 0, 255), 1, 8, 0);
}
histImgR.copyTo(image(Rect(0, 0, nbins, histh)));
...
Then we compare the first histogram with the second. When we calculated the histograms, we stored them in histR1 and histR2 after the first and second capture, respectively. So we just need to call compareHist and we’re good to go.
...
double comparison = compareHist(histR1, histR2, HISTCMP_CHISQR_ALT);
cout << comparison << endl;
if (comparison > 60)
{
circle(image, Point(width / 2, height / 2), 50, Scalar(0, 0, 255), 5, 8, 0);
}
...
I chose the comparison method empirically, which means I just tested all of them and picked the one that varied the most whenever there was movement in the image. I then observed how large the change had to be to indicate significant movement, which is where the 60 comes from. Whenever the comparison exceeds that threshold, we display a red circle in the middle of the image.
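If you want to repeat that experiment yourself, a sketch like the one below prints, on each cycle, the value every method produces for the same pair of histograms, so you can watch which one reacts most strongly to movement. The list of methods here is only a sample, not an exhaustive one:
// a sample of comparison methods, not the full list
int methods[] = { HISTCMP_CORREL, HISTCMP_CHISQR, HISTCMP_INTERSECT,
                  HISTCMP_BHATTACHARYYA, HISTCMP_CHISQR_ALT };
for (int m : methods)
{
    cout << "method " << m << ": " << compareHist(histR1, histR2, m) << endl;
}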
Finally, we wrap up the loop by actually displaying the image and listening for the escape key.
...
imshow("Motion Detector", image);
key = waitKey(30);
if (key == 27) break;
}
return 0;
}
And this is the final product:
Final Thoughts
Though not many of them. I hope this has been useful.
Make sure to check out the previous posts in the series.
And as I have already quite awkwardly gestured in the previous GIF: Bye! 🙂