Monday, November 24, 2014

Project: Comparing the Gamut of the Kindle Fire HD display and an HP laptop display

Motivation of the study
  • To learn more about color monitors
  • To learn how color monitors are compared and what comparison tests are used
Introduction 
Color monitors have the capacity to display a wide range of colors to be perceived by the eye. Trichromaticity is a property of human color perception: only three primary colors are needed to replicate any color from the visible spectrum. When we look at spectra in the visible range, we sometimes cannot perceive a difference between two stimuli; for example, two reds may be perceived by the eye as the same color because of the eye's limited sensitivity, yet in truth they have different spectra. The CIE Standard Colorimetric Observer is the set of color matching functions derived from human observers. [2]




A CIE chromaticity diagram is shown below. It shows the range of colors that a color display can make, and it is based on the standard colorimetric observer functions. These functions describe the sensitivity of our eyes to color when a screen is observed at a specific viewing angle. [2]

So how do we compute the boundaries of the CIE xy tongue?
Figure 1. CIE xy chromaticity space. Image from Wikipedia.org [4]
We start by focusing on an object, let's say object Q. To compute its coordinates we have [2]:

X_Q = K ∫ P(λ) X(λ) dλ,   Y_Q = K ∫ P(λ) Y(λ) dλ,   Z_Q = K ∫ P(λ) Z(λ) dλ

where P is the spectral power distribution of Q and X, Y, Z are the color matching functions for the red, green and blue primaries respectively. K is a normalizing factor. [2]

Having calculated X_Q, Y_Q and Z_Q, we can now compute the chromaticity coordinates by solving for [1]:

x = X_Q / (X_Q + Y_Q + Z_Q)
y = Y_Q / (X_Q + Y_Q + Z_Q)

The CIE xy chromaticity diagram is a plot of these x and y values.
When P is monochromatic, i.e. P(λ) = δ(λ − λ0) (a Dirac delta with unit power at wavelength λ0), the chromaticity points obtained as λ0 sweeps from 380 nm to 780 nm form the boundary of the CIE xy diagram, which is the Gamut of all colors observable by the human eye. [2]
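As a concrete illustration, here is a minimal Scilab sketch of this boundary computation. It assumes the color matching functions have already been loaded into a three-column matrix cmf (columns X, Y, Z sampled at the wavelengths in wl); these variable names are placeholders, not the ones from my actual script.

// Sketch: CIE xy boundary (spectral locus) from the color matching functions.
// Assumes cmf is an N x 3 matrix [X(l) Y(l) Z(l)] sampled at wavelengths wl (380-780 nm).
// With P a unit Dirac delta at wavelength l0, the integrals reduce to the CMF values
// at l0, and the normalizing factor K cancels in the x, y ratios.
N  = size(cmf, 1);
xb = zeros(N, 1);                    // x coordinate of the boundary
yb = zeros(N, 1);                    // y coordinate of the boundary
for k = 1:N
    XYZ   = cmf(k, :);               // tristimulus values of a monochromatic source
    s     = sum(XYZ);                // X + Y + Z
    xb(k) = XYZ(1) / s;
    yb(k) = XYZ(2) / s;
end
plot(xb, yb);                        // the "tongue" outline of the CIE xy diagram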


The Gamut of a color monitor is a polygon in the CIE xy chromaticity space showing the 
range of colors that can be reproduced by the device. The vertices of the gamut are computed from the primaries of the device. [3]

    For this project, I gathered spectroscopy data from my own laptop display, an HP Pavilion dm1, and my own Kindle Fire HD. These data were then used to plot the Gamut of each display in the CIE xy chromaticity space. The Gamut polygons of the two displays were then compared to the CIE chromaticity coverage of a colloidal quantum dot light-emitting device (LED) display from V. Wood and V. Bulović's paper entitled "Colloidal quantum dot light-emitting devices." [1]

Methodology
    The program that I used to calculate and plot all the data in the CIE chromaticity space is Scilab, with the IPD (Image Processing Design) toolbox and the SIVP (Scilab Image and Video Processing) toolbox added for better image-handling functions. The spectroscopy data were gathered using an Ocean Optics spectrometer. The two displays used were an HP Pavilion dm1 laptop display and a Kindle Fire HD display.
    The boundaries of the CIE xy chromaticity space were plotted using the equations introduced above. For every boundary point I assumed monochromatic laser light at wavelengths from 380 nm to 780 nm in increments of 5 nm. Here is a flowchart of how my Scilab code computes the Gamut polygon in CIE xy space. [2]

For the actual data calculation: first, I plotted the boundaries of the CIE xy chromaticity space by assuming that the spectral power distribution at each monochromatic wavelength is a Dirac delta, i.e. P(λ) = δ(λ − λ0), where λ0 runs from 380 nm to 780 nm [2]. Afterwards, I imported the data (the color matching functions and the spectroscopy data, both in CSV format). Then I used these data to compute the three-component vector ($X_V$, $Y_V$, $Z_V$) and finally the coordinates of the Gamut polygon in the CIE xy chromaticity space. I used GIMP to add the background showing the corresponding colors perceived by the eye in the CIE xy chromaticity space.
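Below is a minimal sketch of that calculation for one display, assuming the measured spectra have already been resampled onto the same wavelength grid as the color matching functions. The file names and variable names are placeholders.

// Sketch: Gamut vertices of one display from its primary spectra.
// Assumes cmf (N x 3: X, Y, Z color matching functions) and spec (N x 3: measured
// spectral power of the red, green and blue screens) share the same wavelength grid.
// The CSV file names below are placeholders for the actual exports.
cmf  = csvRead("cmf_1931.csv");
spec = csvRead("hp_display_rgb.csv");
xg = zeros(3, 1);  yg = zeros(3, 1);   // Gamut vertices, one per primary
for p = 1:3
    P  = spec(:, p);                   // spectral power distribution of primary p
    Xq = sum(P .* cmf(:, 1));          // discrete version of K * integral of P(l) X(l) dl
    Yq = sum(P .* cmf(:, 2));          // (K drops out of the x, y ratios)
    Zq = sum(P .* cmf(:, 3));
    xg(p) = Xq / (Xq + Yq + Zq);
    yg(p) = Yq / (Xq + Yq + Zq);
end
// Close the triangle and draw it on top of the CIE xy boundary
plot([xg; xg(1)], [yg; yg(1)], 'r-');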

Results and Discussion
      The color matching functions, together with the spectroscopy data of the two displays, are shown in Figures 2 and 3.
Figure 2. (Left) Standard color matching functions and (right) HP laptop display spectral power distribution of its three color primaries.


Figure 3. (Left)Standard color matching functions and (right) Kindle Fire HD display spectral power distribution of its three color primaries.
The CIE xy chromaticity coordinates and the Gamut polygon of the HP laptop display are shown in Figure 4.

Figure 4. Gamut polygon of an HP laptop display in CIE xy chromaticity space.


The Gamut polygon of the Kindle Fire HD display is shown in Figure 5. By inspection, the HP laptop display has a larger Gamut triangle area than the Kindle Fire HD. Since both were plotted in the same CIE xy chromaticity space, we can say that the HP laptop display reaches higher color saturation, and the colors perceived by the eye on this display cover a wider range than on the Kindle Fire HD.

Figure 5. Gamut polygon of a Kindle Fire HD display in CIE xy chromaticity space.
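To put a number on the "larger triangle" comparison, the area of each Gamut triangle can be computed from its (x, y) vertices with the shoelace formula. A minimal sketch, assuming the vertices are stored in 3x1 vectors like the xg, yg of the earlier sketch (the display-specific names in the commented lines are placeholders):

// Sketch: area of a Gamut triangle from its chromaticity vertices (shoelace formula).
function A = gamut_area(xg, yg)
    A = 0.5 * abs( xg(1)*(yg(2)-yg(3)) + xg(2)*(yg(3)-yg(1)) + xg(3)*(yg(1)-yg(2)) );
endfunction
// Example comparison (placeholder variable names):
// A_hp     = gamut_area(xg_hp, yg_hp);
// A_kindle = gamut_area(xg_kindle, yg_kindle);
// disp(A_hp / A_kindle);   // a ratio above 1 means the HP Gamut is larger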


Comparing these to the CIE xy chromaticity coverage of colloidal quantum dots from the paper by V. Wood and V. Bulović, we can see that their colloidal quantum dots produce purer colors as perceived by the eye, since their Gamut triangle lies closer to the boundary of the CIE xy chromaticity space.

Figure 6. Chromaticity boundaries of colloidal Qdots in CIE xy chromaticity space. Image from [1].

We can see that the electroluminescence spectrum of each colloidal quantum dot is close to a Dirac delta at a single wavelength (Figure 7). The electroluminescence axis is normalized, which means that each color produced by the colloidal quantum dots is nearly as narrow, and hence as spectrally pure, as a monochromatic laser line in the visible spectrum.

Figure 7. Normalized electroluminescence spectra of colloidal quantum dots. Image from [1].
Looking at the CIE xy chromaticity coverage of the colloidal quantum dots in Figure 6, we can see that its boundary lies very close to the boundary of the CIE xy chromaticity space itself. This means the colors we can perceive from it cover a greater range than those of a standard HDTV. The paper also compares how close the white produced by the colloidal quantum dot LED display is to sunlight, and we can see that it is very close.

Conclusion

   Based on the results obtained using the standard color matching functions and the spectroscopy data of an HP laptop display and a Kindle Fire HD:
The HP laptop display has a bigger Gamut triangle area than the Kindle Fire HD.
Colloidal quantum dot LED color monitors produce better colors as perceived by the eye, because the colors they produce and their electroluminescence data give a CIE xy chromaticity boundary that is close to the original CIE xy chromaticity diagram's own boundary.

References
[1] V. Wood and V. Bulović, "Colloidal quantum dot light-emitting devices," Retrieved from: http://www.nano-reviews.net/index.php/nano/article/view/5202/5767#F0003, 19 October 2014.
[2] J. Soriano, AP 186 manual - CIE xy Chromaticity Diagrams 2010, 2014.
[3] J. Soriano, AP 186 manual - Measuring the Gamut of Color Displays and Prints, 2014.
[4] CIE xy chromaticity space, Retrieved from: http://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/CIE1931xy_blank.svg/450px-CIE1931xy_blank.svg.png, 19 October 2014.
[5] Munsell Color Laboratory website authors, Useful color data, Retrieved from: http://www.cis.rit.edu/research/mcsl2/online/cie.php, 19 October 2014.
[6] Cvrl.org website authors, Colour matching functions, Retrieved from: http://www.cvrl.org/cmfs.htm, 19 October 2014.

Friday, October 31, 2014

Project update 3

My synthesis project on measuring display Gamuts and plotting them in the CIE xy chromaticity space has come to a halt. What to do? Let me enumerate my problems.

My power-distribution-versus-wavelength data, i.e. the spectroscopy data of my laptop display and Kindle display, has hit a bump in the road. I need it to solve for the vertices of my Gamut polygon.

Ma'am Jing suggested I use interpolation to reduce my data, and I did. It worked. But when I looked at my data and used the isnan() function in Scilab, there were slots where it returned true.

So how can I remove these NaN (not-a-number) values from my data? Should I reduce my spectroscopy data in Excel instead? Or should I use other interpolation methods like 'linear' or 'spline', since I used 'nearest' in Scilab's interp1() function?

Why should I use these methods? Will they yield fruitful results? I tried 'linear' and yes, there was an error: "Grid abscissae of dim 2 not in strict increasing order."

I think the 'nearest' method of interpolation yielded great results but it cannot plot the spectroscopy of the BLUE screen of my laptop display.
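Here is a sketch of the resampling step I have in mind, with the problems above taken into account: make the spectrometer wavelengths strictly increasing and duplicate-free (which is what the 'linear' error complains about), interpolate onto the color-matching-function grid, and zero out any remaining %nan values. All variable names are placeholders.

// Sketch: resample spectrometer data onto the CMF wavelength grid (placeholder names).
// wl_spec, p_spec : wavelengths and power from the Ocean Optics spectrometer
// wl_cmf          : wavelength grid of the color matching functions (e.g. 380:5:780)
[wl_u, ku] = unique(wl_spec);                  // sorted, strictly increasing, no duplicates
p_u = p_spec(ku);
p_resampled = interp1(wl_u, p_u, wl_cmf, "linear");
p_resampled(isnan(p_resampled)) = 0;           // samples outside the measured range: set to 0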

Here is the plot. The colors of the curves correspond to the red-screen and green-screen spectroscopy data.


I'm getting frustrated since I have to do the Kindle spectroscopy data, but once I solve this problem I can start my paper on this project.

Here are the color matching functions (left) and the spectroscopy data of my laptop display (right):
It may look like they have the same data points, but no: the spectroscopy data is a 1240x1 matrix for the red and blue screens and a 1238x1 matrix for the green screen, while the color matching functions are only 471x1 matrices.
You can see from the plots that the spectroscopy data and the color matching functions look similar. Why? Because the color matching functions are like the sensitivity of the eye to color, and they are based on standard observer data.

Tuesday, October 28, 2014

Project update 2

I superimposed the CIE xy diagram from Wikipedia.org onto my computed plot.

CIE xy chromaticity diagram from http://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/CIE1931xy_blank.svg/450px-CIE1931xy_blank.svg.png

My plotted CIE xy chromaticity diagram superimposed with the image above.



I ran into problems making the Gamut polygon.

So I asked Ma'am Jing what to do so that the spectroscopy data I gathered would have the same matrix size as the color matching functions of the CIE xy chromaticity diagram.

Sunday, October 19, 2014

Project update 1

My project in AP 186 is about knowing how good a color display is if you use quantum dots as your light-emitting devices instead of light-emitting diodes or other devices.

How can I know how good a color display is? This is where the CIE xy chromaticity coordinates come in.

So first, how does the eye perceive color? We know that the color primaries (red, green, blue) can be mixed to form a color, and such a mixture makes up a color pixel in a color display like a TV. The CIE color matching functions are like the spectral sensitivity of the eye to the color primaries, because they are derived from human observers.

To do this I need to plot a CIE xy chromaticity diagram.
So I used these equations:

X_Q = K ∫ P(λ) X(λ) dλ,   Y_Q = K ∫ P(λ) Y(λ) dλ,   Z_Q = K ∫ P(λ) Z(λ) dλ

where K is a normalizing factor, P is the power distribution of some object Q, and X, Y, Z are the color matching functions that I downloaded from the web. The power distribution that I used is a Dirac delta function for each monochromatic wavelength from 380 nm to 780 nm in increments of 5 nm, so that I can form the boundary of the CIE xy chromaticity diagram. [1]

Now you can get the CIE xy chromaticity coordinates using these equations [1]:

x = X_Q / (X_Q + Y_Q + Z_Q)

and

y = Y_Q / (X_Q + Y_Q + Z_Q)
I have already plotted the CIE color matching functions using Scilab, and they are shown below.


And I plotted the CIE xy chromaticity diagram using the equations above.



Figure 1. Computed CIE xy tongue.

Figure 2. Reference CIE xy tongue from Wikipedia.org
Now I need to measure the Gamut of color displays, so I can see whether quantum dots as a color display are better than an LED display.


References:
[1] J. Soriano, AP 186 manual - CIE xy Chromaticity Diagrams 2010, 2014.

[2]CIE 1931 color space, http://en.wikipedia.org/wiki/CIE_1931_color_space

For CIE xy tongue comparison map:
[3] http://upload.wikimedia.org/wikipedia/commons/thumb/3/3b/CIE1931xy_blank.svg/450px-CIE1931xy_blank.svg.png

For colloidal quantum dots:
[4] http://www.nano-reviews.net/index.php/nano/article/view/5202/5767#F0003

For the CIE's color matching functions:
[5] http://www.cvrl.org/cmfs.htm

Wednesday, October 1, 2014

AP 186 Activity 8 - Morphological operations part 2

Now we will apply what we did in the Morphological Operations part 1 post.

So we take an image of punched paper circles scanned on a flatbed scanner.


Now I have to divide this image into 256x256-pixel subimages, with filenames numbered in increasing order; I named them C_01.jpg and so on. The first subimage is shown:


Next, I took the histogram of this subimage so that we know what threshold to use; knowing this threshold will help us segment the image. The histogram is shown below.
The threshold that I got is 211.83, and I passed this value to the SegmentByThreshold() function in Scilab. This is the result:

Now we have to clean the image, and we have three choices of morphological operators available in the IPD toolbox: CloseImage, OpenImage, and TopHat. I chose OpenImage, which is described as: "This function applies a morphological opening filter to an image. This filter retains dark objects and removes light objects the structuring element does not fit in." This is suitable for the image. We also need a structuring element, and I used the CreateStructureElement() function to make a circle of size 11. The cleaned image is now:
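As a rough sketch of this step (not my exact script), the Scilab calls would look something like the lines below. The IPD argument orders shown here are assumptions based on how the functions are described in this post, so check the toolbox help before reusing them.

// Sketch: segment one subimage by threshold, then clean it with morphological opening.
// (IPD toolbox; argument orders are assumed.)
img  = rgb2gray(imread("C_01.jpg"));            // one 256x256 subimage, assumed RGB, flattened to gray
bw   = SegmentByThreshold(img, 211.83);         // threshold taken from the histogram
se   = CreateStructureElement('circle', 11);    // circular structuring element of size 11
bw_clean = OpenImage(bw, se);                   // opening removes light specks the SE does not fit in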

Next we have to remove the circles which are overlapped. To do that, we label each contiguous blob using SearchBlobs(); each connected blob is replaced with a unique number. Then I have to filter the blobs by size. We don't know what the sizes are, so what I did was mark all the blobs per subimage and then plot a histogram of the blob sizes from all the subimages.

From the histogram, I ignored the zero bin and then chose the interval where the counts peak, which is from 400 to 550, since that is where the peaks are. I passed this interval to the FilterBySize() function to remove the circles/cells which are overlapped. Then I used a colormap to distinguish the blobs uniquely by color.
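A minimal sketch of this labeling-and-filtering step is below. SearchBlobs() is used as described above; the (blobs, minimum area, maximum area) signature of FilterBySize() is an assumption, and the variable names are placeholders.

// Sketch: label blobs, collect their sizes, and keep only the 400-550 px blobs.
blobs  = SearchBlobs(bw_clean);        // each connected blob gets a unique integer label
nblobs = max(blobs);
sizes  = zeros(nblobs, 1);
for k = 1:nblobs
    sizes(k) = sum(blobs == k);        // area of blob k in pixels
end
// histplot(20, sizes);                // histogram of blob areas, to pick the interval
kept = FilterBySize(blobs, 400, 550);  // assumed signature: (blobs, min area, max area)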

I used these steps for all the subimages. I calculated the mean blob area for each subimage and then averaged these means. The standard deviation was computed over the set of subimage means.

Averaged mean: 532.10606
Standard deviation: 107.15587 

The standard deviation is used as the uncertainty, so the limits I will use for the size of a normal cell are 532.10606 ± 107.15587 pixels of area.

So now we proceed to isolate the abnormally sized cells using this best estimate of the size of a normal cell. We have an image of normal cells mixed with cancer cells:
We have to use the steps I enumerated earlier:

1) Segment the image to emphasize cells.
2) Use the morphological operation OpenImage() to clean the image, with a circular structuring element of size 13, since the earlier SE still gave overlapped blobs even after filtering.
3) Uniquely mark the blobs. 
4) Use FilterBySize function to filter out overlapped cells. 

We will use the pixel-area limit obtained from the subimages: 532.10606 + 107.15587 is used as the lower limit of FilterBySize(), so only the blobs/cells with area higher than 532.10606 + 107.15587 are kept, and blobs below this value are discarded.
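As an illustration of this final filtering, here is a hedged sketch that keeps only blobs whose area exceeds the 532.10606 + 107.15587 limit. It computes blob areas directly rather than relying on FilterBySize's exact calling convention, and bw_cancer_clean is a placeholder name for the cleaned, binarized cancer-cell image.

// Sketch: keep only blobs larger than the normal-cell upper limit (abnormal cells).
limit = 532.10606 + 107.15587;         // mean + standard deviation from the subimages
blobs = SearchBlobs(bw_cancer_clean);  // labeled blobs of the cleaned cancer image
big   = zeros(blobs);                  // output: only the oversized blobs survive
for k = 1:max(blobs)
    if sum(blobs == k) > limit then
        big(blobs == k) = k;           // keep the label of this abnormal cell
    end
end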

Here is the filtered and marked version of the circles-with-cancer-cells image.

Then the inverted filtered image was convolved with the original cancer-cell image so that the output looks like the original image but with the abnormal cells marked. The result is shown below:

The abnormal/cancer cells are marked, and you can see that they really are bigger than the others.

I give myself a score of 10/10 since I completed this activity and did all that is required.

I had a difficult time with the cancer cells because the output was not what I wanted. I had to change the structuring element so that it would not include the overlapped circles.

References:
[1] M. Soriano, AP 186 manual, A8 - Morphological operations, 2014.



Monday, September 29, 2014

AP 186 - A8 Morphological Operations part 1

Morphology refers to shape or structure. In image processing, classical morphological operations are treatments applied to binary images, particularly to aggregates of 1's that form a particular shape, either to improve the image for further processing or to extract information from it. All morphological operations affect the shape of the image in some way: shapes may be expanded or thinned, internal holes may be closed, and disconnected blobs may be joined. In a binary image, all pixels which are "OFF" or have a value equal to zero are considered background, and all pixels which are "ON" or 1 are foreground. [1]

Before we go to the morphological operations, we need to define the erosion and dilation of sets, since we will use them later.
The erosion operator is defined by the equation [1]:
A ⊖ B = { z | (B)_z ⊆ A }
i.e. the erosion of A by a structuring element B is the set of all points z such that B, translated by z, is contained in A. What it does is shown below [1]:


The dilation operator is defined as [1]:
A ⊕ B = { z | (B')_z ∩ A ≠ ∅ }
where B' is the reflection of B, i.e. the set of all points z such that the reflected B, translated by z, overlaps A in at least one element. What it does is shown below [1]:




I have to predict the resulting image when each of the following structuring elements is used
to a) erode and b) dilate the input images enumerated later. The hand-drawn predictions are shown below, and the structuring elements are:
1. 2×2 ones
2. 2×1 ones
3. 1×2 ones
4. Diagonal [0 1; 1 0]

The inputs: 
1) 5x5 square





2) Triangle, base 4 boxes, height 3 boxes


3) 10x10 Hollow square



4) Plus sign, one box thick, 5 boxes along each line


Now these images are simulated in Scilab, and we will see if our predictions match the simulated images. The simulated images are very small (the largest was 14 by 14 pixels), so they look blurry after resizing. The functions that I used were CreateStructureElement() to make the structuring element (shown in the images above as B), ErodeImage() to erode the images with the structuring element, and DilateImage() to dilate the images with the structuring element.
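To make the definitions concrete, here is a minimal pure-Scilab sketch of binary erosion and dilation written directly from the set definitions above. It is only an illustration of what ErodeImage() and DilateImage() compute, not the toolbox code; the structuring element is a 0/1 matrix, and its origin is taken at its top-left element for simplicity.

// Sketch: binary erosion and dilation from the set definitions (SE origin at its top-left).
function E = erode_bin(A, B)
    [ra, ca] = size(A); [rb, cb] = size(B);
    E = zeros(A);
    for i = 1:ra-rb+1
        for j = 1:ca-cb+1
            // z = (i,j) is in the erosion if B translated to z fits entirely inside A
            if and(A(i:i+rb-1, j:j+cb-1) >= B) then
                E(i, j) = 1;
            end
        end
    end
endfunction

function D = dilate_bin(A, B)
    [ra, ca] = size(A); [rb, cb] = size(B);
    D = zeros(ra+rb-1, ca+cb-1);
    for i = 1:ra
        for j = 1:ca
            // every ON pixel of A stamps a copy of B: the union of translates of B
            if A(i, j) == 1 then
                D(i:i+rb-1, j:j+cb-1) = max(D(i:i+rb-1, j:j+cb-1), B);
            end
        end
    end
endfunction

// Example: a 5x5 square eroded and dilated with a 2x2 structuring element of ones
A = zeros(9, 9);  A(3:7, 3:7) = 1;
B = ones(2, 2);
E = erode_bin(A, B);    // 4x4 block of ones
D = dilate_bin(A, B);   // 6x6 block of ones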
For the 5x5 square:
For the triangle, base 4 boxes, height 3 boxes



For the 10x10 hollow square,




For the  plus sign, one box thick, 5 boxes along each line

As you can see, every one of my predictions was right and resembles the simulated images from Scilab.



References:
[1] M. Soriano, AP 186 manual, A8 - Morphological operations, 2014.

Monday, September 22, 2014

AP 186 - Activity 7 Image Segmentation


Image segmentation is segregating part of an image using a particular region of the image as a reference. An example is the picture below, where you want to emphasize the rust-colored object. You cannot use grayscale thresholding to segregate that area, since its gray level is the same as that of areas you don't need.
Figure 1. (Left) Original image, (right) grayscale of original image.[1]

In grayscale this is easy to do. For example, given this image below:
Figure 2. Check image in grayscale.[1]

and when you use thresholding you get the segmentation of the image easily in Scilab. 
Figure 3. Segmented image of the grayscale check. [1]
Figure 4. Histogram of check image.

So what do we do to segment the image by color? We know that 3D objects, even those of a single color, show a variety of shading: the object is subject to shadows, so the same color appears at different brightness levels. We will use the normalized chromaticity coordinates. What is that?
For example, a red ball such as in Figure 5 will appear to have various shades of red from top to bottom. 
Figure 5. Simulated 3D red ball [1] .

For this reason, it is better to represent color space not by the RGB but by one that can separate brightness and chromaticity (pure color) information. One such color space is the normalized chromaticity coordinates or NCC. Per pixel, let I = R+G+B. Then the normalized chromaticity coordinates are
r = R / (R+G+B) = R / I
g = G / I
b = B / I
We note that r + g + b = 1, which implies that r, g and b can only have values between 0 and 1 and that b is dependent on r and g. Therefore, it is enough to represent chromaticity by just two coordinates, r and g. [1]

We have thus reduced color information from 3 dimensions to 2. Thus, when segmenting 3D objects it is better to first transform RGB into rgI.
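As a quick illustration, here is a minimal Scilab sketch of this RGB-to-NCC transformation. The file name and variable names are placeholders, and the small epsilon is only there to avoid dividing by zero on black pixels.

// Sketch: convert an RGB image to normalized chromaticity coordinates (r, g) and brightness I.
img = double(imread("pompoms.jpg"));        // placeholder file name
R = img(:, :, 1);  G = img(:, :, 2);  B = img(:, :, 3);
I = R + G + B + 1e-10;                      // per-pixel brightness; epsilon avoids division by zero
r = R ./ I;
g = G ./ I;                                 // b = 1 - r - g, so it is not needed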
Figure 6. Normalized chromaticity coordinates. [1]
Figure 6 shows the r-g color space. When r and g are both zero, b = 1. Therefore, the origin corresponds to blue while points corresponding to r=1 and g=1 are points of pure red and green, respectively. 

Segmentation based on color can be performed by determining the probability that a pixel belongs to a color distribution of interest. We have two methods: parametric probability distribution estimation and non-parametric probability distribution estimation. 

First, we use the method of parametric estimation. We take a picture of colorful objects from the internet, then take the part of the picture we are interested in. The region of interest I picked is the red pompom, which I cropped out.
Figure 7. Colorful yarn pompoms. [3]

Figure 8. Region of interest which is the red pompom. [3]
        We need this ROI so that we can compute its histogram. Normalizing the histogram by the number of pixels gives the probability distribution function (PDF) of the color. To tag a pixel as belonging to the region of interest or not, we find its probability of belonging to the color of the ROI. Since our space is r and g, we can use the joint probability p(r)p(g) to test the likelihood of pixel membership in the ROI. We can assume a Gaussian distribution independently along r and g; that is, from the r-g values of the cropped pixels we compute the mean and standard deviation of r and of g. The probability that a pixel with chromaticity r belongs to the ROI is then

p(r) = 1 / (σ_r √(2π)) · exp( −(r − μ_r)² / (2σ_r²) )

The same equation, with the mean and standard deviation of g, gives the probability for chromaticity g. The joint probability is the product of the probability in r and the probability in g. [1]

So first I computed the mean of r and g of the cropped picture, and then the standard deviation of r and g. I plugged these into the equation above to get two probabilities, one a function of r and the other a function of g. Taking their product gives the segmented image, which is based on red.
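Here is a hedged Scilab sketch of this parametric step. It assumes r and g for the whole image (as in the NCC sketch earlier) and vectors r_roi and g_roi holding the chromaticities of the cropped ROI pixels; all names are placeholders.

// Sketch: parametric (Gaussian) color segmentation in r-g chromaticity space.
mu_r = mean(r_roi);   sig_r = stdev(r_roi);   // statistics of the ROI chromaticities
mu_g = mean(g_roi);   sig_g = stdev(g_roi);
p_r = exp(-(r - mu_r).^2 ./ (2*sig_r^2)) / (sig_r * sqrt(2*%pi));
p_g = exp(-(g - mu_g).^2 ./ (2*sig_g^2)) / (sig_g * sqrt(2*%pi));
p_joint = p_r .* p_g;                         // joint probability per pixel
seg = p_joint / max(p_joint);                 // normalize to [0, 1] for display
// imshow(seg);                               // bright pixels = likely ROI color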

Figure 9. Segmented image based on red ROI.


Figure 10. Side by side, (left) segmented and (right) original image.

Now we will use the non-parametric estimation.

In non-parametric estimation, the histogram itself is used to tag the membership of pixels. Histogram backprojection is one such technique where based on the color histogram, a pixel location is given a value equal to its histogram value in chromaticity space. This has the advantage of faster processing because no more computations are needed, just a look-up of histogram values. [1]

 This was done by first taking the 2D histogram of the ROI, which is again Figure 8. Its 2D histogram looks like this:
Figure 11. Rotated 2D histogram of ROI (left) and normalized chromaticity coordinates (NCC) (right).

The 2D histogram was rotated because its orientation with respect to the NCC was off by -90 degrees. You can see that the white parts correspond to the red region of the NCC. Now we apply histogram backprojection to the image.
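For reference, here is a sketch of the backprojection itself on a coarse 32x32 r-g grid. The binning details and the variable names are assumptions; the point is only that each image pixel is assigned the histogram value of its (r, g) bin, a pure look-up.

// Sketch: histogram backprojection in r-g space (32x32 bins; placeholder names).
nbins = 32;
hist2d = zeros(nbins, nbins);
// Build the ROI histogram: count ROI pixels falling in each (r, g) bin
ir = int(r_roi * (nbins - 1)) + 1;
ig = int(g_roi * (nbins - 1)) + 1;
for k = 1:length(ir)
    hist2d(ir(k), ig(k)) = hist2d(ir(k), ig(k)) + 1;
end
hist2d = hist2d / sum(hist2d);               // normalize to a PDF
// Backproject: each image pixel gets the histogram value of its (r, g) bin
[nr, nc] = size(r);
backproj = zeros(nr, nc);
for i = 1:nr
    for j = 1:nc
        backproj(i, j) = hist2d(int(r(i,j)*(nbins-1)) + 1, int(g(i,j)*(nbins-1)) + 1);
    end
end
// imshow(backproj / max(backproj));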

Figure 12. Segmentation using histogram backprojection. 

Figure 13. Histogram backprojection of original image using red ROI (left) and original image (right).

We can see that the brightest ball is the red yarn pompom, which is our region of interest. You can also see that some parts of the orange yarn pompom match the histogram of the red pompom.

Comparing the two methods, I prefer histogram backprojection since it focuses on the color of the region of interest, whereas parametric estimation also segments colors that were not in the region of interest. With histogram backprojection there is only a little extra segmentation compared to the parametric method. But if you want the segmentation to have sharper edges, I recommend parametric estimation. It depends on your needs: if you want segmentation tied tightly to a specific color, choose histogram backprojection; if you want cleaner edges and a clearer segmented area, go for parametric estimation. I also note that in my runs parametric estimation produced its outputs faster, while histogram backprojection took about 45 seconds longer. I still prefer histogram backprojection.




Using another image, with both parametric estimation and non-parametric estimation (histogram backprojection):
Figure 14. Colorful yarn balls. [2]

Figure 15. Green region of interest. [2]

Figure 16. Segmented image using parametric estimation.

Figure 17. Side by side, (left) segmented using parametric estimation and (right) original image.


Figure 18. Segmentation by histogram backprojection.

Figure 19. Side by side, (left) segmented using histogram backprojection and (right) original image.

As you can see, histogram backprojection did a better job of segmenting the ROI, which in this case is the green yarn ball in the middle row. In parametric segmentation, the bright light-blue ball and the light-green yarn ball below the ROI were also included in the segmentation. This shows that non-parametric estimation is better than parametric estimation if you want only the region of interest to be segmented.


I will score myself 10/10 for doing this activity and segmenting another image. This activity was very fun since we now get to deal with colors. I had a lot of fun figuring out how to do the activity and looking for colorful pictures.




References:
[1] M. Soriano, AP 186 manual, A7 - Image Segmentation, 2014.
[2] "The Colorful White: Colorful Yarn Balls", Retrived from: http://thecolorfulwhite.blogspot.com/2012/11/colorful-yarn-balls.html
[3] "The Colorful White: Colorful Yarn PomPoms", Retrived from:  http://thecolorfulwhite.blogspot.com/2012/12/colorful-yarn-pompoms.html