Tuesday, April 10, 2012

Color-Image Segmentation

One of the biggest hurdles I'm encountering in my senior design project right now is separating the person from everything else. The paper I read today discusses that very problem in excruciating detail. Let me tell you, it's not as easy as I thought, but if my current idea fails, I might just fall back on some of the concepts this paper discusses. The paper looks at separating objects from each other by approaching the problem from a classification and clustering point of view. In other words, it takes the values of the pixels and groups them together based on their similarities.

By itself, looking at color similarity can be very flawed. The same color can appear in several different parts of the picture. Also, under different lighting, the same surface can take on colors that show up elsewhere in the scene, so the same picture can produce different partitioning results depending on the lighting. To address this problem, the researchers in the paper look not only at the RGB color values, but also at the location of each value within the picture. The assumption is that if similar colors exist in close proximity, they likely belong to the same object. You also have to consider noise. With a single pixel being so tiny, it's not uncommon for one pixel to be an abnormal color compared to its neighbors. To guard against this, they find "core" pixels: they look at the neighboring pixels and check whether a minimum number of them are similar enough to the chosen one. You then take those core pixels and cluster them based on both proximity and color value, and that determines what is and isn't the same object.
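To make that concrete for myself, here's a rough sketch of how I picture the core-pixel test working. This is my own simplified version, not the paper's actual algorithm; the 3x3 neighborhood, the RGB distance threshold, and the minimum-neighbor count are all placeholder values I picked, not numbers from the paper.

```python
import numpy as np

def find_core_pixels(img, color_thresh=20.0, min_similar=6):
    """Mark 'core' pixels: pixels whose 3x3 neighborhood contains at least
    min_similar neighbors within color_thresh (Euclidean RGB distance).
    The thresholds here are made-up placeholders, not values from the paper."""
    h, w, _ = img.shape
    img = img.astype(np.float64)
    core = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y, x]
            # Gather the 3x3 neighborhood and measure color distance to the center
            neighborhood = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            dists = np.linalg.norm(neighborhood - center, axis=1)
            # Subtract 1 so the center pixel doesn't count as its own neighbor
            similar = np.count_nonzero(dists < color_thresh) - 1
            core[y, x] = similar >= min_similar
    return core
```

From there, the idea (as I understand it) would be to cluster only the surviving core pixels on a combined feature of position and color, something like a 5-dimensional vector of (x, y, R, G, B), so that both proximity and color similarity decide what ends up in the same group.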

The paper also includes "fuzzy mode detection." I have to be honest with you: I do not know what the paper is talking about in this part. What is a "mode"? If I could answer that question, maybe I'd be able to follow this section of the paper better. Sorry guys.

Source:
Losson, O., Botte-Lecocq, C., & Macaire, L. (2008). Fuzzy mode enhancement and detection for color image segmentation. Journal on Image and Video Processing - Color in Image and Video Processing, 1-19.
http://dl.acm.org/citation.cfm?id=1362851.1453693&coll=DL&dl=ACM&CFID=96602743&CFTOKEN=40588068

Thursday, April 5, 2012

Artificial High-Resolution

Recently in my digital photography class I learned about HDR pictures, specifically how to combine pictures with different exposure values to create a picture similar to what your eye sees. I admit, I never thought the same ideas would apply in a research paper I read for my Computer Science senior design class. Instead of combining pictures to get all the detail from multiple exposure values, the paper discusses getting a high-resolution picture out of a collection of low-resolution pictures. The idea is pretty much the same, though. Since each picture will have different pixels with good information, you take the good information from each picture and add it to the final product. This seems like a fairly intuitive approach to the topic. However, the technique is not without its obstacles. You see, in photography you don't always get the exact same picture when you press the button a second time. Something in the scene might change. For example, your angle to the object might be ever so slightly different. The background might change or move, especially if there are creatures in it. A direct merge of these pictures would result in a very messy final picture. Not exactly the "super resolution" you're looking for.

The paper is actually about addressing this obstacle. The authors approach the problem understanding that there may be subtle differences between the pictures, and bring in the idea of error. They create a curve based upon all the pictures and then assign error weights based upon how far the value of a pixel is from the curve. The farther away the pixel value is, the smaller its weight. The assigned weights are based upon an outlier threshold determined when the curve is created. These weights allow the final picture to partially ignore, or even exclude, irrelevant information. The result of this method is a crisp picture that excludes extra data, including extra objects that may appear in only one of the contributing pictures.
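Here's roughly how I imagine the weighting part working, in a very simplified form. This is not the paper's method: instead of their fitted curve I just use the per-pixel median across the aligned frames as the "expected" value, the outlier threshold is a number I made up, and this sketch only shows the robust down-weighting of outliers, not the actual step of reconstructing a higher-resolution grid.

```python
import numpy as np

def robust_merge(frames, outlier_thresh=30.0):
    """Merge already-aligned frames of the same scene, down-weighting outliers.
    frames: list of HxWx3 uint8 arrays (already registered).
    The per-pixel median stands in for the paper's fitted curve, and
    outlier_thresh is a placeholder value, not one taken from the paper."""
    stack = np.stack([f.astype(np.float64) for f in frames])  # (N, H, W, 3)
    reference = np.median(stack, axis=0)                      # per-pixel "expected" value
    # Per-frame, per-pixel error: color distance from the reference value
    error = np.linalg.norm(stack - reference, axis=-1, keepdims=True)
    # Weights fall off with error, so pixels far past the threshold
    # (a photobombing object in one frame, say) contribute almost nothing
    weights = np.exp(-(error / outlier_thresh) ** 2)
    merged = (weights * stack).sum(axis=0) / weights.sum(axis=0)
    return merged.astype(np.uint8)
```

The appeal of weighting over a hard cutoff, as far as I can tell, is that a pixel that's only a little off still contributes a little, while a wildly different pixel is effectively excluded from the final picture.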

Source:

El-Yamany, N. A., & Papamichalis, P. E. (2008). Robust Color Image Superresolution. Journal on Image and Video Processing - Color in Image and Video Processing, 1-12.