Sunday, October 11, 2009

Activity 19: Restoration of blurred image

I applied motion blur to the image using the algorithm given in the procedure, then implemented the restoration algorithm. Here are the results:


Original


Blurred / Restored

The restoration was able to remove the "trace" effect of the blur, although some additive artifacts remain. The method is not in vain, however, since it was still effective enough to make the text readable again.
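The procedure's exact algorithm isn't reproduced here, but the idea can be sketched as: degrade the image in the frequency domain with a linear-motion transfer function, then undo it with a Wiener-type filter. The sketch below (Python/NumPy, not the code I actually ran) uses placeholder values for the motion parameters a, b, T and the Wiener constant K.

```python
import numpy as np

def motion_blur_and_restore(img, a=0.02, b=0.02, T=1.0, K=0.01):
    """Blur a grayscale image with a linear-motion transfer function,
    then restore it with a Wiener-type filter.
    Parameter values are placeholders, not the handout's values."""
    M, N = img.shape
    u = np.arange(M)[:, None] - M / 2
    v = np.arange(N)[None, :] - N / 2
    s = np.pi * (u * a + v * b)
    s = np.where(s == 0, 1e-12, s)                 # avoid division by zero
    H = (T / s) * np.sin(s) * np.exp(-1j * s)      # motion-blur transfer function

    F = np.fft.fftshift(np.fft.fft2(img))
    G = H * F                                      # degraded spectrum
    blurred = np.real(np.fft.ifft2(np.fft.ifftshift(G)))

    # Wiener-type filter: attenuates frequencies where H is small
    W = np.conj(H) / (np.abs(H) ** 2 + K)
    restored = np.real(np.fft.ifft2(np.fft.ifftshift(W * G)))
    return blurred, restored
```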

I give myself a 10/10 because I was able to implement the algorithms successfully.

Activity 18: Noise model and basic image restoration

In this activity we explore the different types of image noise and try some techniques for cleaning them. I applied 5 types of noise: impulse (salt and pepper), uniform, exponential, gamma and Gaussian. I then implemented 4 noise-removal filters: the harmonic, geometric, contraharmonic and arithmetic means. A sketch of the noise generation is shown below, followed by the results:
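The values below are placeholders rather than the parameters I actually used; the sketch (Python/NumPy) is only meant to show the form of each noise model.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_noise(img, kind):
    """Return a noisy copy of img (float array scaled to [0, 1]).
    Noise parameters are illustrative placeholders."""
    if kind == "saltpepper":                       # impulse noise
        out = img.copy()
        coin = rng.random(img.shape)
        out[coin < 0.05] = 0.0                     # pepper
        out[coin > 0.95] = 1.0                     # salt
        return out
    if kind == "uniform":
        return img + rng.uniform(-0.1, 0.1, img.shape)
    if kind == "exponential":
        return img + rng.exponential(0.05, img.shape)
    if kind == "gamma":
        return img + rng.gamma(2.0, 0.05, img.shape)
    if kind == "gaussian":
        return img + rng.normal(0.0, 0.05, img.shape)
    raise ValueError(kind)
```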



Original image

From top to bottom:
salt and pepper / uniform / exponential / gamma / Gaussian
From left to right:
arithmetic / contraharmonic / geometric / harmonic / noise only

The images get darker/brighter because I normalized them after adding the noise. For most types of noise, the arithmetic mean filter yields the best restoration. Also, the other methods, which rely on multiplication, fail at places where the noise peaks.
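For reference, all four filters are local means taken over a small window around each pixel; only the kind of mean changes. The sketch below (Python/NumPy) uses a placeholder window size and contraharmonic order Q.

```python
import numpy as np

def mean_filters(img, size=3, Q=1.5):
    """Apply the four restoration filters over a size x size window.
    Assumes a nonnegative image; size and Q are placeholder values."""
    pad = size // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    eps = 1e-12                                    # keeps logs and divisions finite
    arith = np.zeros(img.shape)
    geo, harm, contra = arith.copy(), arith.copy(), arith.copy()
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            w = padded[i:i + size, j:j + size] + eps
            arith[i, j] = w.mean()
            geo[i, j] = np.exp(np.log(w).mean())          # (product of w)^(1/n)
            harm[i, j] = w.size / (1.0 / w).sum()
            contra[i, j] = (w ** (Q + 1)).sum() / (w ** Q).sum()
    return arith, geo, harm, contra
```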

I give myself 8/10 because I wasn't able to implement the Rayleigh noise.

Activity 17: Photometric stereo

In this activity we reconstructed a 3D surface from images of an object taken from a fixed viewpoint under multiple light-source directions. The source images were already given to us, so all we had to do was write the code to implement the equations. I came up with the following shape:
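The computation itself is short: stack the intensity images, solve for the surface normals given the light-source directions, and integrate the resulting gradients. A rough Python/NumPy sketch (not the code I ran; the variable names and the naive cumulative-sum integration are my own choices):

```python
import numpy as np

def photometric_stereo(I, V):
    """I: (num_images, num_pixels) intensities from a fixed viewpoint,
    V: (num_images, 3) light-source directions (assumed given).
    Returns an elevation map z on a square image grid."""
    # Solve I = V g for g at every pixel by least squares
    g, *_ = np.linalg.lstsq(V, I, rcond=None)        # g: (3, num_pixels)
    n = g / (np.linalg.norm(g, axis=0) + 1e-12)      # unit surface normals

    # Surface gradients from the normal components
    dzdx = -n[0] / (n[2] + 1e-12)
    dzdy = -n[1] / (n[2] + 1e-12)

    # Reshape to the (assumed square) image grid and integrate naively
    side = int(np.sqrt(I.shape[1]))
    dzdx = dzdx.reshape(side, side)
    dzdy = dzdy.reshape(side, side)
    z = np.cumsum(dzdx, axis=1) + np.cumsum(dzdy, axis=0)
    return z
```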

I give myself a 10/10 because I successfully constructed the 3d object.

Activity 15: Probabilistic Classification

This activity is an extension of Activity 14. It uses linear discriminant analysis (LDA) instead of just taking the Euclidean distance in feature space. I used the same photos and features as in Activity 14, but I only made use of two classes: leaf 1 and leaf 2. I simply followed the LDA procedure outlined in the link attached to the protocol, and my program was able to distinguish between the two leaves for all 4 test objects.
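The linked procedure isn't reproduced here, but the usual two-class linear discriminant with a pooled covariance can be sketched as follows (Python/NumPy, variable names mine). Each test object is assigned to the class with the larger discriminant value, which is how the f1 and f2 values below are compared.

```python
import numpy as np

def lda_train(X1, X2):
    """X1, X2: (n_samples, n_features) training features for the two classes.
    Returns the class means, the inverse pooled covariance, and the priors."""
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    n1, n2 = len(X1), len(X2)
    # Pooled within-class covariance
    C = (np.cov(X1, rowvar=False) * (n1 - 1) +
         np.cov(X2, rowvar=False) * (n2 - 1)) / (n1 + n2 - 2)
    return (m1, m2), np.linalg.inv(C), (n1 / (n1 + n2), n2 / (n1 + n2))

def lda_discriminants(x, means, Cinv, priors):
    """Linear discriminant value f_i of test vector x for each class;
    the class with the larger value wins."""
    return [m @ Cinv @ x - 0.5 * (m @ Cinv @ m) + np.log(p)
            for m, p in zip(means, priors)]
```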



Classes

Test object        f1            f2
1               -1.3203592    -2.9090572
2               12.37927      10.544618
3               36.708249     38.737733
4               27.447476     29.893865

Results: the first two test objects were classified as leaf 1 (f1 > f2) and the last two as leaf 2 (f2 > f1).

I give myself a 10/10 since I was able to implement the outlined procedure and the features I chose proved to be valid for the given set of objects.

Activity 14: Pattern Recognition

In this activity we attempted automated classification. We first extracted the mean features of each class. We then classified test objects by finding the class whose mean is nearest the object in feature space. I used 4 classes:



Classes
leaf 1/leaf 2/25 centavos/flower

I used 4 features: mean red, mean green, mean blue and shape. To extract a single value for shape, I used the follow function to trace the perimeter of the object, then took the ratio of the perimeter's square to the area. This proved to be a valid measurement, as it allowed the two leaf classes to be distinguished. I used 5 objects per class to get the mean values and 2 objects per class for testing. My program was able to classify each test object (two tests per class) correctly.
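A rough Python sketch of the feature extraction and the nearest-mean classification (not the code I ran; the erosion-based perimeter estimate stands in for the follow contour trace, and the helper names are mine):

```python
import numpy as np
from scipy import ndimage

def features(rgb, mask):
    """Mean R, G, B over the segmented object plus a single shape value:
    perimeter squared over area. The erosion-based perimeter estimate
    here stands in for a contour-following trace."""
    r = rgb[..., 0][mask].mean()
    g = rgb[..., 1][mask].mean()
    b = rgb[..., 2][mask].mean()
    area = mask.sum()
    perimeter = mask.sum() - ndimage.binary_erosion(mask).sum()
    return np.array([r, g, b, perimeter ** 2 / area])

def classify(x, class_means):
    """Assign feature vector x to the class whose mean is nearest (Euclidean)."""
    names = list(class_means)
    dists = [np.linalg.norm(x - class_means[name]) for name in names]
    return names[int(np.argmin(dists))]
```

Here class_means would hold the mean feature vector of each of the 4 classes, computed from the 5 training objects per class.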

I give myself a 10/10 since I was able to create a program that could classify objects based on both color and shape.