Tag Archives: FYP

Finally, the long awaited Final Year Project is Done! :D

The FYP, or the Final Year Project as they all call it at APIIT, is finally done for our batch! 8 months of torture finally came to an end! 😀 So I thought of leaving a small note on my Final Year Project.

The title of my Final Year Project was “Augmented Reality based Product Identification and Advertising System” – a system where users can identify products or advertisements using their mobile phones in real time.

Anyhow, after going through weeks of sleepless nights, endless experimentation and self-learning, I was able to develop and implement it in the middle of a huge number of technological difficulties. I used the SURF feature extraction algorithm as the base image matching algorithm for this system. From what I have been told, this is the first time anyone in our university has used a Computer Vision based algorithm for image matching – supposedly because of the impossibility of it, or maybe because people actually haven't looked into it properly.

Anyhow, I may give a detailed explanation of my FYP later, but for the moment, as a note: I was able to successfully complete my FYP, even though some of the objectives of the system could not be achieved – well, not completely, let's say – due to the technological difficulties, or more so due to the time constraints. Now you must be wondering why I am blaming the time constraints. Well, you would never understand that unless you are an APIITian, because we all know the tough deadlines and submissions we get at APIIT, where even for the Final Year Project, no matter what, you need to submit on the given date. Yes, we do deal with extremely stressful, packed deadline submissions. Let me give you a heads up: final assignment submissions, final exams, and after that, within less than a month, the Final Year Project submission.

Now you may say, dude, less than a month is more than enough. Listen, this is the FYP we are talking about, which is not just another CRUD application or some lame inventory control system. We are supposed to research, self-learn, experiment and innovate something under the topic we have decided on – that is the whole point of it. In my case, I was never into Image Processing; to be honest, I hadn't even had any idea about Image Processing until I started researching for my FYP. And then imagine using all that research and developing something completely new out of all those theories that you self-learn! Yeah, enough of me boasting! lol

We are supposed to produce complete, extensive documentation following the proper standards of APIIT – mine was 217 pages and 37,607 words! Yeah, more like I wrote my own personal Bible or something, lol! Now you can imagine – think of typing that amount of words straight out of one's head! Yeah, that is what I'm talking about. lol. Well, not to mention the fact that when I get enthusiastic and passionate about something, I could write a whole book about it. 😛

So the next step is the Final Presentation, which is due on the 20th! It scares me when I even think of it, but I'm hoping for the best, with determination!

Woah ! What a feeling of relief ! 😀

Final Year Project Log – Image Matching Experiment using SURF Feature Extraction

This post is regarding the Final Year Project I'm working on for my degree in Software Engineering at APIIT: an Augmented Reality based Product Identification and Advertising System.

Through this post I'm keeping a log book of the experiments I'm running for my image matching algorithm, which is based on the SURF feature extraction and matching algorithm.
This post is not an educational or knowledge source – just a post that I'm keeping online to mark the progress of my experiment.

Language – C#
API – EmguCV (C# Wrapper for OpenCV)

Log 1 – Successfully installed and configured EmguCV on the laptop running Windows 8 Pro, and imported all the required DLL files into the project solution in Visual Studio 2012.

Log 2 – Implemented simple image matching code and was able to find the matching points based on the extracted feature points of a given Source Image and Model Image.

Model Image – the image that you want to look for in other image sceneries.
Source Image – the actual scenery in which to look for the given Model Image.

Log 3 – Started SURF Experiment project solution and implemented the following functionality,

– Load the given set of images into memory at run time, and extract and save their feature points in memory. This was done in order to make the execution – that is, the matching process – faster.

– Look up the brand logo of a given product image by extracting the feature points of the given image and matching them against the feature points of the existing brand logos.
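The run-time caching and lookup described above could be sketched roughly like this. This is only an illustrative sketch with made-up names (LogoMatcher, Preload, FindBestBrand are not the actual FYP code), and the SURF matching itself (done with EmguCV in the real project) is abstracted behind a delegate so the flow is visible without the library:

```csharp
using System;
using System.Collections.Generic;

// Illustrative sketch of the start-up cache plus the brand-logo lookup loop.
// All names here are my own, not the FYP code.
class LogoMatcher
{
    // brand name -> descriptors precomputed once at start-up
    private readonly Dictionary<string, float[]> logoDescriptors =
        new Dictionary<string, float[]>();

    // Called once per logo image when the library is loaded into memory,
    // so matching never re-extracts features from the logos.
    public void Preload(string brand, float[] descriptors)
    {
        logoDescriptors[brand] = descriptors;
    }

    // Matches the query image's descriptors against every cached logo and
    // returns the brand with the highest score given by matchScore.
    public string FindBestBrand(float[] queryDescriptors,
                                Func<float[], float[], int> matchScore)
    {
        string best = null;
        int bestScore = -1;
        foreach (KeyValuePair<string, float[]> entry in logoDescriptors)
        {
            int score = matchScore(entry.Value, queryDescriptors);
            if (score > bestScore)
            {
                bestScore = score;
                best = entry.Key;
            }
        }
        return best;
    }
}
```

In the real system the delegate would be the SURF descriptor matching; here any scoring function can be plugged in.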

Log 4 – Okay, now the process does not seem to work for images that come from other sources, rather than a model image captured from the actual scenery image.

The Model Image – the brand logo image in this case – has to be an image captured from the same exact product image that we feed into the system. The matching process does not work for custom logo images from other sources. Let's say we download a logo image from the internet, add it to the logo image library and load it into the program; when I give the actual product image in order to find the matching brand logo, it would not give an accurate result and catch the above logo as the result.

Concluded reason, based on knowledge – I guess this is because of the essence of the SURF algorithm: frankly speaking, it is capable of detecting a given object in a scenery where the object is actually included in that scenery. Applying that to this scenario, the given logo is from another source, and its feature points, colors and pixels are different from those of the product image we are trying to identify; therefore the algorithm will not detect it as the result logo.

This could be a constraint, but it would not matter if I implement an option to let the admin user capture the exact logo from the given product image and store that logo in the Database ! 😀

Log 5 – The important attributes for extracting the matching results from the algorithm in code,

Matrix<int> indices – still not 100% sure, but as far as I can tell it stores, for each observed feature, the indices of its k nearest model features returned by KnnMatch, rather than X,Y points.
Matrix<byte> mask – stores the results of filtering the matched features and the votes for the points given by the algorithm. Counting its non-zero entries gives exactly the number of matched feature points.

The actual voting for matched feature points happens in here –

using (Matrix<float> dist = new Matrix<float>(observedDescriptors.Rows, k))
{
    matcher.KnnMatch(observedDescriptors, indices, dist, k, null);
    mask = new Matrix<byte>(dist.Rows, 1);
    Features2DToolbox.VoteForUniqueness(dist, uniquenessThreshold, mask);
}

And I have discovered that using those values – especially the “mask” matrix, which actually stores the voting results of the feature point matching – I can determine something like the percentage of matching between two images. Then I can take the highest matching point count and get the best match for identifying the product logo.
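The vote-counting score could look roughly like this; a minimal sketch with my own helper names, assuming the mask column has already been copied out of the EmguCV Matrix<byte> into a plain byte array:

```csharp
using System;

// Sketch of the "vote counting" score described above (helper names are mine,
// not the FYP code). VoteForUniqueness fills a byte mask with one entry per
// observed feature; a non-zero entry means that match survived the uniqueness
// test, so counting non-zero entries gives a comparable score per logo.
static class MatchScore
{
    public static int CountVotes(byte[] maskColumn)
    {
        int votes = 0;
        foreach (byte b in maskColumn)
            if (b != 0) votes++;
        return votes;
    }

    // Index of the logo whose mask has the highest vote count.
    public static int BestMatchIndex(byte[][] masksPerLogo)
    {
        int best = 0;
        for (int i = 1; i < masksPerLogo.Length; i++)
            if (CountVotes(masksPerLogo[i]) > CountVotes(masksPerLogo[best]))
                best = i;
        return best;
    }
}
```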

But there is a drawback: when I actually executed the process, the SURF detector finds several matching feature points between a given product logo image and a product image, and sometimes it randomly detects more than the actual supposed result, which leads to incorrect results. Therefore I need to look for a better approach.

Looking for a Better Approach… Rather than counting the number of matching points…

Log 6 – I just figured out that in order to solve this I need to consider some factors regarding the area of the detected results. When the algorithm looks for the product logo in the given product image, it pinpoints a specific area of the logo in the product image. If I could calculate that area and use it to determine the match – BOOM! It's done!

Here are some of the factors I found and their samples,

Test Samples –

This is how the algorithm actually detects the logo in the given product Image,

As you can see in the above image, on the right side the UI displays the logo image, and on the left side it displays the given product image. The dots represent the detected feature points of both images, and the lines in between map the matching feature points of the two images – in other words, they show where the logo is detected in the given product image.

Pretty cool eh ! 😀

Then, when I started digging deeper, I noticed that those mapped lines actually form some kind of a polygon around the detected logo area in the product image. So I edited the code to draw a line around it.

Then, for a clearer view, I painted the formed polygon – just for the sake of viewing ease.
Looks perfect eh ! 😉 Now this should lead me to something useful while determining the exact accurate result.

Chasing the Polygon………..

Log 7 – I noticed that the polygon formed on incorrect results, or on those results where the logo cannot be detected, takes a malformed shape, such as below,


And for some of the matchings it does not even draw the polygon, as in this example,

Now this seems to provide a great opportunity: if I could determine whether it is a properly formed polygon – or whether it is actually a polygon at all in the given matching result – then I could easily figure out the perfect, accurate match.
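One possible "well-formedness" test (my suggestion only, not necessarily what the FYP ended up using): a cleanly projected logo region should be a convex quadrilateral, while the malformed results twist into concave or self-crossing shapes. Convexity can be checked by requiring the cross products of all consecutive edge pairs to share a sign:

```csharp
using System;

// Sketch of a convexity check for the projected polygon. The Pt struct just
// stands in for the PointF values used elsewhere; all names are mine.
static class PolygonCheck
{
    public struct Pt
    {
        public double X, Y;
        public Pt(double x, double y) { X = x; Y = y; }
    }

    public static bool IsConvex(Pt[] pts)
    {
        bool gotPositive = false, gotNegative = false;
        int n = pts.Length;
        for (int i = 0; i < n; i++)
        {
            Pt a = pts[i], b = pts[(i + 1) % n], c = pts[(i + 2) % n];
            // z-component of the cross product of edges (a->b) and (b->c)
            double cross = (b.X - a.X) * (c.Y - b.Y) - (b.Y - a.Y) * (c.X - b.X);
            if (cross > 0) gotPositive = true;
            if (cross < 0) gotNegative = true;
        }
        // Mixed signs mean the polygon bends both ways: concave or twisted.
        return !(gotPositive && gotNegative);
    }
}
```

A twisted "bowtie" quadrilateral, like the malformed shapes above, fails this test while a proper rectangle passes.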

This is where the Polygon is initiated from the Homography matrix,

#region draw the projected region on the image
if (homography != null)
{  //draw a rectangle along the projected model
   Rectangle rect = modelImage.ROI;
   PointF[] pts = new PointF[] {
      new PointF(rect.Left, rect.Bottom),
      new PointF(rect.Right, rect.Bottom),
      new PointF(rect.Right, rect.Top),
      new PointF(rect.Left, rect.Top)};
   homography.ProjectPoints(pts);
   result.DrawPolyline(Array.ConvertAll<PointF, Point>(pts, Point.Round),
      true, new Bgr(Color.Red), 5);
}
#endregion

Log 8 – At this point I am calculating the values below in order to determine the perfect match for the product image's logo,

  • Number of Matching points count
  • Area of the Polygon
  • Number of Matching points within the Polygon Area

Based on these values for each and every matching cycle, I was able to determine the best matching result.
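For the record, here is how the second and third of those values could be computed for one matching cycle; a self-contained sketch with my own helper names, not the FYP code itself. The polygon is the region projected by the homography, and the match points are the matched feature locations in the product image:

```csharp
using System;

// Sketch of two of the per-cycle factors listed above (names are mine).
static class MatchFactors
{
    public struct Pt
    {
        public double X, Y;
        public Pt(double x, double y) { X = x; Y = y; }
    }

    // Shoelace formula: area of the projected polygon.
    public static double PolygonArea(Pt[] poly)
    {
        double sum = 0;
        for (int i = 0; i < poly.Length; i++)
        {
            Pt a = poly[i], b = poly[(i + 1) % poly.Length];
            sum += a.X * b.Y - b.X * a.Y;
        }
        return Math.Abs(sum) / 2.0;
    }

    // Ray-casting point-in-polygon test.
    public static bool Contains(Pt[] poly, Pt p)
    {
        bool inside = false;
        for (int i = 0, j = poly.Length - 1; i < poly.Length; j = i++)
        {
            if ((poly[i].Y > p.Y) != (poly[j].Y > p.Y) &&
                p.X < (poly[j].X - poly[i].X) * (p.Y - poly[i].Y) /
                      (poly[j].Y - poly[i].Y) + poly[i].X)
                inside = !inside;
        }
        return inside;
    }

    // Third factor: how many matched points fall inside the polygon.
    public static int MatchesInsidePolygon(Pt[] poly, Pt[] matchPoints)
    {
        int count = 0;
        foreach (Pt p in matchPoints)
            if (Contains(poly, p)) count++;
        return count;
    }
}
```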

Log 9 – Right now I'm going through a crisis: there is no way to determine a non-match when a user inserts a product image whose product logo is not available in the database. That is, let's say the user inserts a product image whose logo isn't in the database – right now there is no way to determine whether a match can be found or not.

This is because, even though the number of matching points goes down, some other factor such as the area of the polygon may go up – or, if the area of the polygon goes down, the number of matching points could increase. This situation changes unpredictably from one image to another, and of course I tried my best to track down a pattern, but that was also unsuccessful, as it differs from one image to another! 😦

 Suggested solutions at this point –

Get rid of the process of identifying the brand logo of a given product image and directly move on to visual product identification – though this could take a long time to give a result.

As you can see, the reason why I came up with this “brand logo recognition process” was to increase the speed of product identification. There is a huge set of product images in the database, related to each and every brand type. So rather than searching through all the product images, what if I identify the brand type first and then search through only the products that fall under that brand type? That would definitely increase the speed of matching and reduce unwanted time spent on matching.
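The two-stage idea can be sketched as a simple brand-to-products index (illustrative names only): identifying the brand first means the expensive SURF matching only runs over one brand's bucket of products instead of the whole database.

```csharp
using System;
using System.Collections.Generic;

// Rough sketch of the two-stage lookup described above (my own naming).
static class TwoStageLookup
{
    // Returns only the products filed under the recognised brand; these are
    // the only images that still need full SURF matching.
    public static List<string> CandidateProducts(
        Dictionary<string, List<string>> productsByBrand, string brand)
    {
        List<string> bucket;
        return productsByBrand.TryGetValue(brand, out bucket)
            ? bucket
            : new List<string>();
    }
}
```

With N brands of roughly M products each, the product-level matching drops from about N×M images to about M.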

Right at this point I'm pretty confused about what I should do, but I may have to go with the above solution! :\ Must contact the supervisor and assessor immediately.

Log 10 – I contacted the assessor lecturer of my FYP and presented him with this issue. He came up with an amazing idea: taking the percentage of detected feature points between the Model (logo) image and the Scenery (product) image. This was a little confusing for me at first, but then it clicked! 😀

Following that lead, I took the percentage between the logo and the given product image at every cycle of the matching and kept that value for later use, in order to determine which gives the best percentage.

 percentage = (( (double)result.modelDescriptors.Rows) / (double)result.observedDescriptors.Rows) * 100;

In the above code I calculate the percentage and store it with the index number of the corresponding logo and product.

Then, at the end, the matching percentage has to be over 20%, the polygon area should not be 0, and the number of matches inside the polygon area has to be over 0; otherwise the image is flagged as unidentifiable.

if (currentLargestPercentage <= 20)
{
    if (currentlargestArea == 0 && currentlargestMatchesInPolygonArea == 0)
        MessageBox.Show("Sorry ! This can not be Identified ! :( ");
}
else if (currentlargestMatchesCount > 10)
{
    // proceed with the best matching result
}


Log 11 – Using the above method I was able to successfully flag the product images that cannot be identified with the logo images in the database. But the problem is that sometimes it doesn't identify products which can even be identified with a product logo in the database 😦 ! I tried changing the matching percentage, but nothing seems to work: when I change it, the product images which can be identified don't get identified, and the ones that cannot be identified come up with wrong matches.

So now I’m in need of a new approach for this, something similar to the same method.

Log 12 – I had been using the “descriptors” of the model and observed images, which hold the computed values and point locations of the given image, because of their convenience – they appeared to hold a significant set of values. But later I found out they are not of much use in taking the percentage, so I thought of switching to the “vector of feature points” of both the logo and the product image, where those vectors hold the detected SURF feature points. Therefore, accordingly,

 percentage = (((double)result.modelKeypoints.Size) / ((double)result.observedKeypoints.Size)) * 100;

So now this seems like a perfect match, taking the ratio between the keypoints of the model image and the observed image. And then I did some tuning of the matching identification condition.

if ((currentlargestArea == 0 && 
currentlargestMatchesInPolygonArea == 0) ||
currentLargestPercentage <= 5)
         MessageBox.Show("Sorry ! This can not be Identified ! :( ");

Now this appears to give fair results. Even though it is not 100% accurate, given the time constraints I should not waste any more time on the matching process.

Screw this torture – I finished developing and successfully implemented it! PS – based on the above experiments and results! 😉
Hope it may help any of you !
Cheers ! 😀