Link to the paper (pdf)
Authors & Group:
Tomasz Malisiewicz, Abhinav Gupta, Alexei A. Efros from CMU
Intuition:
Building classifiers for various tasks has been extensively researched by the computer vision and machine learning communities. There are various parametric methods like Support Vector Machines, Neural Networks, and Linear/Quadratic Discriminants, and various non-parametric methods like Voronoi diagrams, K-nearest neighbors, etc. Each has its own advantages and disadvantages. One fundamental problem with training a single classifier on a lot of training data is whether the training data correctly parameterizes the model and whether the classifier generalizes well enough to cover the possible variations. This paper tries to solve both problems by using just one training image per classifier. So, the authors build an ensemble (collection) of classifiers, each trained with one positive instance, i.e. the exemplar (example), and millions of negative images. The key motivation is that this method represents the positives in a non-parametric way and the negatives in a semi-parametric or parametric way. In the first figure you see that they have a linear SVM classifier for each positive example instead of trying to come up with one classifier for the whole set of positive examples.
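To make the setup concrete, here is a minimal sketch of training such an ensemble with scikit-learn's LinearSVC. It assumes features (e.g. HOG descriptors) have already been extracted, and the class weights are illustrative choices, not the paper's exact training objective:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_exemplar_svms(positives, negatives):
    """Train one linear SVM per positive exemplar.

    positives: (P, D) array, one feature vector (e.g. HOG) per exemplar.
    negatives: (N, D) array, a large shared pool of negative features.
    Returns a list of fitted LinearSVC models, one per exemplar.
    """
    y = np.array([1] + [0] * len(negatives))  # one positive vs. all negatives
    ensemble = []
    for x_pos in positives:
        X = np.vstack([x_pos[None, :], negatives])
        # Weight the lone positive heavily so it is not swamped by the
        # negative pool; these weights are illustrative, not the paper's.
        clf = LinearSVC(C=1.0, class_weight={1: 50.0, 0: 1.0})
        clf.fit(X, y)
        ensemble.append(clf)
    return ensemble
```

Each classifier in the returned ensemble is, in effect, a learned decision boundary tuned to its single exemplar, which is what keeps the positives non-parametric while the shared negative pool is handled parametrically.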
Figures
Previous work
The previous work is extensive because the authors apply this idea to various problems like meta-data transfer, segmentation, geometry estimation, 3D model transfer, and Related Object Priming. But previous techniques trained classifiers with some positive data and a lot of negative data, and the downfall is the same pair of questions: does the trained classifier accurately capture the model, and does it generalize well enough to detect variations of the model?
Gory details
The math is non-existent or very simple, but one important detail is that these ensemble classifiers produce scores on different scales, so comparing them directly is like comparing apples and oranges. So there is an additional calibration step where they modify the distance function of each SVM. According to the authors, a simple way to interpret the calibration step is to think of it as a scaling and shifting of the decision boundary. See the figure below:
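The scaling-and-shifting interpretation maps naturally onto a Platt-style sigmoid fit on held-out detections; the sketch below follows that interpretation, though the paper's exact fitting procedure may differ in detail:

```python
import numpy as np
from scipy.optimize import minimize

def fit_sigmoid(scores, labels):
    """Fit p(s) = 1 / (1 + exp(-(a*s + b))) to one exemplar SVM's raw
    scores (labels: 1 for correct detections, 0 for mistakes), so that
    outputs of different exemplar SVMs become comparable."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)

    def nll(params):
        a, b = params
        p = 1.0 / (1.0 + np.exp(-(a * scores + b)))
        eps = 1e-9  # guard against log(0)
        return -np.sum(labels * np.log(p + eps)
                       + (1 - labels) * np.log(1 - p + eps))

    res = minimize(nll, x0=np.array([1.0, 0.0]))
    return res.x  # a rescales the margin, b shifts the decision boundary
```

After fitting (a, b) separately for each exemplar, every raw score s maps to 1 / (1 + exp(-(a*s + b))), so scores from different classifiers live on the same [0, 1] scale and can be compared directly.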
Final Output:
Future work (according to the authors)
There is no specific future plan given by the authors, but this contribution opens the door to many exciting applications in object recognition, scene understanding, and computer graphics.
Future ideas by me (if I can think of any)
I had two ideas that I wanted to explore:
1) The idea is to start with an exemplar SVM, incrementally add training data to this model, and retrain it periodically. This way you cautiously add only examples that satisfy the underlying parametric model (see the sketch after this list).
2) The other idea was to build relations between these exemplars and make a hierarchical classifier, which would make the problem of scene understanding more semantic and might even improve accuracy over existing methods.
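For idea (1), here is a hypothetical sketch of the retraining loop, assuming pre-extracted feature arrays; the round count, confidence threshold, and class weights are arbitrary placeholders:

```python
import numpy as np
from sklearn.svm import LinearSVC

def grow_exemplar(x_pos, negatives, unlabeled, rounds=3, thresh=1.0):
    """Start from a single exemplar and cautiously absorb unlabeled
    examples the current model scores highly, retraining each round."""
    pos = [x_pos]
    clf = None
    for _ in range(rounds):
        X = np.vstack([np.array(pos), negatives])
        y = np.array([1] * len(pos) + [0] * len(negatives))
        clf = LinearSVC(class_weight={1: 50.0, 0: 1.0}).fit(X, y)
        if len(unlabeled) == 0:
            break
        scores = clf.decision_function(unlabeled)
        confident = scores > thresh  # absorb only very confident matches
        pos.extend(unlabeled[confident])
        unlabeled = unlabeled[~confident]
    return clf
```

The high threshold is what makes the growth "cautious": only examples the current model already places well past the margin are promoted to positives before the next retraining round.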