Boosting vs SVM

It is well known that support vector machines are among the best general-purpose classification algorithms in use today. Their strength lies in the way a separating hyperplane is generated to “best separate” two classes of data, i.e. by fitting a hyperplane that maximizes the distance between the plane and the elements of each class on either side of it.
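To make the maximum-margin idea concrete, here is a minimal sketch in Python, assuming NumPy and scikit-learn are available; the two toy blobs and every parameter choice below are my own illustrative assumptions.

import numpy as np
from sklearn.svm import SVC

# Two well-separated 2-D blobs standing in for the two classes (assumed data).
rng = np.random.default_rng(0)
class_a = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(50, 2))
class_b = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(50, 2))
X = np.vstack([class_a, class_b])
y = np.array([0] * 50 + [1] * 50)

# A large C approximates a hard-margin linear SVM: the hyperplane w.x + b = 0
# is chosen so that the margin 2/||w|| is as wide as possible.
clf = SVC(kernel="linear", C=1e3).fit(X, y)
w = clf.coef_[0]
print("margin width:", 2.0 / np.linalg.norm(w))
print("number of support vectors:", len(clf.support_vectors_))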

In cases where the data is not linearly separable, SVMs rely on projecting the data into a higher-dimensional space where the classes might become separable. Depending on the situation, this formalism may or may not work well. Another method, boosting, has been shown to perform well in a variety of scenarios and is worth considering.
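As a rough illustration of that projection idea, the sketch below (again assuming scikit-learn; the concentric-circles dataset and kernel parameters are my own choices) compares a linear SVM with an RBF-kernel SVM on data that is not linearly separable in 2-D.

from sklearn.datasets import make_circles
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# One class inside, one class in a surrounding ring: not linearly separable.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.08, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

linear = SVC(kernel="linear").fit(X_train, y_train)
rbf = SVC(kernel="rbf", gamma=2.0).fit(X_train, y_train)  # implicit lift to a higher-dimensional space

print("linear SVM accuracy:", linear.score(X_test, y_test))    # roughly chance level
print("RBF-kernel SVM accuracy:", rbf.score(X_test, y_test))   # close to perfect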

Boosting builds a strong classifier from an ensemble of weak classifiers (Freund & Schapire, 1995; Friedman, Hastie & Tibshirani, 1998). Consider the toy problem of building a classifier where one class has a circular distribution and the other is distributed in a ring concentric with the first. The idea is to start with a simple linear classifier that does better than chance. Next follows an iterative procedure in which the examples misclassified at the previous stage are “boosted” in weight, and a new hyperplane is fitted to best separate the reweighted dataset. The result is a sequence of hyperplanes fitted to the data such that their weighted aggregate separates the classes well.
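Here is a hedged sketch of boosting on that concentric toy problem, using scikit-learn’s AdaBoost with depth-1 decision stumps as the weak learners (a stump is an axis-aligned split rather than a general hyperplane; the dataset and parameters are my assumptions, not anything from a reference).

from sklearn.datasets import make_circles
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

# Inner disc vs. surrounding ring, as in the toy problem above.
X, y = make_circles(n_samples=400, factor=0.3, noise=0.08, random_state=0)

# A single depth-1 tree (decision stump) is a weak classifier: one axis-aligned split.
stump = DecisionTreeClassifier(max_depth=1).fit(X, y)

# AdaBoost refits a stump on the reweighted data at every round and combines
# the stumps with weights that reflect how well each one did.
booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1), n_estimators=50).fit(X, y)

print("single stump training accuracy:", stump.score(X, y))        # weak, only somewhat better than chance
print("boosted ensemble training accuracy:", booster.score(X, y))  # noticeably higher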

Boosting has been used to great effect in the Viola-Jones face detection algorithm (coming soon…)

Face Recognition – the basics

One of the beginner topics in a computer vision class is face recognition. Given some assumptions, it is now a solved problem in the AI community – or at least one that has received enough attention that numerous methods work well at scale in a lot of scenarios.

As part of an effort to maintain and grow my knowledge base, I am in the process of spending some time reviewing course materials that I’d once wisely stored away for posterity. I’ve started again with CS231a – the Stanford University course on Computer Vision.

The first lecture discussed two popular methods for face recognition. The first is EigenFaces – a seminal method developed at MIT – http://www.face-rec.org/algorithms/PCA/jcn.pdf. The gist of it is to find the eigenvectors of the covariance of the vectorized training image patches, thus significantly reducing the dimensionality of the problem. Given a new image patch vector, find its projection in the lower-dimensional subspace and run an off-the-shelf classifier on it to tell which of the trained image patch classes (or faces) it is closest to. Nearest neighbor etc. should be sufficient. The eigenvector formulation ensures dimensionality reduction while preserving the dimensions of maximum variance, so that, in a sense, an image patch, say 90×90 (a vector of length 8100), is compressed to, say, 20 dimensions, such that those 20 dimensions capture most of the variance that exists in image patches of faces. “PCA projection is optimal for reconstruction from a low-dimensional basis but may not be optimal for discrimination”.
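Here is a rough sketch of that pipeline, assuming scikit-learn and NumPy; the 90×90 “face patches” below are synthetic stand-ins rather than real faces, so the prediction only demonstrates the mechanics.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n_train, h, w = 200, 90, 90                       # 90x90 patches -> 8100-d vectors
train_patches = rng.random((n_train, h * w))      # stand-in for vectorized face patches
train_labels = rng.integers(0, 10, size=n_train)  # 10 hypothetical identities

# EigenFaces in essence: the leading eigenvectors of the training-set covariance, i.e. PCA.
pca = PCA(n_components=20).fit(train_patches)     # 8100 dimensions -> 20
train_proj = pca.transform(train_patches)

# An off-the-shelf classifier in the reduced space; nearest neighbor suffices.
knn = KNeighborsClassifier(n_neighbors=1).fit(train_proj, train_labels)

new_patch = rng.random((1, h * w))                # a new vectorized 90x90 patch
print("predicted identity:", knn.predict(pca.transform(new_patch))[0])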

The second method discussed was FisherFaces, which improved face recognition significantly over EigenFaces: http://www.cs.columbia.edu/~belhumeur/journal/fisherface-pami97.pdf. The motivation for the method is that EigenFaces, while doing well at dimensionality reduction, does not preserve between-class separation, since PCA maximizes overall variance without regard to class labels. This is nicely illustrated in a diagram on page 4 of the paper, which shows through a toy example how projecting onto the principal component can cause a complete loss of separability between the classes.
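A hedged sketch of the FisherFaces recipe from the paper – PCA down to N − c dimensions so the within-class scatter stays non-singular, then Fisher’s linear discriminant – again with synthetic stand-in data, and scikit-learn assumed.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_train, n_classes, dim = 200, 10, 8100           # stand-in sizes
X = rng.random((n_train, dim))                    # vectorized training patches (synthetic)
y = rng.integers(0, n_classes, size=n_train)

fisherfaces = make_pipeline(
    PCA(n_components=n_train - n_classes),                    # N - c components, as in the paper
    LinearDiscriminantAnalysis(n_components=n_classes - 1),   # maximize between-class separation
    KNeighborsClassifier(n_neighbors=1),                      # classify in the discriminant space
)
fisherfaces.fit(X, y)
print("predicted identity:", fisherfaces.predict(rng.random((1, dim)))[0])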

A lot of paper-reading lies ahead…

Linux can be frustrating v1

In part 1 of an almost certainly recurring series of posts on the subject, I learned about the frustrations that can come from naive Linux operation.

For some background on the subject, I own, operate and cherish a laptop that is almost one-third my age. Most of it is still going strong, and with Ubuntu 10.04 it runs almost as fast as the better half’s Windows 7 laptop that’s a couple of years old – at least on common tasks such as web browsing, cat videos etc.

Currently, the only iffy part of the machine is its battery and charging situation. The original battery is intact and gives me a strong 15 minutes on a full charge. The charging point is temperamental so the cable needs to be held in just the right way for the thing to charge. It’s basically at a point where any expense to improve its lot might as well be used to fund a new machine.

I’d be loath to part with it, out of sentimental attachment as well as sheer bloody-mindedness. Each year it accumulates as a functioning machine makes me, in some strange way, proud of my perseverance, tolerance and developing hardware-maintenance skills.

Having said that, my resolve to persevere was greatly tested this past weekend. During some web browsing the OS froze, so I did a hard restart – though I must admit I was remiss in doing so without trying something else first, at the very least a soft restart or a kill -9.

On the next boot-up I was presented with the error discussed here: http://askubuntu.com/questions/25695/initramfs-error-during-boot

After the first few panicked gulps, a Google search revealed the above solution. However, my Linux live startup USB gave me this error: https://bugs.launchpad.net/ubuntu/+source/usb-creator/+bug/645818

Solving that with a simple <TAB> input – which brings up the list of available commands – and then keying in ‘live’ brought me to the next problem: Ubuntu would not boot beyond showing me its logo with a looping line of filled dots. After more soul-searching I reached the point of browsing around for new laptops on Amazon.

Then, the significant other was kind enough to hold my hand and ask me to persevere. Some banging-on-the-keyboard-ing later, a new error message showed up: http://ubuntuforums.org/showthread.php?t=1345125

After creating a new startup USB with a different Ubuntu distribution (12.04 LTS) using UNetbootin from Windows, things got back to normal and my beloved is alive again.

It helped me learn several things. The nature of Ubuntu ensures that such problems don’t go away easily – indeed, they are par for the course for anyone who decides to throw in their lot with it instead of Windows or Mac. But there are tons of other people with the same problems, and a virtual army of faceless and apparently tireless people have helped solve most, if not all, of them, for no pay and scant praise. There is hope for humanity yet.

I think I’m a better person after this ordeal.

Linux is awesome v1

In part 1 of a (potentially) recurring series of posts on the subject, I learned to use Linux’s awk command, which can be used to parse a text file and return the text that occurs after a specified input token.

The command is used as follows:

> cat textfile | awk -F "<input token text>" '{print $2}'

Cat the text file so its contents are streamed through the pipe. awk is then given <input token text> as its field separator (the -F option), so it splits each line of the file at that token. The part of the line preceding the token can be printed with {print $1} and the part after the token with {print $2}. For example, if a line reads “Error: disk full” and the separator is “Error: ”, then {print $2} prints “disk full”.

The right Kalman Filter model is hard to design

One can get reasonable accuracy with a Kalman filter even with a poorly hacked-together model. As a naive dabbler in these matters, I recently learned that although the Kalman filter states you observe might be filtered suitably, the hidden – or rather, unmeasured – states do not necessarily take on sensible values. Also, increasing the model complexity doesn’t necessarily improve matters if your fundamental assumptions about the model are flawed to begin with.

The measurement-noise and process-noise covariance matrices don’t change their values over time; they remain at their initialized values. However, the prior and posterior error covariances change constantly. The variances of the states start at fairly low values, then tend to increase as time progresses and stabilize at fairly high values. This is probably an artifact of a poor model choice.

It was also observed that the corrected state at the end of a cycle, when propagated forward in time through the model, yields the predicted state for the next cycle. (In retrospect this should have been obvious – but one has a lot to learn.)
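To make those last two observations concrete, here is a small constant-velocity Kalman filter sketch in Python/NumPy. The model and numbers are toy assumptions of mine, not the filter discussed above: Q and R stay at their initial values while P is updated every cycle, and the corrected state pushed through the motion model is exactly the next cycle’s prediction.

import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model: state = (position, velocity)
H = np.array([[1.0, 0.0]])              # only position is measured; velocity is a hidden state
Q = 0.01 * np.eye(2)                    # process noise covariance (never changed below)
R = np.array([[0.5]])                   # measurement noise covariance (never changed below)

x = np.array([[0.0], [1.0]])            # initial state estimate
P = np.eye(2)                           # initial state error covariance

for z in [1.1, 1.9, 3.2, 3.8, 5.1]:     # made-up noisy position measurements
    # Predict: propagate the previous corrected state through the model.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction with the new measurement.
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x_pred + K @ (np.array([[z]]) - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred

    print("corrected state:", x.ravel(), " trace(P):", np.trace(P))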