Wednesday, January 24, 2007

Post. Response.

I agree that our paper should focus more on the mathematical technique. As such, I'm wondering about the types of real-world experiments we should include, because, of course, without some really good real-world results our goose is as good as cooked. It seems to me that there is a trade-off between two factors concerning the real-world data:

One factor pushes toward simplicity in whatever tests we include. In other words, we should keep the test descriptions quick and simple so as not to dilute the core of the paper, i.e., the focus on the mathematical technique. This means running tests that do not require much explanation or specialization, or at the very least distilling the explanations to as simple a form as possible.

At the same time, I feel we should be cautious about making our experiments too simple. My main concern is that simply comparing our method on a few data sets and publishing the results isn't going to be interesting enough to warrant acceptance at the conference. Naturally (and hopefully), once we run these experiments we'll find something interesting to say beyond a bare comparison of methods. Specifically, one piece of quantitative data we discussed in today's meeting was variance explained versus sparsity, which would suit the PCA and CCA methods (a quick sketch follows below). For the FDA methods, we talked about using the datasets from prior sparse SVM papers (something I'll look into). I could also talk to Ton about more sparse data sets.
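To make that concrete, here is a minimal sketch (in Python) of the kind of numbers I have in mind. The truncation-based sparsifier is just a placeholder for whichever sparse PCA method we end up using, and the data is random; the point is only the shape of the cardinality-versus-variance-explained curve we would report:

    import numpy as np

    def variance_explained(C, w):
        # fraction of total variance captured by the unit direction w
        w = w / np.linalg.norm(w)
        return float(w @ C @ w) / float(np.trace(C))

    def sparsity_curve(X, cards):
        # sparsify the leading PC by keeping its k largest-magnitude
        # loadings -- a stand-in for the actual sparse method
        C = np.cov(X, rowvar=False)
        _, vecs = np.linalg.eigh(C)
        pc1 = vecs[:, -1]  # leading eigenvector (eigh sorts ascending)
        curve = []
        for k in cards:
            w = np.zeros_like(pc1)
            keep = np.argsort(np.abs(pc1))[-k:]
            w[keep] = pc1[keep]
            curve.append((k, variance_explained(C, w)))
        return curve

    X = np.random.randn(200, 20)  # toy data, 200 samples x 20 features
    for k, ve in sparsity_curve(X, [1, 2, 5, 10, 20]):
        print("cardinality %2d: variance explained %.3f" % (k, ve))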

The approach we're taking for now, since we have a semi-comfortable cushion of time before the deadline, is to run this baby on as many tests as possible and choose a direction once the smoke clears. If the PCA methods work out best, we can focus on the speed/performance benefits of our new sparse method, whereas if we can include an interesting CCA test, that would be a good contribution in itself, since there are fewer sparse CCA papers out there.

On 1/24/07, Bharath Kumar SV wrote:

Hi Gert,

David and I have been working on these sparse eigenvalue problems. I
extended Jason Weston's framework, and it comes out to be a neat
framework and seems to be working, but we need to set up experiments
similar to your SIAM paper so we can compare this approach.
Interestingly, I was re-reading your sparse PCA paper and finally
realized that the reason for the SDP might be to make the problem
linear in its objective, since we are maximizing a convex objective.
Even without the zero-norm constraint, just using the L1 constraint,
the problem is still hard to solve, as it is the maximization of a
convex function. To overcome that, you guys did this lifting and then
relaxed the zero-norm constraint. Interestingly, for sparse PCA the
zero-norm relaxation effectively yields a 1-norm relaxation.
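To spell that out (this is my reading of the paper; the exact constants are worth double-checking), the starting problem is

    max_x  x^T A x    s.t.  x^T x = 1,  Card(x) <= k,

i.e., maximizing a convex function, which is hard. The lifting X = x x^T makes the objective linear:

    max_X  Tr(A X)    s.t.  Tr(X) = 1,  X is PSD,  rank(X) = 1.

Dropping the rank constraint and replacing the cardinality constraint by 1^T |X| 1 <= k (valid for feasible points, since Card(x) <= k and x^T x = 1 give ||x||_1^2 <= k by Cauchy-Schwarz) leaves a convex SDP. That last substitution is exactly where the zero-norm turns into a 1-norm.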

Now, if you remember the short report I showed you following Laurent
El Ghaoui's work: I realized that there we have a maximization of a
sum of max functions, which again is hard to solve. I am going to do a
Weston-type extension to that method and see how it relates to the
present algorithm we have.

We can target ICML with a more general paper on sparse component
analysis, where we talk about sparsity for generalized eigenvalue
problems and then show how PCA, CCA, FDA, sparse dictionary learning,
etc. fall into this framework. But somehow I am wondering whether that
dilutes the paper by not focusing on one thing. The other approach I
am thinking of is to do the generalized eigenvalue problem, recover
PCA as a special case, and run the type of experiments you people have
done showing the validity of the method; this covers the unsupervised
part. For the supervised part, we can use FDA and try to learn sparse
linear discriminants. People have done feature selection for SVMs, and
we would do it using FDA.
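To fix notation, here is a tiny dense (non-sparse) sanity check of that generalized eigenvalue view in Python. The scatter matrices are the textbook FDA ones; nothing here is our method, just the baseline problem the sparse versions would constrain:

    import numpy as np
    from scipy.linalg import eigh

    # Generalized eigenvalue problem  A w = lambda B w.
    # PCA: A = covariance, B = I.   FDA: A = S_b, B = S_w.
    # CCA fits as well, with block-structured A and B.

    def leading_gev(A, B):
        # eigh solves the symmetric-definite generalized problem;
        # eigenvalues are returned in ascending order
        vals, vecs = eigh(A, B)
        return vecs[:, -1]

    # toy two-class FDA
    X0 = np.random.randn(100, 5)        # class 0
    X1 = np.random.randn(100, 5) + 1.0  # class 1, shifted mean
    d = (X0.mean(axis=0) - X1.mean(axis=0))[:, None]
    S_b = d @ d.T                       # between-class scatter
    S_w = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    w = leading_gev(S_b, S_w)           # Fisher discriminant direction
    print("FDA direction:", np.round(w, 3))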

The other things we might give a passing reference, and we can sum it
all up in a good journal paper.

Please let us know what you think.

Regards
Bharath
