Rishabh Mehrotra

Areas of Interest

[To be updated soon]

Motivated by the diverse scope of Machine Learning algorithms and the multidisciplinarity they foster, I have been involved in independent reading and experimentation in this field for almost a year now. I have developed a keen interest in methods that learn rich, deep and sparse representations of data. The topics that interest me include: Transfer Learning, Unsupervised Feature Learning and Sparse Representations. Using these techniques, the main goal of my research is to make use of the vast amounts of available unlabelled data and to scale machine learning algorithms to large-scale learning problems using sparseness constraints.

Examples of research topics that I am interested in include:


Transfer Learning

Most machine learning algorithms work well only under the common assumption that the training and test data are drawn from the same distribution. When the distribution changes, models typically have to be rebuilt from newly collected training data. Transfer learning between two task domains is an effort to reduce the cost of recollecting such data. A closely related area is domain adaptation, where the training examples are drawn from a source domain distinct from the target domain from which the test examples are drawn.
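
A minimal sketch of one common transfer learning recipe, assuming PyTorch and a recent torchvision are available: features learned on a large source task (ImageNet) are frozen and only a new output layer is retrained for a hypothetical 10-class target task. The specific model and class count are illustrative, not a fixed choice of mine.

import torch
import torch.nn as nn
from torchvision import models

# Network whose features were learned on the source domain (ImageNet)
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the transferred feature layers
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new head for the target task (10 classes here)
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head's parameters are updated on the (smaller) target dataset
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.01)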


Unsupervised Feature Learning

Over the years, researchers have developed hand-engineered features for various learning tasks, which involves spending a long time extracting a meaningful representation from the data at hand. Unsupervised feature learning and deep learning methods instead learn a good representation of the input automatically from unlabeled data. Methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines have shown promise and have already been applied successfully to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics.
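
A minimal sketch of unsupervised feature learning via sparse coding, assuming scikit-learn and NumPy are available: a dictionary of basis images is learned from unlabeled digit images, and each image is then re-encoded as sparse coefficients over the learned atoms. The number of atoms and the sparsity level are illustrative.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import MiniBatchDictionaryLearning

X = load_digits().data          # 8x8 digit images, flattened; labels are unused
X = X - X.mean(axis=0)          # center the data

# Learn 32 atoms; each image will be coded with at most 5 of them
dico = MiniBatchDictionaryLearning(n_components=32,
                                   transform_algorithm="omp",
                                   transform_n_nonzero_coefs=5,
                                   random_state=0)
codes = dico.fit(X).transform(X)   # sparse codes: the new feature representation

print(codes.shape, "avg nonzeros per image:", np.count_nonzero(codes) / len(codes))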


Sparse Representations

Sparse representations are representations that account for most or all of the information in a signal with a linear combination of a small number of elementary signals called atoms. Sparseness is one of the reasons for the extensive use of popular transforms such as the DFT, the wavelet transform and the SVD. Sparse and low-rank models have led to increasingly concise descriptions of high-dimensional data. Sparse representations are therefore increasingly recognized as delivering high performance in applications as diverse as noise reduction, compressive sensing, feature extraction, pattern classification and blind source separation.
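
A minimal worked example of the definition above, assuming scikit-learn and NumPy are available: a signal is synthesized from three atoms of a random overcomplete dictionary, and Orthogonal Matching Pursuit recovers a 3-sparse representation of it. The dictionary, signal and sizes are synthetic, purely for illustration.

import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.RandomState(0)
n_features, n_atoms, n_nonzero = 64, 256, 3

# Overcomplete dictionary with unit-norm atoms
D = rng.randn(n_features, n_atoms)
D /= np.linalg.norm(D, axis=0)

# Synthesize a signal as a linear combination of 3 randomly chosen atoms
true_idx = rng.choice(n_atoms, n_nonzero, replace=False)
x = D[:, true_idx] @ rng.randn(n_nonzero)

# Recover a 3-sparse representation of x over D
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero)
omp.fit(D, x)
recovered = np.flatnonzero(omp.coef_)

print("true atoms:", sorted(true_idx), "recovered atoms:", sorted(recovered))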


I am still at a very nascent stage of learning and try to keep myself up to date with the latest developments in the research community.

If you are interested, feel free to contact me as exciting collaborations are always welcome.