Thanks to the power of search and the growing open access to scholarly work, I feel there is no longer a need to maintain a list of published papers on my website.

Instead, you can find most of my papers online at the sites below; pick your favorite (mine is arXiv).



For convenience, only the most recent official publications are listed below.

Google Scholar (sorted reverse chronologically)
DBLP (from 2002 to today)
arXiv (in posted order)
Microsoft Academic Search (sorted reverse chronologically)
Academia.edu

Active multi-view object recognition: A unifying view on online feature selection and view planning

Christian Potthast, Andreas Breitenmoser, Fei Sha, and Gaurav Sukhatme
Journal Paper: Robotics and Autonomous Systems, Volume 84, October 2016, Pages 31-47

Abstract

Many robots are limited in their operating capabilities, both computationally and energy-wise. A strong desire exists to keep computation cost and energy consumption to a minimum when executing tasks such as object recognition with a mobile robot. Adaptive action selection is a paradigm offering great flexibility in trading off the cost of acquiring information against making robust and reliable inference under uncertainty. In this paper, we study active multi-view object recognition and describe an information-theoretic framework that combines and unifies two common techniques: online feature selection for reducing computational costs and view planning for resolving ambiguities and occlusions. Our algorithm adaptively chooses between the two strategies of either selecting only the features that are most informative to the recognition, or moving to a new viewpoint that optimally reduces the expected uncertainty about the identity of the object. This two-step process allows us to keep the overall computation cost minimal while simultaneously increasing recognition accuracy. Extensive empirical studies on a large RGB-D dataset, and with two different feature sets, have validated the effectiveness of the proposed framework. Our experiments show that dynamic feature selection alone reduces the computation time at runtime by a factor of 2.5-6 and, when combined with viewpoint selection, significantly increases recognition accuracy, on average by 8%-18% absolute, compared to systems that do not use these two strategies. By establishing a link between active object recognition and change detection, we were further able to use our framework for the follow-up task of actively detecting object change. Furthermore, we have successfully demonstrated the framework's applicability to a low-powered quadcopter platform with limited operating time.
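To make the action-selection idea concrete, here is a minimal illustrative sketch, not the paper's exact criterion: a Bayesian belief over object classes is maintained, and at each step the action (computing one more feature, or moving to a new view) with the best expected entropy reduction per unit cost is chosen. All names, cost values, and the discrete measurement models are hypothetical.

```python
import numpy as np

def entropy(p):
    """Shannon entropy of a discrete belief (in nats)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def expected_info_gain(belief, likelihoods):
    """Expected entropy reduction from one discrete observation.

    likelihoods[c, z] = p(z | class c) for a candidate feature or view;
    this is a generic myopic information-gain score.
    """
    p_z = belief @ likelihoods                      # marginal p(z)
    gain = entropy(belief)
    for z, pz in enumerate(p_z):
        if pz > 0:
            post = belief * likelihoods[:, z] / pz  # Bayes update for outcome z
            gain -= pz * entropy(post)
    return gain

def select_action(belief, feature_models, view_models,
                  feature_cost=1.0, move_cost=10.0):
    """Pick the action with the best information gain per unit cost."""
    candidates = [(expected_info_gain(belief, L) / feature_cost, ('feature', i))
                  for i, L in enumerate(feature_models)]
    candidates += [(expected_info_gain(belief, L) / move_cost, ('move', j))
                   for j, L in enumerate(view_models)]
    return max(candidates)[1]
```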

Supervised Word Mover's Distance

Gao Huang, Chuan Guo, Matt Kusner, Yu Sun, Fei Sha, and Kilian Weinberger
Conference Paper: Proc. of NIPS, Barcelona, December 2016

Abstract

Accurately measuring the similarity between text documents lies at the core of many real-world applications of machine learning. These include web-search ranking, document recommendation, multi-lingual document matching, and article categorization. Recently, a new document metric, the word mover's distance (WMD), has been proposed with unprecedented results on kNN-based document classification. The WMD elevates high quality word embeddings to document metrics by formulating the distance between two documents as an optimal transport problem between the embedded words. However, the document distances are entirely unsupervised and lack a mechanism to incorporate supervision when available. In this paper we propose a supervised metric learning variant of this distance, which we call the Supervised WMD (S-WMD). Our algorithm learns document distances that approximate the semantic meaning of similarity as well as the importance of individual words encoded in the supervision. This is achieved with an affine transformation of the underlying word embeddings, which is learned to minimize the stochastic leave-one-out nearest neighbor classification error on a per-document level. Empirically, S-WMD performs extremely well. We evaluate the distance on eight real-world text classification tasks on which S-WMD consistently outperforms almost all of our 30 competitive baselines.
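For readers who want to see the optimal-transport building block in code, below is a minimal sketch of WMD using the POT (Python Optimal Transport) library. The optional linear map `A` stands in for the learned transformation of the supervised variant; how `A` is trained, and the names used here, are assumptions rather than the authors' implementation.

```python
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def wmd(X1, w1, X2, w2, A=None):
    """Word mover's distance between two documents.

    X1, X2 : (n1, d), (n2, d) embeddings of each document's unique words
    w1, w2 : normalized bag-of-words weights (each sums to 1)
    A      : optional (d', d) linear map, a stand-in for the learned
             transformation in S-WMD (None recovers plain WMD)
    """
    if A is not None:
        X1, X2 = X1 @ A.T, X2 @ A.T
    M = ot.dist(X1, X2, metric='euclidean')  # ground cost between words
    return ot.emd2(w1, w2, M)                # exact optimal-transport cost
```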

A comparison between deep neural nets and kernel acoustic models for speech recognition

Zhiyun Lu, Dong Guo, Alireza Bagheri Garakani, Kuan Liu, Avner May, Linxi Fan, Michael Collins, Brian Kingsbury, Michael Picheny, and Fei Sha
Conference Paper: Proc. of ICASSP, Shanghai, April 2016

Abstract

We study large-scale kernel methods for acoustic modeling and compare them to deep neural networks (DNNs) on performance metrics related to both acoustic modeling and recognition. Measured by perplexity and frame-level classification accuracy, kernel-based acoustic models are as effective as their DNN counterparts. However, in terms of token error rates, DNN models can be significantly better. We have discovered that this might be attributed to the DNN's unique strength in reducing both the perplexity and the entropy of the predicted posterior probabilities. Motivated by our findings, we propose a new technique, entropy regularized perplexity, for model selection. This technique can noticeably improve the recognition performance of both types of models and reduces the gap between them. While shown to be effective on Broadcast News, this technique could also be applicable to other tasks.
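The entropy-regularized criterion can be illustrated in a few lines of Python. This is only one plausible reading of the idea, a (log-)perplexity term plus a posterior-entropy term; the paper's exact functional form and weighting may differ.

```python
import numpy as np

def entropy_regularized_perplexity(posteriors, labels, lam=1.0):
    """Illustrative model-selection score: perplexity plus posterior entropy.

    posteriors : (n_frames, n_states) predicted p(state | frame)
    labels     : (n_frames,) ground-truth state indices
    lam        : regularization weight (hypothetical)
    """
    p = np.clip(posteriors, 1e-12, 1.0)
    cross_ent = -np.mean(np.log(p[np.arange(len(labels)), labels]))  # log-perplexity
    avg_ent = -np.mean(np.sum(p * np.log(p), axis=1))                # posterior entropy
    # lower is better: accurate (low perplexity) and confident (low entropy)
    return cross_ent + lam * avg_ent
```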

Video Summarization with Long Short-term Memory

Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman
Conference Paper: Proc. of ECCV, Amsterdam, October 2016

Abstract

We propose a novel supervised learning technique for summarizing videos by automatically selecting keyframes or key subshots. Casting the problem as a structured prediction problem on sequential data, our main idea is to use Long Short-Term Memory (LSTM), a special type of recurrent neural network, to model the variable-range dependencies entailed in the task of video summarization. Our learning models attain state-of-the-art results on two benchmark video datasets. Detailed analysis justifies the design of the models. In particular, we show that it is crucial to take into consideration the sequential structures in videos and to model them. Besides advances in modeling techniques, we introduce techniques to address the need for a large amount of annotated data for training complex learning models. There, our main idea is to exploit the existence of auxiliary annotated video datasets, albeit heterogeneous in visual styles and contents. Specifically, we show that domain adaptation techniques can improve summarization by reducing the discrepancies in statistical properties across those datasets.
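As a concrete illustration of the sequence-modeling idea, here is a minimal PyTorch sketch of a bidirectional LSTM that scores per-frame importance. The paper's actual models are richer (e.g., the LSTM can be combined with a determinantal point process to encourage diverse keyframes), and the dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

class FrameScorer(nn.Module):
    """Bidirectional LSTM producing one importance score per frame."""

    def __init__(self, feat_dim=1024, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, frames):               # frames: (batch, T, feat_dim)
        h, _ = self.lstm(frames)             # (batch, T, 2*hidden)
        return self.head(h).squeeze(-1)      # (batch, T) importance scores
```

Keyframes can then be chosen by taking the top-scoring frames, or subshots by aggregating scores within shot boundaries.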

An Empirical Study and Analysis of Generalized Zero-Shot Learning for Object Recognition in the Wild

Wei-Lun Chao, Soravit Changpinyo, Boqing Gong, and Fei Sha
Conference Paper: Proc. of ECCV, Amsterdam, October 2016

Abstract

We investigate the problem of generalized zero-shot learning (GZSL). GZSL relaxes the unrealistic assumption in conventional ZSL that test data belong only to unseen novel classes. In GZSL, test data might also come from seen classes, and the labeling space is the union of both types of classes. We show empirically that a straightforward application of the classifiers provided by existing ZSL approaches does not perform well in the setting of GZSL. Motivated by this, we propose a surprisingly simple but effective method to adapt ZSL approaches for GZSL. The main idea is to introduce a calibration factor to calibrate the classifiers for both seen and unseen classes so as to balance two conflicting forces: recognizing data from seen classes and recognizing data from unseen ones. We develop a new performance metric called the Area Under Seen-Unseen accuracy Curve (AUSUC) to characterize this tradeoff. We demonstrate the utility of this metric by analyzing existing ZSL approaches applied to the generalized setting. Extensive empirical studies reveal strengths and weaknesses of those approaches on three well-studied benchmark datasets, including the large-scale ImageNet Fall 2011 with 21,000 unseen categories. We complement our comparative studies of learning methods by further establishing an upper bound on the performance limit of GZSL. There, our idea is to use class-representative visual features as the idealized semantic embeddings. We show that there is a large gap between the performance of existing approaches and the performance limit, suggesting that improving the quality of class semantic embeddings is vital to improving zero-shot learning.
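The calibration idea and the AUSUC metric are simple enough to sketch. Below, seen-class scores are penalized by a calibration factor before the argmax, and the seen-unseen accuracy curve is traced by sweeping that factor. Note this simplified sketch uses per-sample accuracies, whereas the paper's evaluation may aggregate per class.

```python
import numpy as np

def calibrated_predict(scores, seen_mask, gamma):
    """Subtract a calibration factor from seen-class scores before argmax.

    scores    : (n, n_classes) classifier scores over the union of classes
    seen_mask : (n_classes,) 1 for seen classes, 0 for unseen
    """
    return np.argmax(scores - gamma * seen_mask, axis=1)

def ausuc(scores, labels, seen_mask, gammas):
    """Area under the seen-unseen accuracy curve, swept over gamma."""
    is_seen = seen_mask[labels].astype(bool)
    pts = []
    for g in gammas:
        pred = calibrated_predict(scores, seen_mask, g)
        acc_s = np.mean(pred[is_seen] == labels[is_seen])     # seen accuracy
        acc_u = np.mean(pred[~is_seen] == labels[~is_seen])   # unseen accuracy
        pts.append((acc_s, acc_u))
    pts.sort()
    xs, ys = zip(*pts)
    return np.trapz(ys, xs)   # trapezoidal area under the tradeoff curve
```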

Summary Transfer: Exemplar-Based Subset Selection for Video Summarization

Ke Zhang, Wei-Lun Chao, Fei Sha, and Kristen Grauman
Conference Paper: Proc. of CVPR, Las Vegas, June 2016

Abstract

Video summarization is of unprecedented importance in helping us digest, browse, and search today's ever-growing video collections. We propose a novel subset selection technique that leverages supervision in the form of human-created summaries to perform automatic keyframe-based video summarization. The main idea is to nonparametrically transfer summary structures from annotated videos to unseen test videos. We show how to extend our method to exploit semantic side information about the video's category/genre, guiding the transfer process with those training videos that are semantically consistent with the test input. We also show how to generalize our method to subshot-based summarization, which not only reduces computational costs but also provides more flexible ways of defining visual similarity across subshots spanning several frames. We conduct extensive evaluation on several benchmarks and demonstrate promising results, outperforming existing methods in several settings.
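A drastically simplified sketch of the nonparametric transfer idea: each training frame votes for the test frames it resembles, positively if a human annotator kept it in the summary and negatively otherwise. The paper transfers richer summary structures and uses category information to select source videos; everything below is an illustrative approximation with hypothetical names.

```python
import numpy as np

def transfer_scores(test_feats, train_sets, sigma=1.0):
    """Score test frames by similarity to annotated training frames.

    test_feats : (T, d) frame features of the test video
    train_sets : list of (feats (Ti, d), keep (Ti,) 0/1 summary labels)
    """
    scores = np.zeros(len(test_feats))
    for feats, keep in train_sets:
        d2 = ((test_feats[:, None] - feats[None]) ** 2).sum(-1)
        w = np.exp(-d2 / (2 * sigma ** 2))   # (T, Ti) frame affinities
        scores += w @ (2 * keep - 1)         # +1 if kept, -1 if discarded
    return scores                            # pick the top-k as keyframes
```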

Synthesized Classifiers for Zero-Shot Learning

Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha
Conference Paper: Proc. of CVPR, Las Vegas, June 2016

Abstract

Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them to the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of "phantom" object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.
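The synthesis step admits a compact sketch: classifier weights for any class are a convex combination of phantom bases, with combination coefficients derived from distances in the semantic space. How the bases are learned from labeled seen-class data is omitted here, and the Gaussian-kernel form of the coefficients is an assumption for illustration.

```python
import numpy as np

def synthesize_classifiers(A, B, V, sigma=1.0):
    """Synthesize per-class classifier weights from phantom bases.

    A : (C, s) semantic embeddings of the target (e.g., unseen) classes
    B : (R, s) semantic coordinates of the phantom classes
    V : (R, d) model-space coordinates (bases), learned on seen classes
    Returns (C, d) classifier weights, one row per class.
    """
    d2 = ((A[:, None] - B[None]) ** 2).sum(-1)   # (C, R) squared distances
    S = np.exp(-d2 / (2 * sigma ** 2))
    S /= S.sum(axis=1, keepdims=True)            # convex combination weights
    return S @ V                                 # aligns semantic and model spaces
```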

Metric Learning for Ordinal Data

Yuan Shi, Wenzhe Li, and Fei Sha
Conference Paper: Proc. of AAAI, Phoenix, Arizona, February 2016

Abstract

A large amount of ordinal-valued data exists in many domains, including medical and health science, social science, economics, and political science. Unlike image and speech datasets of real-valued data, learning with ordinal variables (i.e., features) presents unique challenges. In particular, the nominal differences between those feature values, which are just ranks, do not necessarily correspond to the real distances between the corresponding categories. Given their prevalence, it is imperative to develop machine learning algorithms that specifically address the need to model and infer with such data. In this paper, we present a novel metric learning algorithm that takes into consideration the nature of ordinal data. Our approach treats ordinal values as latent variables in intervals. Our algorithm then learns what those intervals are, as well as distance metrics to measure distances between latent variables in those intervals. We derive the corresponding optimization algorithm and demonstrate how it can be solved effectively. Experimental results show that the proposed approach significantly improves over baselines that do not explicitly model ordinal features.
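To make the interval idea concrete, here is a simplified Python sketch in which each ordinal rank is replaced by the midpoint of a learned interval before a learned Mahalanobis metric is applied. The paper instead treats the values as latent variables within the intervals and optimizes the intervals and metric jointly; this midpoint surrogate is only an illustration.

```python
import numpy as np

def ordinal_mahalanobis(x1, x2, thresholds, L):
    """Distance between two vectors of ordinal ranks.

    x1, x2     : (d,) integer ranks per feature, in 0..K-1
    thresholds : (d, K+1) learned, increasing interval boundaries
    L          : (d, d) factor of the learned metric M = L.T @ L
    """
    def embed(x):
        idx = np.arange(len(x))
        lo = thresholds[idx, x]        # interval lower bounds
        hi = thresholds[idx, x + 1]    # interval upper bounds
        return (lo + hi) / 2.0         # midpoint surrogate for the latent value
    diff = L @ (embed(x1) - embed(x2))
    return np.sqrt(diff @ diff)
```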