%0 Conference Paper
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%D 2011
%T Face tracking in low resolution videos under illumination variations
%A Zou, W.W.W.
%A Chellappa, Rama
%A Yuen, P.C.
%K Adaptation models
%K Computational modeling
%K Face
%K face recognition
%K face tracking
%K GLF-based tracker
%K gradient methods
%K gradient-logarithmic field feature
%K illumination variations
%K lighting
%K low resolution videos
%K low-resolution
%K particle filter
%K particle filter framework
%K particle filtering (numerical methods)
%K Robustness
%K tracking
%K video signal processing
%K Videos
%K Visual face tracking
%X In practical face tracking applications, the face region is often small and affected by illumination variations. We address this problem by using a new feature, namely the Gradient-Logarithmic Field (GLF) feature, in the particle filter framework. The GLF feature is robust under illumination variations, and the GLF-based tracker does not assume any model for the face being tracked and is effective in low-resolution video. Experimental results show that the proposed GLF-based tracker works well under significant illumination changes and outperforms some of the state-of-the-art algorithms.
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%I IEEE
%P 781 - 784
%8 2011/09/11/14
%@ 978-1-4577-1304-0
%G eng
%R 10.1109/ICIP.2011.6116672
%0 Conference Paper
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%D 2011
%T Illumination robust dictionary-based face recognition
%A Patel, Vishal M.
%A Wu, Tao
%A Biswas, S.
%A Phillips, P. J.
%A Chellappa, Rama
%K albedo
%K approximation theory
%K classification
%K competitive face recognition algorithms
%K Databases
%K Dictionaries
%K Face
%K face recognition
%K face recognition method
%K filtering theory
%K human face recognition
%K illumination robust dictionary-based face recognition
%K illumination variation
%K image representation
%K learned dictionary
%K learning (artificial intelligence)
%K lighting
%K lighting conditions
%K multiple images
%K nonstationary stochastic filter
%K publicly available databases
%K relighting
%K relighting approach
%K representation error
%K residual vectors
%K Robustness
%K simultaneous sparse approximations
%K simultaneous sparse signal representation
%K sparseness constraint
%K Training
%K varying illumination
%K vectors
%X In this paper, we present a face recognition method based on simultaneous sparse approximations under varying illumination. Our method consists of two main stages. In the first stage, a dictionary is learned for each face class based on given training examples which minimizes the representation error with a sparseness constraint. In the second stage, a test image is projected onto the span of the atoms in each learned dictionary. The resulting residual vectors are then used for classification. Furthermore, to handle changes in lighting conditions, we use a relighting approach based on a non-stationary stochastic filter to generate multiple images of the same person with different lighting. As a result, our algorithm has the ability to recognize human faces with good accuracy even when only a single or a very few images are provided for training. The effectiveness of the proposed method is demonstrated on publicly available databases and it is shown that this method is efficient and can perform significantly better than many competitive face recognition algorithms.
%B 2011 18th IEEE International Conference on Image Processing (ICIP)
%I IEEE
%P 777 - 780
%8 2011/09/11/14
%@ 978-1-4577-1304-0
%G eng
%R 10.1109/ICIP.2011.6116670
%0 Journal Article
%J Computer Vision and Image Understanding
%D 2010
%T Comparing and combining lighting insensitive approaches for face recognition
%A Gopalan, Raghuraman
%A Jacobs, David W.
%K Classifier comparison and combination
%K face recognition
%K Gradient direction
%K lighting
%X Face recognition under changing lighting conditions is a challenging problem in computer vision. In this paper, we analyze the relative strengths of different lighting insensitive representations, and propose efficient classifier combination schemes that result in better recognition rates. We consider two experimental settings, wherein we study the performance of different algorithms with (and without) prior information on the different illumination conditions present in the scene. In both settings, we focus on the problem of having just one exemplar per person in the gallery. Based on these observations, we design algorithms for integrating the individual classifiers to capture the significant aspects of each representation. We then illustrate the performance improvement obtained through our classifier combination algorithms on the illumination subset of the PIE dataset, and on the extended Yale-B dataset. Throughout, we consider galleries with both homogeneous and heterogeneous lighting conditions.
%B Computer Vision and Image Understanding
%V 114
%P 135 - 145
%8 2010/01//
%@ 1077-3142
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1077314209001210
%N 1
%R 10.1016/j.cviu.2009.07.005
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2010
%T Online Empirical Evaluation of Tracking Algorithms
%A Wu, Hao
%A Sankaranarayanan, A. C.
%A Chellappa, Rama
%K Back
%K Biomedical imaging
%K Computer vision
%K Filtering
%K formal model validation techniques
%K formal verification
%K ground truth
%K Kanade Lucas Tomasi feature tracker
%K Karhunen-Loeve transforms
%K lighting
%K Markov processes
%K mean shift tracker
%K model validation
%K online empirical evaluation
%K particle filtering (numerical methods)
%K Particle filters
%K Particle tracking
%K performance evaluation
%K receiver operating characteristic curves
%K Robustness
%K SNR
%K Statistics
%K Surveillance
%K time reversed Markov chain
%K tracking
%K tracking algorithms
%K visual tracking
%X Evaluation of tracking algorithms in the absence of ground truth is a challenging problem. There exist a variety of approaches for this problem, ranging from formal model validation techniques to heuristics that look for mismatches between track properties and the observed data. However, few of these methods scale up to the task of visual tracking, where the models are usually nonlinear and complex and typically lie in a high-dimensional space. Further, scenarios that cause track failures and/or poor tracking performance are also quite diverse for the visual tracking problem. In this paper, we propose an online performance evaluation strategy for tracking systems based on particle filters using a time-reversed Markov chain. The key intuition of our proposed methodology relies on the time-reversible nature of physical motion exhibited by most objects, which in turn should be possessed by a good tracker. In the presence of tracking failures due to occlusion, low SNR, or modeling errors, this reversible nature of the tracker is violated. We use this property for detection of track failures. To evaluate the performance of the tracker at time instant t, we use the posterior of the tracking algorithm to initialize a time-reversed Markov chain. We compute the posterior density of track parameters at the starting time t = 0 by filtering back in time to the initial time instant. The distance between the posterior density of the time-reversed chain (at t = 0) and the prior density used to initialize the tracking algorithm forms the decision statistic for evaluation. It is observed that when the data are generated by the underlying models, the decision statistic takes a low value. We provide a thorough experimental analysis of the evaluation methodology. Specifically, we demonstrate the effectiveness of our approach for tackling common challenges such as occlusion, pose, and illumination changes and provide the Receiver Operating Characteristic (ROC) curves. Finally, we also show the applicability of the core ideas of the paper to other tracking algorithms such as the Kanade-Lucas-Tomasi (KLT) feature tracker and the mean-shift tracker.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 32
%P 1443 - 1458
%8 2010/08//
%@ 0162-8828
%G eng
%N 8
%R 10.1109/TPAMI.2009.135
%0 Conference Paper
%B 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
%D 2010
%T Tracking via object reflectance using a hyperspectral video camera
%A Nguyen, Hien Van
%A Banerjee, A.
%A Chellappa, Rama
%K Computer vision
%K electronic design
%K hyperspectral datacubes
%K hyperspectral image analysis
%K Hyperspectral imaging
%K Hyperspectral sensors
%K hyperspectral video camera
%K Image motion analysis
%K Image sensors
%K lighting
%K Motion estimation
%K motion prediction
%K Object detection
%K object reflectance tracking
%K random projection
%K Reflectivity
%K robust methods
%K Robustness
%K sensor design
%K spectral detection
%K Surveillance
%K tracking
%K Video surveillance
%X Recent advances in electronics and sensor design have enabled the development of a hyperspectral video camera that can capture hyperspectral datacubes at near video rates. The sensor offers the potential for novel and robust methods for surveillance by combining methods from computer vision and hyperspectral image analysis. Here, we focus on the problem of tracking objects through challenging conditions, such as rapid illumination and pose changes, occlusions, and in the presence of confusers. A new framework that incorporates radiative transfer theory to estimate object reflectance and the mean shift algorithm to simultaneously track the object based on its reflectance spectra is proposed. The combination of spectral detection and motion prediction enables the tracker to be robust against abrupt motions, and facilitate fast convergence of the mean shift tracker. In addition, the system achieves good computational efficiency by using random projection to reduce spectral dimension. The tracker has been evaluated on real hyperspectral video data.
%B 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
%I IEEE
%P 44 - 51
%8 2010/06/13/18
%@ 978-1-4244-7029-7
%G eng
%R 10.1109/CVPRW.2010.5543780
%0 Conference Paper
%B IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009
%D 2009
%T Combining powerful local and global statistics for texture description
%A Xu, Yong
%A Huang, Si-Bin
%A Ji, Hui
%A Fermüller, Cornelia
%K Computer science
%K discretized measurements
%K fractal geometry
%K Fractals
%K geometric transformations
%K global statistics
%K Histograms
%K illumination transformations
%K image classification
%K image resolution
%K Image texture
%K lighting
%K local measurements SIFT features
%K local statistics
%K MATHEMATICS
%K multifractal spectrum
%K multiscale representation
%K Power engineering and energy
%K Power engineering computing
%K Robustness
%K Solids
%K Statistics
%K texture description
%K UMD high-resolution dataset
%K wavelet frame system
%K Wavelet transforms
%X A texture descriptor is proposed, which combines local highly discriminative features with the global statistics of fractal geometry to achieve high descriptive power, but also invariance to geometric and illumination transformations. As local measurements SIFT features are estimated densely at multiple window sizes and discretized. On each of the discretized measurements the fractal dimension is computed to obtain the so-called multifractal spectrum, which is invariant to geometric transformations and illumination changes. Finally to achieve robustness to scale changes, a multi-scale representation of the multifractal spectrum is developed using a framelet system, that is, a redundant tight wavelet frame system. Experiments on classification demonstrate that the descriptor outperforms existing methods on the UIUC as well as the UMD high-resolution dataset.
%B IEEE Conference on Computer Vision and Pattern Recognition, 2009. CVPR 2009
%I IEEE
%P 573 - 580
%8 2009/06/20/25
%@ 978-1-4244-3992-8
%G eng
%R 10.1109/CVPR.2009.5206741
%0 Conference Paper
%B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
%D 2006
%T A Projective Invariant for Textures
%A Xu, Yong
%A Ji, Hui
%A Fermüller, Cornelia
%K Computer science
%K Computer vision
%K Educational institutions
%K Fractals
%K Geometry
%K Image texture
%K Level set
%K lighting
%K Robustness
%K Surface texture
%X Image texture analysis has received a lot of attention in the past years. Researchers have developed many texture signatures based on texture measurements, for the purpose of uniquely characterizing the texture. Existing texture signatures, in general, are not invariant to 3D transforms such as view-point changes and non-rigid deformations of the texture surface, which is a serious limitation for many applications. In this paper, we introduce a new texture signature, called the multifractal spectrum (MFS). It provides an efficient framework combining global spatial invariance and local robust measurements. The MFS is invariant under the bi-Lipschitz map, which includes view-point changes and non-rigid deformations of the texture surface, as well as local affine illumination changes. Experiments demonstrate that the MFS captures the essential structure of textures with quite low dimension.
%B 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition
%I IEEE
%V 2
%P 1932 - 1939
%8 2006///
%@ 0-7695-2597-0
%G eng
%R 10.1109/CVPR.2006.38
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2003
%T Lambertian reflectance and linear subspaces
%A Basri, R.
%A Jacobs, David W.
%K 2D query image
%K 4D linear space
%K 9D linear subspace
%K analytic characterization
%K convex optimization
%K convex set
%K convolution analog
%K distant light sources
%K image intensities
%K Lambertian reflectance
%K lighting
%K linear methods
%K linear programming
%K linear subspaces
%K nonnegative lighting functions
%K object recognition
%K reflectivity
%K spherical harmonics
%K surface normals
%X We prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 25
%P 218 - 233
%8 2003/02//
%@ 0162-8828
%G eng
%N 2
%R 10.1109/TPAMI.2003.1177153