%0 Journal Article
%J Journal of microscopy
%D 2013
%T Segmenting time-lapse phase contrast images of adjacent NIH 3T3 cells
%A Chalfoun, J.
%A Kociolek, M.
%A Dima, A.
%A Halter, M.
%A Cardone, Antonio
%A Peskin, A.
%A Bajcsy, P.
%A Brady, M.
%K Animals
%K Cell Adhesion
%K Cell Count
%K Cell Division
%K Cell Shape
%K Computational Biology
%K Fibroblasts
%K Image Processing, Computer-Assisted
%K Mice
%K Microscopy, Phase-Contrast
%K NIH 3T3 Cells
%K Reproducibility of results
%K Sensitivity and Specificity
%K Time-Lapse Imaging
%X We present a new method for segmenting phase contrast images of NIH 3T3 fibroblast cells that is accurate even when cells are physically in contact with each other. Segmentation of cells in contact poses a challenge to the accurate automation of cell counting, tracking and lineage modelling in cell biology. The segmentation method presented in this paper consists of (1) background reconstruction to obtain noise-free foreground pixels and (2) incorporation of biological insight about dividing and nondividing cells into the segmentation process to achieve reliable separation of foreground pixels, defined as pixels associated with individual cells. The segmentation results for a time-lapse image stack were compared against 238 manually segmented images (8219 cells) provided by experts, which we consider as reference data. We chose two metrics to measure the accuracy of segmentation: the 'Adjusted Rand Index', which compares similarities at a pixel level between masks resulting from manual and automated segmentation, and the 'Number of Cells per Field' (NCF), which compares the number of cells identified in the field by manual versus automated analysis. Our results show that the automated segmentation, compared to manual segmentation, has an average Adjusted Rand Index of 0.96 (1 being a perfect match) with a standard deviation of 0.03, and an average NCF difference of 5.39% with a standard deviation of 4.6%.
%B Journal of microscopy
%V 249
%P 41-52
%8 2013 Jan
%G eng
%N 1
%1 http://www.ncbi.nlm.nih.gov/pubmed/23126432?dopt=Abstract
%R 10.1111/j.1365-2818.2012.03678.x
%0 Journal Article
%J IEEE Transactions on Image Processing
%D 2010
%T Robust Height Estimation of Moving Objects From Uncalibrated Videos
%A Shao, Jie
%A Zhou, S. K.
%A Chellappa, Rama
%K algorithms
%K Biometry
%K Calibration
%K EM algorithm
%K geometric properties
%K Geometry
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K Imaging, Three-Dimensional
%K least median of squares
%K least squares approximations
%K MOTION
%K motion information
%K multiframe measurements
%K Pattern Recognition, Automated
%K Reproducibility of results
%K Robbins-Monro stochastic approximation
%K robust height estimation
%K Sensitivity and Specificity
%K Signal Processing, Computer-Assisted
%K stochastic approximation
%K Subtraction Technique
%K tracking data
%K uncalibrated stationary camera
%K uncalibrated videos
%K uncertainty analysis
%K vanishing point
%K video metrology
%K Video Recording
%K video signal processing
%X This paper presents an approach for video metrology. From videos acquired by an uncalibrated stationary camera, we first recover the vanishing line and the vertical point of the scene based upon tracking moving objects that primarily lie on a ground plane. Using geometric properties of moving objects, a probabilistic model is constructed for simultaneously grouping trajectories and estimating vanishing points. Then we apply a single view mensuration algorithm to each of the frames to obtain height measurements. Finally, we fuse the multiframe measurements using the least median of squares (LMedS) as a robust cost function and the Robbins-Monro stochastic approximation (RMSA) technique. This method requires less human supervision and offers greater flexibility and robustness. From the uncertainty analysis, we conclude that the method with auto-calibration is robust in practice. Results are shown based upon realistic tracking data from a variety of scenes.
%B IEEE Transactions on Image Processing
%V 19
%P 2221 - 2232
%8 2010/08//
%@ 1057-7149
%G eng
%N 8
%R 10.1109/TIP.2010.2046368
%0 Conference Paper
%B IEEE International Conference on Bioinformatics and Biomedicine, 2009. BIBM '09
%D 2009
%T Inexact Local Alignment Search over Suffix Arrays
%A Ghodsi, M.
%A Pop, Mihai
%K bacteria
%K Bioinformatics
%K biology computing
%K Computational Biology
%K Costs
%K DNA
%K DNA homology searches
%K DNA sequences
%K Educational institutions
%K generalized heuristic
%K genes
%K Genetics
%K genome alignment
%K Genomics
%K human
%K inexact local alignment search
%K inexact seeds
%K local alignment
%K local alignment tools
%K memory efficient suffix array
%K microorganisms
%K molecular biophysics
%K mouse
%K Organisms
%K Sensitivity and Specificity
%K sequences
%K suffix array
%K USA Councils
%X We describe an algorithm for finding approximate seeds for DNA homology searches. In contrast to previous algorithms that use exact or spaced seeds, our approximate seeds may contain insertions and deletions. We present a generalized heuristic for finding such seeds efficiently and prove that the heuristic does not affect sensitivity. We show how to adapt this algorithm to work over the memory-efficient suffix array with provably minimal overhead in running time. We demonstrate the effectiveness of our algorithm on two tasks: whole genome alignment of bacteria and alignment of the DNA sequences of 177 genes that are orthologous in human and mouse. We show that our algorithm achieves better sensitivity and uses less memory than other commonly used local alignment tools.
%B IEEE International Conference on Bioinformatics and Biomedicine, 2009. BIBM '09
%I IEEE
%P 83 - 87
%8 2009/11/01/4
%@ 978-0-7695-3885-3
%G eng
%R 10.1109/BIBM.2009.25
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2006
%T MCMC Data Association and Sparse Factorization Updating for Real Time Multitarget Tracking with Merged and Multiple Measurements
%A Khan, Zia
%A Balch, T.
%A Dellaert, F.
%K algorithms
%K approximate inference
%K Artificial intelligence
%K auxiliary variable particle filter
%K Computational efficiency
%K continuous state space
%K downdating
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K Inference algorithms
%K Information Storage and Retrieval
%K laser range scanner
%K Least squares approximation
%K least squares approximations
%K Least squares methods
%K linear least squares
%K Markov chain Monte Carlo
%K Markov processes
%K MCMC data association
%K merged measurements
%K Monte Carlo methods
%K Movement
%K multiple merged measurements
%K multitarget tracking
%K particle filter
%K particle filtering (numerical methods)
%K Particle filters
%K Pattern Recognition, Automated
%K probabilistic model
%K QR factorization
%K Radar tracking
%K Rao-Blackwellized
%K real time multitarget tracking
%K Reproducibility of results
%K Sampling methods
%K Sensitivity and Specificity
%K sensor fusion
%K sparse factorization updating
%K sparse least squares
%K State-space methods
%K Subtraction Technique
%K target tracking
%K updating
%X In several multitarget tracking applications, a target may return more than one measurement, and interacting targets may return multiple merged measurements. Existing algorithms for tracking and data association, initially applied to radar tracking, do not adequately address these types of measurements. Here, we introduce a probabilistic model for interacting targets that addresses both types of measurements simultaneously. We provide an algorithm for approximate inference in this model using a Markov chain Monte Carlo (MCMC)-based auxiliary variable particle filter. We Rao-Blackwellize the Markov chain to eliminate sampling over the continuous state space of the targets. A major contribution of this work is the use of sparse least squares updating and downdating techniques, which significantly reduce the computational cost per iteration of the Markov chain. Also, when combined with a simple heuristic, they enable the algorithm to correctly focus computation on interacting targets. We include experimental results on a challenging simulation sequence. We test the accuracy of the algorithm using two sensor modalities, video and laser range data. We also show that the algorithm exhibits real time performance on a conventional PC.
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 28
%P 1960 - 1972
%8 2006/12//
%@ 0162-8828
%G eng
%N 12
%0 Journal Article
%J IEEE Transactions on Pattern Analysis and Machine Intelligence
%D 2005
%T Motion segmentation using occlusions
%A Ogale, A. S.
%A Fermüller, Cornelia
%A Aloimonos, J.
%K 3D motion estimation
%K algorithms
%K Artificial intelligence
%K CAMERAS
%K Computer vision
%K Filling
%K hidden feature removal
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K image motion
%K Image motion analysis
%K Image segmentation
%K Layout
%K MOTION
%K Motion detection
%K Motion estimation
%K motion segmentation
%K Movement
%K Object detection
%K occlusion
%K occlusions
%K optical flow
%K ordinal depth
%K Pattern Recognition, Automated
%K Photography
%K Reproducibility of results
%K segmentation
%K Semiconductor device modeling
%K Sensitivity and Specificity
%K video analysis
%K Video Recording
%X We examine the key role of occlusions in finding independently moving objects instantaneously in a video obtained by a moving camera with a restricted field of view. In this problem, the image motion is caused by the combined effect of camera motion (egomotion), structure (depth), and the independent motion of scene entities. For a camera with a restricted field of view undergoing a small motion between frames, there exists, in general, a set of 3D camera motions compatible with the observed flow field even if only a small amount of noise is present, leading to ambiguous 3D motion estimates. If separable sets of solutions exist, motion-based clustering can detect one category of moving objects. Even if a single inseparable set of solutions is found, we show that occlusion information can be used to find ordinal depth, which is critical in identifying a new class of moving objects. In order to find ordinal depth, occlusions must not only be known, but they must also be filled (grouped) with optical flow from neighboring regions. We present a novel algorithm for filling occlusions and deducing ordinal depth under general circumstances. Finally, we describe another category of moving objects which is detected using cardinal comparisons between structure from motion and structure estimates from another source (e.g., stereo).
%B IEEE Transactions on Pattern Analysis and Machine Intelligence
%V 27
%P 988 - 992
%8 2005/06//
%@ 0162-8828
%G eng
%N 6
%R 10.1109/TPAMI.2005.123
%0 Journal Article
%J IEEE Transactions on Image Processing
%D 2004
%T Visual tracking and recognition using appearance-adaptive models in particle filters
%A Zhou, Shaohua Kevin
%A Chellappa, Rama
%A Moghaddam, B.
%K adaptive filters
%K adaptive noise variance
%K algorithms
%K appearance-adaptive model
%K Artificial intelligence
%K Cluster Analysis
%K Computer Graphics
%K Computer simulation
%K Feedback
%K Filtering
%K first-order linear predictor
%K hidden feature removal
%K HUMANS
%K Image Enhancement
%K Image Interpretation, Computer-Assisted
%K image recognition
%K Information Storage and Retrieval
%K Kinematics
%K Laboratories
%K Male
%K Models, Biological
%K Models, Statistical
%K MOTION
%K Movement
%K Noise robustness
%K Numerical Analysis, Computer-Assisted
%K occlusion analysis
%K Particle filters
%K Particle tracking
%K Pattern Recognition, Automated
%K Predictive models
%K Reproducibility of results
%K robust statistics
%K Sensitivity and Specificity
%K Signal Processing, Computer-Assisted
%K State estimation
%K statistical analysis
%K Subtraction Technique
%K tracking
%K Training data
%K visual recognition
%K visual tracking
%X We present an approach that incorporates appearance-adaptive models in a particle filter to realize robust visual tracking and recognition algorithms. Tracking requires modeling interframe motion and appearance changes, whereas recognition requires modeling appearance changes between frames and gallery images. In conventional tracking algorithms, the appearance model is either fixed or rapidly changing, and the motion model is simply a random walk with fixed noise variance. Also, the number of particles is typically fixed. All these factors make the visual tracker unstable. To stabilize the tracker, we propose the following modifications: an observation model arising from an adaptive appearance model, an adaptive velocity motion model with adaptive noise variance, and an adaptive number of particles. The adaptive-velocity model is derived using a first-order linear predictor based on the appearance difference between the incoming observation and the previous particle configuration. Occlusion analysis is implemented using robust statistics. Experimental results on tracking visual objects in long outdoor and indoor video sequences demonstrate the effectiveness and robustness of our tracking algorithm. We then perform simultaneous tracking and recognition by embedding them in a particle filter. For recognition purposes, we model the appearance changes between frames and gallery images by constructing the intra- and extrapersonal spaces. Accurate recognition is achieved even when confronted with pose and view variations.
%B IEEE Transactions on Image Processing
%V 13
%P 1491 - 1506
%8 2004/11//
%@ 1057-7149
%G eng
%N 11
%R 10.1109/TIP.2004.836152