%0 Conference Paper
%B Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
%D 2010
%T Semi non-intrusive training for cell-phone camera model linkage
%A Chuang, Wei-Hong
%A Wu, M.
%K cell phone camera model linkage
%K color interpolation feature
%K component forensics
%K semi nonintrusive training
%K training content dependency
%K variance analysis
%K testing accuracy
%K training complexity
%K cameras
%K cellular radio
%K computer forensics
%K digital forensics
%K image colour analysis
%K image matching
%K interpolation
%X This paper presents a study of cell-phone camera model linkage that matches digital images against potential makes/models of cell-phone camera sources using camera color interpolation features. The matching performance is examined, and its dependency on the content of the training image collection is evaluated via variance analysis. Training content dependency can be handled under the framework of component forensics, where cell-phone camera model linkage is viewed as a combination of semi non-intrusive training and completely non-intrusive testing. This viewpoint explicitly suggests testing accuracy as the goodness criterion for training data selection. It also motivates alternative training procedures based on other criteria, such as training complexity, for which preliminary but promising experimental designs and results have been obtained.
%B Information Forensics and Security (WIFS), 2010 IEEE International Workshop on
%P 1 - 6
%8 2010/12//
%G eng
%R 10.1109/WIFS.2010.5711468
%0 Conference Paper
%B Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on
%D 2009
%T Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos
%A Gupta, A.
%A Srinivasan, P.
%A Shi, Jianbo
%A Davis, Larry S.
%K video understanding
%K visually grounded storyline model
%K AND-OR graph
%K human action recognition
%K human activity analysis
%K integer programming framework
%K plot construction
%K semantic meaning
%K spatio-temporal constraint
%K storyline extraction
%K graph theory
%K image representation
%K integer programming
%K learning (artificial intelligence)
%K spatiotemporal phenomena
%K video annotation
%K video coding
%X Analyzing videos of human activities involves not only recognizing actions (typically based on their appearances), but also determining the story/plot of the video. The storyline of a video describes causal relationships between actions. Beyond recognition of individual actions, discovering causal relationships helps to better understand the semantic meaning of the activities. We present an approach to learn a visually grounded storyline model of videos directly from weakly labeled data. The storyline model is represented as an AND-OR graph, a structure that can compactly encode storyline variation across videos. The edges in the AND-OR graph correspond to causal relationships which are represented in terms of spatio-temporal constraints. We formulate an Integer Programming framework for action recognition and storyline extraction using the storyline model and visual groundings learned from training data.
%B Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on
%P 2012 - 2019
%8 2009/06//
%G eng
%R 10.1109/CVPR.2009.5206492
%0 Conference Paper
%B Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on
%D 2007
%T Coarse-to-Fine Event Model for Human Activities
%A Cuntoor, N. P.
%A Chellappa, Rama
%K coarse-to-fine event model
%K human activities
%K event probabilities
%K hidden Markov model framework
%K activity recognition
%K spatial resolution reduction
%K video browsing
%K video sequences
%K stability
%K UCF indoor human action dataset
%K TSA airport tarmac surveillance dataset
%K hidden Markov models
%K image representation
%K image resolution
%K image sequences
%K video signal processing
%K surveillance
%X We analyze coarse-to-fine hierarchical representation of human activities in video sequences. It can be used for efficient video browsing and activity recognition. Activities are modeled using a sequence of instantaneous events. Events in activities can be represented in a coarse-to-fine hierarchy in several ways, i.e., there may not be a unique hierarchical structure. We present five criteria and quantitative measures for evaluating their effectiveness. The criteria are minimalism, stability, consistency, accessibility and applicability. It is desirable to develop activity models that rank highly on these criteria at all levels of hierarchy. In this paper, activities are represented as a sequence of event probabilities computed using the hidden Markov model framework. Two aspects of hierarchies are analyzed: the effect of reduced frame rate on the accuracy of events detected at a finer scale, and the effect of reduced spatial resolution on activity recognition. Experiments using the UCF indoor human action dataset and the TSA airport tarmac surveillance dataset show encouraging results.
%B Acoustics, Speech and Signal Processing, 2007. ICASSP 2007. IEEE International Conference on
%V 1
%P I-813 - I-816
%8 2007/04//
%G eng
%R 10.1109/ICASSP.2007.366032
%0 Conference Paper
%B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
%D 2007
%T Learning Higher-order Transition Models in Medium-scale Camera Networks
%A Farrell, R.
%A Doermann, David
%A Davis, Larry S.
%K Bayesian framework
%K higher-order transition model
%K data association problem
%K incremental approach
%K medium-scale camera network
%K most likely partition
%K partition likelihood
%K multicamera tracking
%K object movement
%K probabilistic graphical model
%K video surveillance network
%K Bayes methods
%K higher order statistics
%K image fusion
%K iterative methods
%K learning (artificial intelligence)
%K object tracking
%K optical tracking
%K probability
%K smart cameras
%K video surveillance
%X We present a Bayesian framework for learning higher-order transition models in video surveillance networks. Such higher-order models describe object movement between cameras in the network and have greater predictive power for multi-camera tracking than camera adjacency alone. These models also provide inherent resilience to camera failure, filling in gaps left by single or even multiple non-adjacent camera failures. Our approach to estimating higher-order transition models relies on the accurate assignment of camera observations to the underlying trajectories of objects moving through the network. We address this data association problem by gathering the observations and evaluating alternative partitions of the observation set into individual object trajectories. Searching the complete partition space is intractable, so an incremental approach is taken, iteratively adding observations and pruning unlikely partitions. Partition likelihood is determined by the evaluation of a probabilistic graphical model. When the algorithm has considered all observations, the most likely (MAP) partition is taken as the true object trajectories. From these recovered trajectories, the higher-order statistics we seek can be derived and employed for tracking. The partitioning algorithm we present is parallel in nature and can be readily extended to distributed computation in medium-scale smart camera networks.
%B Computer Vision, 2007. ICCV 2007. IEEE 11th International Conference on
%P 1 - 8
%8 2007/10//
%G eng
%R 10.1109/ICCV.2007.4409203
%0 Conference Paper
%B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on
%D 2004
%T 3D model refinement using surface-parallax
%A Agrawala, Ashok K.
%A Chellappa, Rama
%K 3D model refinement
%K surface parallax
%K plane-parallax recovery
%K arbitrary surfaces
%K coarse incomplete depth map
%K digital elevation map (DEM)
%K camera motion estimation
%K epipolar field
%K adaptive windowing
%K intensity image sequence
%K motion compensation
%K urban environments
%K computer vision
%K image reconstruction
%K image sequences
%X We present an approach to update and refine coarse 3D models of urban environments from a sequence of intensity images using surface parallax. This generalizes the plane-parallax recovery methods to surface-parallax using arbitrary surfaces. A coarse and potentially incomplete depth map of the scene obtained from a digital elevation map (DEM) is used as a reference surface which is refined and updated using this approach. The reference depth map is used to estimate the camera motion and the motion of the 3D points on the reference surface is compensated. The resulting parallax, which is an epipolar field, is estimated using an adaptive windowing technique and used to obtain the refined depth map.
%B Acoustics, Speech, and Signal Processing, 2004. Proceedings. (ICASSP '04). IEEE International Conference on
%V 3
%P iii-285 - iii-288 Vol. 3
%8 2004/05//
%G eng
%R 10.1109/ICASSP.2004.1326537
%0 Conference Paper
%B Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
%D 2004
%T Iterative figure-ground discrimination
%A Zhao, L.
%A Davis, Larry S.
%K figure-ground discrimination
%K iterative sampling-expectation algorithm
%K nonparametric kernel density estimation
%K color distribution
%K low dimensional parametric model
%K Gaussian mixture
%K bandwidth calculation
%K model parameter initialization
%K image segmentation
%K computer vision
%K estimation theory
%K Gaussian processes
%K image colour analysis
%K sampling methods
%K statistics
%X Figure-ground discrimination is an important problem in computer vision. Previous work usually assumes that the color distribution of the figure can be described by a low dimensional parametric model such as a mixture of Gaussians. However, such an approach has difficulty selecting the number of mixture components and is sensitive to the initialization of the model parameters. In this paper, we employ non-parametric kernel estimation for the color distributions of both the figure and the background. We derive an iterative sampling-expectation (SE) algorithm for estimating the color distribution and segmentation. There are several advantages to kernel-density estimation. First, it enables automatic selection of the weights of different cues based on the bandwidth calculation from the image itself. Second, it does not require model parameter initialization and estimation. The experimental results on images of cluttered scenes demonstrate the effectiveness of the proposed algorithm.
%B Pattern Recognition, 2004. ICPR 2004. Proceedings of the 17th International Conference on
%V 1
%P 67 - 70 Vol.1
%8 2004/08//
%G eng
%R 10.1109/ICPR.2004.1334006
%0 Journal Article
%J Knowledge and Data Engineering, IEEE Transactions on
%D 2004
%T Optimal models of disjunctive logic programs: semantics, complexity, and computation
%A Leone, N.
%A Scarcello, F.
%A Subrahmanian, V. S.
%K disjunctive logic programming
%K optimal models
%K minimal model semantics
%K stable model semantics
%K user-specified objective function
%K computational complexity
%K knowledge representation
%K nonmonotonic reasoning
%K logic programming language
%K optimisation
%X Almost all semantics for logic programs with negation identify a set, SEM(P), of models of program P, as the intended semantics of P, and any model M in this class is considered a possible meaning of P with regard to the semantics the user has in mind. Thus, for example, in the case of stable models [M. Gelfond et al., (1988)], choice models [D. Sacca et al., (1990)], answer sets [M. Gelfond et al., (1991)], etc., different possible models correspond to different ways of "completing" the incomplete information in the logic program. However, different end-users may have different ideas on which of these different models in SEM(P) is a reasonable one from their point of view. For instance, given SEM(P), user U_{1} may prefer model M_{1} ∈ SEM(P) to model M_{2} ∈ SEM(P) based on some evaluation criterion that she has. We develop a logic program semantics based on optimal models. This semantics does not add yet another semantics to the logic programming arena - it takes as input an existing semantics SEM(P) and a user-specified objective function Obj, and yields a new semantics Opt(P) ⊆ SEM(P) that realizes the objective function within the framework of preferred models identified already by SEM(P). Thus, the user, who may or may not know anything about logic programming, has considerable flexibility in making the system reflect her own objectives by building "on top" of existing semantics known to the system. In addition to the declarative semantics, we provide a complete complexity analysis and algorithms to compute optimal models under varied conditions when SEM(P) is the stable model semantics, the minimal models semantics, and the all-models semantics.
%B Knowledge and Data Engineering, IEEE Transactions on
%V 16
%P 487 - 503
%8 2004/04//
%@ 1041-4347
%G eng
%N 4
%R 10.1109/TKDE.2004.1269672
%0 Conference Paper
%B Image Processing, 2004. ICIP '04. 2004 International Conference on
%D 2004
%T Robust ego-motion estimation and 3D model refinement using depth based parallax model
%A Agrawala, Ashok K.
%A Chellappa, Rama
%K robust ego-motion estimation
%K 3D model refinement
%K depth based parallax model
%K coarse partial depth map
%K depth map refining
%K digital elevation map (DEM)
%K epipolar field
%K generalized eigen-value analysis
%K iterative algorithm
%K motion compensation
%K surface parallax
%K eigenvalues and eigenfunctions
%K feature extraction
%K iterative methods
%K motion estimation
%K range-finding
%X We present an iterative algorithm for robustly estimating the ego-motion and refining and updating a coarse, noisy and partial depth map using a depth based parallax model and brightness derivatives extracted from an image pair. Given a coarse, noisy and partial depth map acquired by a range-finder or obtained from a Digital Elevation Map (DEM), we first estimate the ego-motion by combining a global ego-motion constraint and a local brightness constancy constraint. Using the estimated camera motion and the available depth map estimate, the motion of the 3D points is compensated. We utilize the fact that the resulting surface parallax field is an epipolar field, and knowing its direction from the previous motion estimates, we estimate its magnitude and use it to refine the depth map estimate. Instead of assuming a smooth parallax field or locally smooth depth models, we locally model the parallax magnitude using the depth map, formulate the problem as a generalized eigen-value analysis, and obtain better results. In addition, confidence measures for depth estimates are provided, which can be used to remove regions with potentially incorrect depth estimates (and outliers) for robustly estimating the ego-motion in the next iteration. Results on both synthetic and real examples are presented.
%B Image Processing, 2004. ICIP '04. 2004 International Conference on
%V 4
%P 2483 - 2486 Vol. 4
%8 2004/10//
%G eng
%R 10.1109/ICIP.2004.1421606
%0 Conference Paper
%B Pattern Recognition, 2002. Proceedings. 16th International Conference on
%D 2002
%T Page classification through logical labelling
%A Liang, Jian
%A Doermann, David
%A Ma, M.
%A Guo, J. K.
%K page classification
%K logical labelling
%K attributed relational graph
%K unknown document
%K global constraints
%K hierarchical model base
%K technical article title pages
%K document images
%K experimental results
%K noise
%K optical character recognition (OCR)
%K document image processing
%K graph theory
%X We propose an integrated approach to page classification and logical labelling. Layout is represented by a fully connected attributed relational graph that is matched to the graph of an unknown document, achieving classification and labelling simultaneously. By incorporating global constraints in an integrated fashion, ambiguity at the zone level can be reduced, providing robustness to noise and variation. Models are automatically trained from sample documents. Experimental results show promise for the classification and labelling of technical article title pages, and support the idea of a hierarchical model base.
%B Pattern Recognition, 2002. Proceedings. 16th International Conference on
%V 3
%P 477 - 480 vol.3
%8 2002///
%G eng
%R 10.1109/ICPR.2002.1047980
%0 Conference Paper
%B Multimedia Signal Processing, 2002 IEEE Workshop on
%D 2002
%T Wide baseline image registration using prior information
%A Chowdhury, A. M.
%A Chellappa, Rama
%A Keaton, T.
%K wide baseline image registration
%K 2D shape matching
%K 3D model alignment
%K viewing angles
%K doubly stochastic matrix
%K Sinkhorn normalization procedure
%K robust correspondence algorithm
%K global spatial configuration
%K feature constellation
%K panoramic view creation
%K holistic 3D face models
%K error probability
%K face feature extraction
%K video sequences
%K image matching
%K image registration
%K stochastic processes
%K computer vision
%K video signal processing
%K stereo image processing
%X Establishing correspondence between features in two images of the same scene taken from different viewing angles is a challenging problem in image processing and computer vision. However, its solution is an important step in many applications, such as wide baseline stereo, 3D model alignment, and creation of panoramic views. In this paper, we propose a technique for registration of two images of a face obtained from different viewing angles. We show that prior information about the general characteristics of a face, obtained from video sequences of different faces, can be used to design a robust correspondence algorithm. The method works by matching 2D shapes of the different features of the face. A doubly stochastic matrix, representing the probability of match between the features, is derived using the Sinkhorn normalization procedure. The final correspondence is obtained by minimizing the probability of error of a match between the entire constellations of features in the two sets, thus taking into account the global spatial configuration of the features. The method is applied to creating holistic 3D models of a face from partial representations. Although this paper focuses primarily on faces, the algorithm can also be used for other objects with small modifications.
%B Multimedia Signal Processing, 2002 IEEE Workshop on
%P 37 - 40
%8 2002/12//
%G eng
%R 10.1109/MMSP.2002.1203242
%0 Conference Paper
%B Visualization '96. Proceedings.
%D 1996
%T Optimizing triangle strips for fast rendering
%A Evans, F.
%A Skiena, S.
%A Varshney, Amitabh
%K triangle strip
%K fast rendering
%K buffer sizes
%K graphics subsystem
%K interactive visualization
%K partially triangulated models
%K polygonal model partitioning
%K queuing disciplines
%K rendering times
%K scientific visualization
%K triangulated data
%K triangulated surfaces
%K data visualisation
%K optimisation
%X Almost all scientific visualization involving surfaces is currently done via triangles. The speed at which such triangulated surfaces can be displayed is crucial to interactive visualization and is bounded by the rate at which triangulated data can be sent to the graphics subsystem for rendering. Partitioning polygonal models into triangle strips can significantly reduce rendering times over transmitting each triangle individually. We present new and efficient algorithms for constructing triangle strips from partially triangulated models, and experimental results showing these strips are on average 15% better than those from previous codes. Further, we study the impact of larger buffer sizes and various queuing disciplines on the effectiveness of triangle strips.
%B Visualization '96. Proceedings.
%P 319 - 326
%8 1996/11/27/
%G eng
%R 10.1109/VISUAL.1996.568125
%0 Journal Article
%J Pattern Analysis and Machine Intelligence, IEEE Transactions on
%D 1996
%T The space requirements of indexing under perspective projections
%A Jacobs, David W.
%K 2D images
%K 3D model points
%K feature matching
%K geometric hashing
%K indexing process
%K invariants
%K object recognition
%K perspective projections
%K space complexity
%K table lookup
%K computational complexity
%K feature extraction
%K image matching
%K image processing
%K stereo image processing
%X Object recognition systems can be made more efficient through the use of table lookup to match features. The cost of this indexing process depends on the space required to represent groups of model features in such a lookup table. We determine the space required to perform indexing of arbitrary sets of 3D model points for lookup from a single 2D image formed under perspective projection. We show that in this case, one must use a 3D surface to represent model groups, and we provide an analytic description of such a surface. This is in contrast to the cases of scaled-orthographic or affine projection, in which only a 2D surface is required to represent a group of model features. This demonstrates a fundamental way in which the recognition of objects under perspective projection is more complex than is recognition under other projection models.
%B Pattern Analysis and Machine Intelligence, IEEE Transactions on
%V 18
%P 330 - 333
%8 1996/03//
%@ 0162-8828
%G eng
%N 3
%R 10.1109/34.485561
%0 Conference Paper
%B Computer Vision and Pattern Recognition, 1993. Proceedings CVPR '93., 1993 IEEE Computer Society Conference on
%D 1993
%T 2D images of 3-D oriented points
%A Jacobs, David W.
%K 2D images
%K 3-D oriented points
%K database indexing
%K nonrigid linear transformation
%K structure-from-motion derivation
%K structure-from-motion recovery
%K model indexing
%K image processing
%X A number of vision problems have been shown to become simpler when one models projection from 3-D to 2-D as a nonrigid linear transformation. These results have been largely restricted to models and scenes that consist only of 3-D points. It is shown that, with this projection model, several vision tasks become fundamentally more complex in the somewhat more complicated domain of oriented points. More space is required for indexing models in a database, more images are required to derive structure from motion, and new views of an object cannot be synthesized linearly from old views.
%B Computer Vision and Pattern Recognition, 1993. Proceedings CVPR '93., 1993 IEEE Computer Society Conference on
%P 226 - 232
%8 1993/06//
%G eng
%R 10.1109/CVPR.1993.340985
%0 Conference Paper
%B Document Analysis and Recognition, 1993., Proceedings of the Second International Conference on
%D 1993
%T The processing of form documents
%A Doermann, David
%A Rosenfeld, A.
%K form documents
%K known forms
%K generic modeling
%K automatic model generation
%K optimal feature set
%K specialized detectors
%K stroke width properties
%K non-form markings
%K stroke reconstruction
%K business documents
%K document handling
%K feature extraction
%X An overview of an approach to the generic modeling and processing of known forms is presented. The system provides a methodology by which models are generated from regions in the document based on their usage. Automatic extraction of an optimal set of features to be used for registration is proposed, and it is shown how specialized detectors can be designed for each feature based on their position, orientation and width properties. Registration of the form with the model is accomplished using probing to establish correspondence. Form components which are corrupted by markings are detected and isolated, the intersections are interpreted, and the properties of the non-form markings are used to reconstruct the strokes through the intersections. The feasibility of these ideas is demonstrated through an implementation of key components of the system.
%B Document Analysis and Recognition, 1993., Proceedings of the Second International Conference on
%P 497 - 501
%8 1993/10//
%G eng
%R 10.1109/ICDAR.1993.395687
%0 Conference Paper
%B Computer Vision and Pattern Recognition, 1992. Proceedings CVPR '92., 1992 IEEE Computer Society Conference on
%D 1992
%T Space efficient 3-D model indexing
%A Jacobs, David W.
%K 3-D model indexing
%K 3-D point features
%K sensing error
%K table lookup
%K image processing
%X It is shown that the set of 2-D images produced by a group of 3-D point features of a rigid model can be optimally represented with two lines in two high-dimensional spaces. This result is used to match images and model groups by table lookup. The table is efficiently built and accessed through analytic methods that account for the effect of sensing error. In real images, it reduces the set of potential matches by a factor of several thousand. This representation of a model's images is used to analyze two other approaches to recognition. It is determined when invariants exist in several domains, and it is shown that there is an infinite set of qualitatively similar nonaccidental properties.
%B Computer Vision and Pattern Recognition, 1992. Proceedings CVPR '92., 1992 IEEE Computer Society Conference on
%P 439 - 444
%8 1992/06//
%G eng
%R 10.1109/CVPR.1992.223153
%0 Journal Article
%J Pattern Analysis and Machine Intelligence, IEEE Transactions on
%D 1991
%T Space and time bounds on indexing 3D models from 2D images
%A Clemens, D. T.
%A Jacobs, David W.
%K 2D images
%K 3D models
%K model indexing
%K feature extraction
%K grouping operation
%K model-based visual recognition systems
%K space bounds
%K time bounds
%K computerised pattern recognition
%K computerised picture processing
%X Model-based visual recognition systems often match groups of image features to groups of model features to form initial hypotheses, which are then verified. In order to accelerate recognition considerably, the model groups can be arranged in an index space (hashed) offline such that feasible matches are found by indexing into this space. For the case of 2D images and 3D models consisting of point features, bounds on the space required for indexing and on the speedup that such indexing can achieve are demonstrated. It is proved that, even in the absence of image error, each model must be represented by a 2D surface in the index space. This places an unexpected lower bound on the space required to implement indexing and proves that no quantity is invariant for all projections of a model into the image. Theoretical bounds on the speedup achieved by indexing in the presence of image error are also determined, and an implementation of indexing for measuring this speedup empirically is presented. It is found that indexing can produce only a minimal speedup on its own. However, when accompanied by a grouping operation, indexing can provide significant speedups that grow exponentially with the number of features in the groups.
%B Pattern Analysis and Machine Intelligence, IEEE Transactions on
%V 13
%P 1007 - 1017
%8 1991/10//
%@ 0162-8828
%G eng
%N 10
%R 10.1109/34.99235