TY - JOUR
T1 - Special Issue on Video Analysis on Resource-Limited Systems
JF - IEEE Transactions on Circuits and Systems for Video Technology
Y1 - 2011
A1 - Chellappa, Rama
A1 - Cavallaro, A.
A1 - Wu, Y.
A1 - Shan, C.
A1 - Fu, Y.
A1 - Pulli, K.
KW - computational complexity
KW - Image Enhancement
KW - Special issues and sections
KW - Video compression
AB - The 17 papers in this special issue focus on resource-limited systems.
VL - 21
SN - 1051-8215
CP - 10
M3 - 10.1109/TCSVT.2011.2165795
ER -
TY - CONF
T1 - New Constructive Aspects of the Lovasz Local Lemma
T2 - 2010 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS)
Y1 - 2010
A1 - Haeupler, B.
A1 - Saha, B.
A1 - Srinivasan, Aravind
KW - acyclic edge coloring
KW - Algorithm design and analysis
KW - Approximation algorithms
KW - Approximation methods
KW - computational complexity
KW - Computer science
KW - constant factor approximation algorithm
KW - graph colouring
KW - Linearity
KW - Lovasz Local Lemma
KW - MAX k-SAT
KW - Monte Carlo Algorithm
KW - Monte Carlo methods
KW - Moser-Tardos algorithm
KW - nonrepetitive graph coloring
KW - output distribution
KW - polynomial sized core subset
KW - Polynomials
KW - Probabilistic Method
KW - probabilistic analysis
KW - probabilistic logic
KW - probability
KW - Ramsey type graph
KW - Sampling methods
KW - Santa Claus Problem
AB - The Lovász Local Lemma (LLL) is a powerful tool that gives sufficient conditions for avoiding all of a given set of "bad" events, with positive probability. A series of results have provided algorithms to efficiently construct structures whose existence is non-constructively guaranteed by the LLL, culminating in the recent breakthrough of Moser & Tardos. We show that the output distribution of the Moser-Tardos algorithm well-approximates the conditional LLL-distribution, the distribution obtained by conditioning on all bad events being avoided. We show how a known bound on the probabilities of events in this distribution can be used for further probabilistic analysis and give new constructive and non-constructive results. We also show that when an LLL application provides a small amount of slack, the number of resamplings of the Moser-Tardos algorithm is nearly linear in the number of underlying independent variables (not events!), and can thus be used to give efficient constructions in cases where the underlying proof applies the LLL to super-polynomially many events. Even in cases where finding a bad event that holds is computationally hard, we show that applying the algorithm to avoid a polynomial-sized "core" subset of bad events leads to a desired outcome with high probability. We demonstrate this idea on several applications. We give the first constant-factor approximation algorithm for the Santa Claus problem by making an LLL-based proof of Feige constructive. We provide Monte Carlo algorithms for acyclic edge coloring, non-repetitive graph colorings, and Ramsey-type graphs. In all these applications the algorithm falls directly out of the non-constructive LLL-based proof. Our algorithms are very simple, often provide better bounds than previous algorithms, and are in several cases the first efficient algorithms known.
As a second type of application we consider settings beyond the critical dependency threshold of the LLL: avoiding all bad events is impossible in these cases. As the first (even non-constructive) result of this kind, we show that by sampling from the LLL-distribution of a selected smaller core, we can avoid a fraction of bad events that is higher than the expectation. MAX k-SAT is an example of this.
JA - 2010 51st Annual IEEE Symposium on Foundations of Computer Science (FOCS)
PB - IEEE
SN - 978-1-4244-8525-3
M3 - 10.1109/FOCS.2010.45
ER -
TY - CONF
T1 - Towards Dynamic Adaptive Automated Test Generation for Graphical User Interfaces
T2 - Software Testing, Verification and Validation Workshops, 2009. ICSTW '09. International Conference on
Y1 - 2009
A1 - Yuan, Xun
A1 - Cohen, M. B.
A1 - Memon, Atif M.
KW - adaptive automated test generation
KW - computational complexity
KW - event sequence length
KW - evolutionary algorithm
KW - evolutionary computation
KW - graphical user interface
KW - Graphical user interfaces
KW - GUI test case
KW - program testing
AB - Graphical user interfaces (GUIs) present an enormous number of potential event sequences to users. During testing it is necessary to cover this space, however the complexity of modern GUIs has made this an increasingly difficult task. Our past work has demonstrated that it is important to incorporate "context" into GUI test cases, in terms of event combinations, event sequence length, and by considering all possible starting and ending positions for each event. Despite the use of our most refined modeling techniques, many of the generated test cases remain unexecutable. In this paper, we posit that due to the dynamic state-based nature of GUIs, it is important to incorporate feedback from the execution of tests into test case generation algorithms. We propose the use of an evolutionary algorithm to generate test suites with fewer unexecutable test cases and higher event interaction coverage.
JA - Software Testing, Verification and Validation Workshops, 2009. ICSTW '09. International Conference on
M3 - 10.1109/ICSTW.2009.26
ER -
TY - CONF
T1 - Flow Algorithms for Parallel Query Optimization
T2 - IEEE 24th International Conference on Data Engineering, 2008. ICDE 2008
Y1 - 2008
A1 - Deshpande, Amol
A1 - Hellerstein, L.
KW - Casting
KW - computational complexity
KW - Cost function
KW - Databases
KW - Delay
KW - distributed environment
KW - Educational institutions
KW - flow maximization algorithm
KW - Interleaved codes
KW - interoperator parallelism
KW - minimisation
KW - multiway join query response time minimization problem
KW - parallel database
KW - Parallel databases
KW - parallel query optimization
KW - Partitioning algorithms
KW - pipeline processing
KW - pipelined parallelism
KW - polynomial-time algorithm
KW - query planning problem
KW - Query processing
KW - Web service
KW - Web services
AB - We address the problem of minimizing the response time of a multi-way join query using pipelined (inter-operator) parallelism, in a parallel or a distributed environment. We observe that in order to fully exploit the parallelism in the system, we must consider a new class of "interleaving" plans, where multiple query plans are used simultaneously to minimize the response time of a query (or to maximize the tuple-throughput of the system). We cast the query planning problem in this environment as a "flow maximization problem", and present polynomial-time algorithms that (statically) find the optimal set of plans to use for a given query, for a large class of multi-way join queries. Our proposed algorithms also naturally extend to query optimization over web services. Finally we present an extensive experimental evaluation that demonstrates both the need to consider such plans in parallel query processing and the effectiveness of our algorithms.
JA - IEEE 24th International Conference on Data Engineering, 2008. ICDE 2008
PB - IEEE
SN - 978-1-4244-1836-7
M3 - 10.1109/ICDE.2008.4497484
ER -
TY - CONF
T1 - Algorithmic graph minor theory: Decomposition, approximation, and coloring
T2 - Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on
Y1 - 2005
A1 - Demaine, E. D.
A1 - Hajiaghayi, Mohammad T.
A1 - Kawarabayashi, K.
KW - algorithmic graph minor theory
KW - approximation algorithm
KW - combinatorial polylogarithmic approximation
KW - computational complexity
KW - constant-factor approximation
KW - graph algorithm
KW - graph coloring
KW - graph colouring
KW - half-integral multicommodity flow
KW - largest grid minor
KW - maximization problem
KW - minimization problem
KW - polynomial-time algorithm
KW - subexponential fixed-parameter algorithm
KW - topological graph theory
KW - treewidth
AB - At the core of the seminal graph minor theory of Robertson and Seymour is a powerful structural theorem capturing the structure of graphs excluding a fixed minor. This result is used throughout graph theory and graph algorithms, but is existential. We develop a polynomial-time algorithm using topological graph theory to decompose a graph into the structure guaranteed by the theorem: a clique-sum of pieces almost-embeddable into bounded-genus surfaces. This result has many applications. In particular we show applications to developing many approximation algorithms, including a 2-approximation to graph coloring, constant-factor approximations to treewidth and the largest grid minor, combinatorial polylogarithmic approximation to half-integral multicommodity flow, subexponential fixed-parameter algorithms, and PTASs for many minimization and maximization problems, on graphs excluding a fixed minor.
JA - Foundations of Computer Science, 2005. FOCS 2005. 46th Annual IEEE Symposium on
M3 - 10.1109/SFCS.2005.14
ER -
TY - CONF
T1 - CASPER: an integrated energy-driven approach for task graph scheduling on distributed embedded systems
T2 - Application-Specific Systems, Architecture Processors, 2005. ASAP 2005. 16th IEEE International Conference on
Y1 - 2005
A1 - Kianzad, V.
A1 - Bhattacharyya, Shuvra S.
A1 - Qu, Gang
KW - CASPER
KW - combined-assignment-scheduling-and-power-management
KW - computational complexity
KW - distributed system
KW - dynamic voltage scaling
KW - embedded systems
KW - energy reduction
KW - genetic algorithm
KW - Genetic algorithms
KW - graph theory
KW - homogeneous multiprocessor system
KW - maximal energy saving
KW - Multiprocessing systems
KW - multiprocessor embedded system
KW - optimization loop
KW - power management
KW - power variation
KW - Processor scheduling
KW - slack distribution algorithm
KW - task assignment
KW - task graph scheduling
AB - For multiprocessor embedded systems, the dynamic voltage scaling (DVS) technique can be applied to scheduled applications for energy reduction. DVS utilizes slack in the schedule to slow down processes and save energy. Therefore, it is generally believed that the maximal energy saving is achieved on a schedule with the minimum makespan (maximal slack). Most current approaches treat task assignment, scheduling, and DVS separately. In this paper, we present a framework called CASPER (combined assignment, scheduling, and power-management) that challenges this common belief by integrating task scheduling and DVS under a single iterative optimization loop via genetic algorithm. We have conducted extensive experiments to validate the energy efficiency of CASPER. For homogeneous multiprocessor systems (in which all processors are of the same type), we consider a recently proposed slack distribution algorithm (PDP-SPM) by S. Hua and G. Qu (2005): applying PDP-SPM on the schedule with the minimal makespan gives an average of 53.8% energy saving; CASPER finds schedules with slightly larger makespan but a 57.3% energy saving, a 7.8% improvement. For heterogeneous systems, we consider the power variation DVS (PV-DVS) algorithm by Schmitz et al. (2004), CASPER improves its energy efficiency by 8.2%. Finally, our results also show that the proposed single loop CASPER framework saves 23.3% more energy over GMA+EE-GLSA by Schmitz et al. (2002), the only other known integrated approach with a nested loop that combines scheduling and power management in the inner loop but leaves assignment in the outer loop.
JA - Application-Specific Systems, Architecture Processors, 2005. ASAP 2005. 16th IEEE International Conference on
M3 - 10.1109/ASAP.2005.23
ER -
TY - CONF
T1 - A Rank-by-Feature Framework for Unsupervised Multidimensional Data Exploration Using Low Dimensional Projections
T2 - IEEE Symposium on Information Visualization, 2004. INFOVIS 2004
Y1 - 2004
A1 - Seo, J.
A1 - Shneiderman, Ben
KW - axis-parallel projections
KW - boxplot
KW - color-coded lower-triangular matrix
KW - computational complexity
KW - computational geometry
KW - Computer displays
KW - Computer science
KW - Computer vision
KW - Data analysis
KW - data mining
KW - data visualisation
KW - Data visualization
KW - Displays
KW - dynamic query
KW - Educational institutions
KW - exploratory data analysis
KW - feature detection
KW - feature detection/selection
KW - Feature extraction
KW - feature selection
KW - graph theory
KW - graphical displays
KW - histogram
KW - Information Visualization
KW - interactive systems
KW - Laboratories
KW - Multidimensional systems
KW - Principal component analysis
KW - rank-by-feature prism
KW - scatterplot
KW - statistical analysis
KW - statistical graphics
KW - statistical graphs
KW - unsupervised multidimensional data exploration
KW - very large databases
AB - Exploratory analysis of multidimensional data sets is challenging because of the difficulty in comprehending more than three dimensions. Two fundamental statistical principles for the exploratory analysis are (1) to examine each dimension first and then find relationships among dimensions, and (2) to try graphical displays first and then find numerical summaries (D. S. Moore, 1999). We implement these principles in a novel conceptual framework called the rank-by-feature framework. In the framework, users can choose a ranking criterion interesting to them and sort 1D or 2D axis-parallel projections according to the criterion. We introduce the rank-by-feature prism that is a color-coded lower-triangular matrix that guides users to desired features. Statistical graphs (histogram, boxplot, and scatterplot) and information visualization techniques (overview, coordination, and dynamic query) are combined to help users effectively traverse 1D and 2D axis-parallel projections, and finally to help them interactively find interesting features
JA - IEEE Symposium on Information Visualization, 2004. INFOVIS 2004
PB - IEEE
SN - 0-7803-8779-3
M3 - 10.1109/INFVIS.2004.3
ER -
TY - CONF
T1 - Dependent rounding in bipartite graphs
T2 - The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings
Y1 - 2002
A1 - Gandhi, R.
A1 - Khuller, Samir
A1 - Parthasarathy, S.
A1 - Srinivasan, Aravind
KW - Application software
KW - Approximation algorithms
KW - bipartite graph
KW - bipartite graphs
KW - broadcast channels
KW - broadcast scheduling
KW - Broadcasting
KW - capacitated vertex cover
KW - Character generation
KW - computational complexity
KW - Computer science
KW - Delay
KW - edge-sets
KW - Educational institutions
KW - fair scheduling
KW - fractional vectors
KW - graph theory
KW - per-user fairness properties
KW - pipage rounding technique
KW - Processor scheduling
KW - Random variables
KW - random-graph models
KW - randomized rounding approach
KW - rounding method
KW - scheduling
KW - Scheduling algorithm
KW - telecommunication computing
KW - unrelated parallel machines
AB - We combine the pipage rounding technique of Ageev & Sviridenko with a recent rounding method developed by Srinivasan (2001), to develop a new randomized rounding approach for fractional vectors defined on the edge-sets of bipartite graphs. We show various ways of combining this technique with other ideas, leading to the following applications: richer random-graph models for graphs with a given degree-sequence; improved approximation algorithms for: (i) throughput-maximization in broadcast scheduling, (ii) delay-minimization in broadcast scheduling, and (iii) capacitated vertex cover; fair scheduling of jobs on unrelated parallel machines. A useful feature of our method is that it lets us prove certain (probabilistic) per-user fairness properties.
JA - The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings
PB - IEEE
SN - 0-7695-1822-2
M3 - 10.1109/SFCS.2002.1181955
ER -
TY - CONF
T1 - Building dependable distributed applications using AQUA
Y1 - 1999
A1 - Ren, J.
A1 - Cukier, Michel
A1 - Rubel, P.
A1 - Sanders, W. H.
A1 - Bakken, D. E.
A1 - Karr, D. A.
KW - ad hoc methods
KW - application programmer
KW - AQuA
KW - complexities
KW - computational complexity
KW - dependable distributed applications
KW - distributed CORBA application
KW - distributed object management
KW - Fault tolerance
KW - fault tolerant computing
KW - proteus
AB - Building dependable distributed systems using ad hoc methods is a challenging task. Without proper support, an application programmer must face the daunting requirement of having to provide fault tolerance at the application level, in addition to dealing with the complexities of the distributed application itself. This approach requires a deep knowledge of fault tolerance on the part of the application designer, and has a high implementation cost. What is needed is a systematic approach to providing dependability to distributed applications. Proteus, part of the AQuA architecture, fills this need and provides facilities to make a standard distributed CORBA application dependable, with minimal changes to an application. Furthermore, it permits applications to specify, either directly or via the Quality Objects (QuO) infrastructure, the level of dependability they expect of a remote object, and will attempt to configure the system to achieve the requested dependability level. Our previous papers have focused on the architecture and implementation of Proteus. This paper describes how to construct dependable applications using the AQuA architecture, by describing the interface that a programmer is presented with and the graphical monitoring facilities that it provides
M3 - 10.1109/HASE.1999.809494
ER -
TY - CONF
T1 - Hardness of flip-cut problems from optical mapping [DNA molecules application]
T2 - Compression and Complexity of Sequences 1997. Proceedings
Y1 - 1997
A1 - Dancik, V.
A1 - Hannenhalli, Sridhar
A1 - Muthukrishnan, S.
KW - binary shift cut
KW - Biochemistry
KW - Bioinformatics
KW - biological techniques
KW - Biology
KW - biomedical optical imaging
KW - combinatorial mathematics
KW - combinatorial problem
KW - computational complexity
KW - computational problems
KW - DNA
KW - DNA molecule
KW - exclusive binary flip cut
KW - flip-cut problem hardness
KW - Genetic engineering
KW - Genomics
KW - MATHEMATICS
KW - molecular biology
KW - molecular biophysics
KW - molecule orientation
KW - multiple partial restriction maps
KW - NP-complete problem
KW - optical mapping
KW - polynomial time solutions
KW - Polynomials
KW - sequences
AB - Optical mapping is a new technology for constructing restriction maps. Associated computational problems include aligning multiple partial restriction maps into a single “consensus” restriction map, and determining the correct orientation of each molecule, which was formalized as the exclusive binary flip cut (EBFC) problem by Muthukrishnan and Parida (see Proc. of the First ACM Conference on Computational Molecular Biology (RECOMB), Santa Fe, p.209-19, 1997). Here, the authors prove that the EBFC problem, as well as a number of its variants, are NP-complete. They also identify another problem formalized as binary shift cut (BSC) problem motivated by the fact that there might be missing fragments at the beginnings and/or the ends of the molecules, and prove it to be NP-complete. Therefore, they do not have efficient, that is, polynomial time solutions unless P=NP
JA - Compression and Complexity of Sequences 1997. Proceedings
PB - IEEE
SN - 0-8186-8132-2
M3 - 10.1109/SEQUEN.1997.666922
ER -
TY - CONF
T1 - Improved approximations for edge-disjoint paths, unsplittable flow, and related routing problems
T2 - , 38th Annual Symposium on Foundations of Computer Science, 1997. Proceedings
Y1 - 1997
A1 - Srinivasan, Aravind
KW - Approximation algorithms
KW - Bandwidth
KW - Channel allocation
KW - computational complexity
KW - Computer science
KW - edge-disjoint paths
KW - graph theory
KW - High speed integrated circuits
KW - IEL
KW - Image motion analysis
KW - Information systems
KW - multi-commodity flow relaxation
KW - Multiprocessor interconnection networks
KW - network routing
KW - Optical fiber networks
KW - Routing
KW - routing problems
KW - unsplittable flow
AB - We present improved approximation algorithms for a family of problems involving edge-disjoint paths and unsplittable flow, and for some related routing problems. The central theme of all our algorithms is the underlying multi-commodity flow relaxation
JA - , 38th Annual Symposium on Foundations of Computer Science, 1997. Proceedings
PB - IEEE
SN - 0-8186-8197-7
M3 - 10.1109/SFCS.1997.646130
ER -
TY - CONF
T1 - Efficient model checking via the equational μ-calculus
T2 - , Eleventh Annual IEEE Symposium on Logic in Computer Science, 1996. LICS '96. Proceedings
Y1 - 1996
A1 - Bhat, G.
A1 - Cleaveland, Rance
KW - Automata
KW - BDD-based algorithms
KW - Boolean functions
KW - Calculus
KW - compositional model-checking approaches
KW - computational complexity
KW - Computer science
KW - CTL*
KW - Data structures
KW - Encoding
KW - equational variant
KW - equational μ-calculus
KW - Equations
KW - expressive temporal logic
KW - Logic
KW - Maintenance
KW - modal μ-calculus
KW - Model checking
KW - on-the-fly procedures
KW - Surges
KW - temporal logic
KW - temporal logic model checking
KW - unified framework
AB - This paper studies the use of an equational variant of the modal μ-calculus as a unified framework for efficient temporal logic model checking. In particular we show how an expressive temporal logic, CTL*, may be efficiently translated into the μ-calculus. Using this translation, one may then employ μ-calculus model-checking techniques, including on-the-fly procedures, BDD-based algorithms and compositional model-checking approaches, to determine if systems satisfy formulas in CTL*
JA - , Eleventh Annual IEEE Symposium on Logic in Computer Science, 1996. LICS '96. Proceedings
PB - IEEE
SN - 0-8186-7463-6
M3 - 10.1109/LICS.1996.561358
ER -
TY - CONF
T1 - Efficient on-the-fly model checking for CTL*
T2 - , Tenth Annual IEEE Symposium on Logic in Computer Science, 1995. LICS '95. Proceedings
Y1 - 1995
A1 - Bhat, G.
A1 - Cleaveland, Rance
A1 - Grumberg, O.
KW - Algorithm design and analysis
KW - Automata
KW - computational complexity
KW - Computer science
KW - CTL
KW - Encoding
KW - finite automata
KW - finite-state system
KW - global algorithm
KW - Logic
KW - LTL
KW - on-the-fly model checking
KW - Performance analysis
KW - Safety
KW - State-space methods
KW - sublogic
KW - temporal logic
KW - time complexity
AB - This paper gives an on-the-fly algorithm for determining whether a finite-state system satisfies a formula in the temporal logic CTL*. The time complexity of our algorithm matches that of the best existing “global algorithm” for model checking in this logic, and it performs as well as the best known global algorithms for the sublogics CTL and LTL. In contrast with these approaches, however, our routine constructs the state space of the system under consideration in a need-driven fashion and will therefore perform better in practice
JA - , Tenth Annual IEEE Symposium on Logic in Computer Science, 1995. LICS '95. Proceedings
PB - IEEE
SN - 0-8186-7050-9
M3 - 10.1109/LICS.1995.523273
ER -
TY - CONF
T1 - Splitters and near-optimal derandomization
T2 - , 36th Annual Symposium on Foundations of Computer Science, 1995. Proceedings
Y1 - 1995
A1 - Naor, M.
A1 - Schulman, L. J.
A1 - Srinivasan, Aravind
KW - Boosting
KW - Circuit testing
KW - computational complexity
KW - computational linguistics
KW - Computer science
KW - Contracts
KW - derandomization
KW - deterministic constructions
KW - Educational institutions
KW - Engineering profession
KW - exhaustive testing
KW - fairly general method
KW - fixed-subgraph finding algorithms
KW - hardness of approximation
KW - Information systems
KW - k-restrictions
KW - learning
KW - local-coloring protocol
KW - MATHEMATICS
KW - near-optimal constructions
KW - near-optimal derandomization
KW - Parallel algorithms
KW - probabilistic bound
KW - probability
KW - Protocols
KW - randomised algorithms
KW - Set cover
KW - splitters
AB - We present a fairly general method for finding deterministic constructions obeying what we call k-restrictions; this yields structures of size not much larger than the probabilistic bound. The structures constructed by our method include (n,k)-universal sets (a collection of binary vectors of length n such that for any subset of size k of the indices, all 2^k configurations appear) and families of perfect hash functions. The near-optimal constructions of these objects imply the very efficient derandomization of algorithms in learning, of fixed-subgraph finding algorithms, and of near optimal ΣΠΣ threshold formulae. In addition, they derandomize the reduction showing the hardness of approximation of set cover. They also yield deterministic constructions for a local-coloring protocol, and for exhaustive testing of circuits
JA - , 36th Annual Symposium on Foundations of Computer Science, 1995. Proceedings
PB - IEEE
SN - 0-8186-7183-1
M3 - 10.1109/SFCS.1995.492475
ER -
TY - CONF
T1 - Computing with very weak random sources
T2 - , 35th Annual Symposium on Foundations of Computer Science, 1994 Proceedings
Y1 - 1994
A1 - Srinivasan, Aravind
A1 - Zuckerman, D.
KW - Application software
KW - BPP simulations
KW - Chor-Goldreich sources
KW - computational complexity
KW - Computational modeling
KW - Computer science
KW - Computer simulation
KW - cryptography
KW - distributed algorithms
KW - expander constructions
KW - hardness
KW - MATHEMATICS
KW - min-entropy
KW - Physics computing
KW - Polynomials
KW - probability
KW - R-bit string
KW - randomness-efficient Leftover Hash Lemma
KW - RP algorithms simulation
KW - Testing
KW - time-space tradeoffs
KW - very weak random sources
AB - For any fixed ε>0, we show how to simulate RP algorithms in time n^{O(log n)} using the output of a δ-source with min-entropy R^ε. Such a weak random source is asked once for R bits; it outputs an R-bit string such that any string has probability at most 2^{-R^ε}. If ε>1-1/(k+1), our BPP simulations take time n^{O(log^{(k)} n)} (log^{(k)} is the logarithm iterated k times). We also give a polynomial-time BPP simulation using Chor-Goldreich sources of min-entropy R^{Ω(1)}, which is optimal. We present applications to time-space tradeoffs, expander constructions, and the hardness of approximation. Also of interest is our randomness-efficient Leftover Hash Lemma, found independently by Goldreich and Wigderson
JA - , 35th Annual Symposium on Foundations of Computer Science, 1994 Proceedings
PB - IEEE
SN - 0-8186-6580-7
M3 - 10.1109/SFCS.1994.365688
ER -
TY - CONF
T1 - A SIMD solution to the sequence comparison problem on the MGAP
T2 - International Conference on Application Specific Array Processors, 1994. Proceedings
Y1 - 1994
A1 - Borah, M.
A1 - Bajwa, R. S.
A1 - Hannenhalli, Sridhar
A1 - Irwin, M. J.
KW - AT-optimal algorithm
KW - Biological information theory
KW - biology computing
KW - biosequence comparison problem
KW - computational complexity
KW - Computer science
KW - Costs
KW - database size
KW - Databases
KW - DNA computing
KW - dynamic programming
KW - dynamic programming algorithms
KW - fine-grained massively parallel processor array
KW - Genetics
KW - Heuristic algorithms
KW - maximally similar sequence
KW - MGAP parallel computer
KW - Micro-Grain Array Processor
KW - Military computing
KW - molecular biology
KW - molecular biophysics
KW - Nearest neighbor searches
KW - nearest-neighbor connections
KW - Parallel algorithms
KW - pipeline processing
KW - pipelined SIMD solution
KW - sequence alignment problem
KW - sequences
AB - Molecular biologists frequently compare an unknown biosequence with a set of other known biosequences to find the sequence which is maximally similar, with the hope that what is true of one sequence, either physically or functionally, could be true of its analogue. Even though efficient dynamic programming algorithms exist for the problem, when the size of the database is large, the time required is quite long, even for moderate length sequences. In this paper, we present an efficient pipelined SIMD solution to the sequence alignment problem on the Micro-Grain Array Processor (MGAP), a fine-grained massively parallel array of processors with nearest-neighbor connections. The algorithm compares K sequences of length O(M) with the actual sequence of length N, in O(M+N+K) time with O(MN) processors, which is AT-optimal. The implementation on the MGAP computes at the rate of about 0.1 million comparisons per second for sequences of length 128
JA - International Conference on Application Specific Array Processors, 1994. Proceedings
PB - IEEE
SN - 0-8186-6517-3
M3 - 10.1109/ASAP.1994.331791
ER -
TY - CONF
T1 - A distributed algorithm for ear decomposition
T2 - , Fifth International Conference on Computing and Information, 1993. Proceedings ICCI '93
Y1 - 1993
A1 - Hannenhalli, Sridhar
A1 - Perumalla, K.
A1 - Chandrasekharan, N.
A1 - Sridhar, R.
KW - Asynchronous communication
KW - asynchronous communication network
KW - Automata
KW - Communication networks
KW - computational complexity
KW - Computer networks
KW - Computer science
KW - decomposition graph
KW - distributed algorithm
KW - distributed algorithms
KW - Distributed computing
KW - Ear
KW - ear decomposition
KW - graph theory
KW - message-optimal
KW - network decomposition
KW - sorting
KW - Testing
KW - time-optimal
AB - A distributed algorithm for finding an ear decomposition of an asynchronous communication network with n nodes and m links is presented. At the completion of the algorithm either the ears are correctly labeled or the nodes are informed that there exists no ear decomposition. First we present a novel algorithm to check the existence of an ear decomposition which uses O(m) messages. We also present two other algorithms, one which is time-optimal and the other which is message-optimal to determine the actual ears and their corresponding numbers after determining the existence of an ear decomposition
JA - , Fifth International Conference on Computing and Information, 1993. Proceedings ICCI '93
PB - IEEE
SN - 0-8186-4212-2
M3 - 10.1109/ICCI.1993.315382
ER -
TY - JOUR
T1 - On the difficulty of Manhattan channel routing
JF - Information Processing Letters
Y1 - 1992
A1 - Greenberg, Ronald
A1 - JaJa, Joseph F.
A1 - Krishnamurthy, Sridhar
KW - combinatorial problems
KW - computational complexity
KW - lower bounds
KW - VLSI channel routing
AB - We show that channel routing in the Manhattan model remains difficult even when all nets are single-sided. Given a set of n single-sided nets, we consider the problem of determining the minimum number of tracks required to obtain a dogleg-free routing. In addition to showing that the decision version of the problem is NP-complete, we show that there are problems requiring at least d+Ω(n) tracks, where d is the density. This existential lower bound does not follow from any of the known lower bounds in the literature.
VL - 44
SN - 0020-0190
UR - http://www.sciencedirect.com/science/article/pii/002001909290214G
CP - 5
M3 - 10.1016/0020-0190(92)90214-G
ER -
TY - CONF
T1 - Efficient algorithms for computing matching and chromatic polynomials on series-parallel graphs
T2 - , Fourth International Conference on Computing and Information, 1992. Proceedings. ICCI '92
Y1 - 1992
A1 - Chandrasekharan, N.
A1 - Hannenhalli, Sridhar
KW - chromatic polynomials
KW - computational complexity
KW - Computer science
KW - graph colouring
KW - graph theory
KW - matching polynomial
KW - Polynomials
KW - series-parallel graphs
KW - Terminology
KW - Tree data structures
KW - Tree graphs
AB - The authors present efficient algorithms for computing the matching polynomial and chromatic polynomial of a series-parallel graph in O(n^3) and O(n^2) time respectively. Their algorithm for computing the matching polynomial generalizes an existing result from Lovasz, Plummer (1986), and the chromatic polynomial algorithm improves on the O(n^4)-time result given by Hunt, Ravi, Stearn (1988)
JA - , Fourth International Conference on Computing and Information, 1992. Proceedings. ICCI '92
PB - IEEE
SN - 0-8186-2812-X
M3 - 10.1109/ICCI.1992.227709
ER -
TY - CONF
T1 - Towards a theory of nearly constant time parallel algorithms
T2 - Foundations of Computer Science, 1991. Proceedings., 32nd Annual Symposium on
Y1 - 1991
A1 - Gil, J.
A1 - Matias, Y.
A1 - Vishkin, Uzi
KW - computational complexity
KW - Estimation
KW - nearly constant time parallel algorithms
KW - Parallel algorithms
KW - positive numbers
KW - randomization
KW - running time
KW - superfast optimal algorithms
AB - It is demonstrated that randomization is an extremely powerful tool for designing very fast and efficient parallel algorithms. Specifically, a running time of O(lg* n) (nearly-constant), with high probability, is achieved using n/lg* n (optimal speedup) processors for a wide range of fundamental problems. Also given is a constant time algorithm which, using n processors, approximates the sum of n positive numbers to within an error which is smaller than the sum by an order of magnitude. A variety of known and new techniques are used. New techniques, which are of independent interest, include estimation of the size of a set in constant time for several settings, and ways for deriving superfast optimal algorithms from superfast nonoptimal ones
JA - Foundations of Computer Science, 1991. Proceedings., 32nd Annual Symposium on
M3 - 10.1109/SFCS.1991.185438
ER -