James Aspnes, Bernhard Haeupler, Alexander Tong, and Philipp Woelfel. Allocate-on-use space complexity of shared-memory algorithms. Accepted to DISC 2018.
Many fundamental problems in shared-memory distributed computing, including mutual exclusion (Burns and Lynch, 1993), consensus (Zhu, 2016), and implementations of many sequential objects (Jayanti et al., 2000), are known to require linear space in the worst case. However, these lower bounds all work by constructing, for any given algorithm, particular executions that may be both very long and very improbable. The significance of these bounds rests on the assumption that any space used in some execution must be allocated in all executions. This assumption is not consistent with the storage allocation mechanisms of actual practical systems.
We consider the consequences of adopting a more dynamic approach to space complexity, where an object only counts toward the space complexity if it is used. We describe two basic versions of this measure, an allocate-on-access model in which any operation on an object causes it to be counted, and an allocate-on-update model in which only operations that change the initial default state cause it to be counted. We show how the allocate-on-access model can simulate allocate-on-update with overhead logarithmic in the largest address used. We also show that many known randomized algorithms for fundamental problems in shared-memory distributed computing have expected space complexity much lower than the worst-case lower bounds, and that many algorithms that are adaptive in time complexity can also be made adaptive in space complexity. Similarly, some recent lower bounds on expected time complexity for randomized implementations of restricted-use objects can be modified in a straightforward way to get lower bounds on expected space complexity.
For the specific problem of mutual exclusion, we develop a new algorithm that illustrates an apparent trade-off between low expected space complexity and low expected RMR complexity. Whether this trade-off is necessary is an open problem.
@unpublished{AspnesHTW2018, author = {Aspnes, James and Haeupler, Bernhard and Tong, Alexander and Woelfel, Philipp}, title = {Allocate-on-use space complexity of shared-memory algorithms}, month = may, year = 2018, note={Accepted to DISC 2018} }
Yehuda Afek, James Aspnes, Edo Cohen, and Danny Vainstein. Object oriented consensus. In preparation. A brief announcement of a preliminary version of this work appeared in ACM Symposium on Principles of Distributed Computing (PODC 2017), July 2017, pp. 367–369.
Our work focuses on the problem of decomposing consensus algorithms into a common framework composed of simple building blocks. We show that earlier decomposition strategies fall short when applied to some well-known algorithms and present a new framework to tackle the problem. First we use Aspnes' framework of adopt-commit and conciliator objects to decompose the well-known Phase-King Byzantine algorithm. We then consider two other well-known algorithms, argue that this framework is insufficient in these (and other) cases, and offer a new framework. The framework works in rounds, each consisting of two steps: the first involves an object that detects agreement, and the second involves an object that aims at achieving consensus. We denote our newly defined objects as vacillate-adopt-commit and reconciliator. We demonstrate our decomposition on two well-known algorithms, namely Ben-Or's randomized algorithm and the Raft algorithm.
@inproceedings{AfekACV2017, author = {Yehuda Afek and James Aspnes and Edo Cohen and Danny Vainstein}, editor = {Elad Michael Schiller and Alexander A. Schwarzmann}, title = {Brief Announcement: Object Oriented Consensus}, booktitle = {Proceedings of the {ACM} Symposium on Principles of Distributed Computing, {PODC} 2017, Washington, DC, USA, July 25-27, 2017}, pages = {367--369}, publisher = {{ACM}}, year = {2017}, url = {http://doi.acm.org/10.1145/3087801.3087867}, doi = {10.1145/3087801.3087867}, timestamp = {Fri, 21 Jul 2017 13:32:16 +0200}, biburl = {http://dblp.dagstuhl.de/rec/bib/conf/podc/AfekACV17}, bibsource = {dblp computer science bibliography, http://dblp.org} }
Dan Alistarh, James Aspnes, Faith Ellen, Rati Gelashvili, and Leqi Zhu. Why extension-based proofs fail. In preparation.
We prove that a class of fundamental shared memory tasks are not amenable to certain standard proof techniques in the field. We define a class of extension-based proofs, which contains impossibility arguments like FLP and virtually all known space lower bounds. We then show that extension-based proofs are limited in power: for instance, they cannot establish the impossibility of solving (n-1)-set agreement among n ≥ 3 processes in a wait-free manner. This problem does have a proof based on combinatorial topology. Although there are proofs that do not explicitly rely on topological representation, it was unknown whether proofs based on simpler techniques were possible. To prove limitations on extension-based proofs, we use an adversary argument between a prover and an adversarial algorithm.
We define a restricted class of algorithms, which satisfy a boundary condition. For this class of algorithms, we present a simple extension-based proof of the impossibility of (n-1)-set agreement among n processes. In addition, we give a simple extension-based proof of a tight lower bound for the expected step complexity of randomized test-and-set algorithms that satisfy a similar boundary condition. This implies that faster algorithms, if they exist, must violate the boundary condition.
@unpublished{AlistarhAEGZ2018, author = {Alistarh, Dan and Aspnes, James and Ellen, Faith and Gelashvili, Rati and Zhu, Leqi}, title = {Why extension-based proofs fail}, month = feb, year = 2018, note={Unpublished manuscript} }
Christopher Leet, Xin Wang, Y. Richard Yang, and James Aspnes. Toward the first SDN programming capacity theorem on realizing high-level programs on low-level datapaths. IEEE INFOCOM 2018 - IEEE Conference on Computer Communications, April 2018.
High-level programming and programmable data paths are two key capabilities of software-defined networking (SDN). A fundamental problem linking these two capabilities is whether a given high-level SDN program can be realized onto a given low-level SDN datapath structure. Considering all high-level programs that can be realized onto a given datapath as the programming capacity of the datapath, we refer to this problem as the SDN datapath programming capacity problem. In this paper, we conduct the first study on the SDN datapath programming capacity problem, in the general setting of high-level, datapath-oblivious, algorithmic SDN programs and state-of-the-art multi-table SDN datapath pipelines. In particular, considering datapath-oblivious SDN programs as computations and datapath pipelines as computation capabilities, we introduce a novel framework called SDN characterization functions, to map both SDN programs and datapaths into a unifying space, deriving the first rigorous result on SDN datapath programming capacity. We not only prove our results but also conduct realistic evaluations to demonstrate the tightness of our analysis.
@inproceedings{LeetWYA2018, author = {Leet, Christopher and Wang, Xin and Yang, Y. Richard and Aspnes, James}, title = {Toward the first {SDN} programming capacity theorem on realizing high-level programs on low-level datapaths}, month = apr, year = 2018, booktitle = {IEEE INFOCOM 2018 - IEEE Conference on Computer Communications}, pages = {1--9} }
Dan Alistarh, James Aspnes, and Rati Gelashvili. Space-optimal majority in population protocols. Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2018), January 2018, pp. 2221–2239.
Population protocols are a popular model of distributed computing, in which n agents with limited local state interact randomly, and cooperate to collectively compute global predicates. Inspired by recent developments in DNA programming, an extensive series of papers, across different communities, has examined the computability and complexity characteristics of this model. Majority, or consensus, is a central task in this model, in which agents need to collectively reach a decision as to which one of two initial states A or B had higher initial count. Two complexity metrics are important: the time that a protocol requires to stabilize to a stable output decision, and the state space size that each agent requires to do so.
It is currently known that majority requires Ω(log log n) states per agent to allow for fast (polylogarithmic time) stabilization, and that O(log² n) states are sufficient. Thus, there is an exponential gap between the upper and lower bounds for this problem.
We address this question. On the negative side, we provide a new lower bound of Ω(log n) states for any protocol which converges to a stable output in O(n^{1-c}) time, for any constant c > 0. This result is conditional on basic monotonicity and output assumptions, satisfied by all known protocols. Technically, it represents a significant departure from previous lower bounds, in that it does not rely on the existence of dense configurations. Instead, we introduce a new generalized surgery technique to prove the existence of incorrect executions for any algorithm that would contradict the lower bound.
On the positive side, we give a new algorithm for majority which uses O(log n) states, and converges in O(log² n) time. Central to the algorithm is a new leaderless phase clock technique, which allows nodes to synchronize in phases of Θ(n log n) consecutive interactions using O(log n) states per node, exploiting a new connection between population protocols and power-of-two-choices load balancing mechanisms. Besides logarithmic-state majority, we employ the phase clock to build a new leader election algorithm with a state space of size O(log n), which stabilizes in O(log² n) expected time.
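The paper's O(log n)-state algorithm is intricate, but the majority task itself is easy to simulate. As an illustration only (this is the classic four-state exact-majority protocol, not the algorithm of this paper; the function name and interaction budget are our own choices), strong opinions cancel in pairs and survivors convert weak opinions:

```python
import random

# Classic four-state exact-majority protocol (illustrative; NOT the
# O(log n)-state algorithm of the paper). Strong opinions 'A'/'B' cancel
# into weak opinions 'a'/'b'; surviving strong opinions convert weak ones.
# Correct whenever the initial counts of A and B differ.
RULES = {
    ('A', 'B'): ('a', 'b'),  # cancellation: preserves the count difference
    ('A', 'b'): ('A', 'a'),  # strong A converts weak b
    ('B', 'a'): ('B', 'b'),  # strong B converts weak a
}

def run_majority(n_a, n_b, rng, max_interactions=1_000_000):
    agents = ['A'] * n_a + ['B'] * n_b
    n = len(agents)
    for t in range(max_interactions):
        i, j = rng.sample(range(n), 2)      # uniformly random pair
        pair = (agents[i], agents[j])
        if pair in RULES:
            agents[i], agents[j] = RULES[pair]
        elif pair[::-1] in RULES:
            agents[j], agents[i] = RULES[pair[::-1]]
        if t % 1000 == 0:                   # periodic convergence check
            opinions = {s.upper() for s in agents}
            if len(opinions) == 1:
                return opinions.pop()
    return None                             # no stable output within budget
```

Because the difference between strong A's and strong B's is invariant under cancellation, the initial majority always survives; the price of using only four states is the slow (super-polylogarithmic) stabilization that the paper's lower bound makes precise.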
@inproceedings{AlistarhAG2018, author = {Dan Alistarh and James Aspnes and Rati Gelashvili}, editor = {Artur Czumaj}, title = {Space-Optimal Majority in Population Protocols}, booktitle = {Proceedings of the Twenty-Ninth Annual {ACM-SIAM} Symposium on Discrete Algorithms, {SODA} 2018, New Orleans, LA, USA, January 7-10, 2018}, pages = {2221--2239}, publisher = {{SIAM}}, year = {2018}, url = {https://doi.org/10.1137/1.9781611975031.144}, doi = {10.1137/1.9781611975031.144}, timestamp = {Thu, 04 Jan 2018 13:55:11 +0100}, biburl = {http://dblp.org/rec/bib/conf/soda/AlistarhAG18}, bibsource = {dblp computer science bibliography, http://dblp.org} }
James Aspnes. Clocked population protocols. ACM Symposium on Principles of Distributed Computing (PODC 2017), July 2017, pp. 431–440.
Population protocols are required to converge to the correct answer, and are subject to a fairness condition that guarantees eventual progress, but generally have no internal mechanism for detecting when this progress has occurred. We define an extension to the standard population protocol that provides each agent with a clock signal that indicates when the agent has waited long enough. To simplify the model, we represent “long enough” as an infinite time interval, and treat a clocked population protocol as operating over transfinite time. This gives a clean theoretical model that we show how to translate back into finite real-world executions where the clock ticks whenever the underlying protocol is looping or stuck.
Over finite time intervals, the protocol behaves as in the standard model. At nonzero limit ordinals ω, ω⋅2, etc., corresponding to clock ticks, the protocol switches to a limit of previous configurations supplemented by a signal registering in an extra component in some of the agents' states. Using transfinite times means that we can represent fairness over sequences of transitions that may include clock ticks with the same definition as over smaller intervals. Using arbitrary ordinals allows using times like ω² or ω³ to represent convergence that depends on detecting convergence repeatedly at lower levels.
We show that a clocked population protocol running in ω^{k} time is equivalent in power to a nondeterministic Turing machine with space complexity logarithmic in the size of the population. A consequence of this equivalence is that any symmetric predicate computed by such a protocol can be computed in less than ω² time, which requires only finitely many clock ticks.
@inproceedings{Aspnes17, author = {James Aspnes}, editor = {Elad Michael Schiller and Alexander A. Schwarzmann}, title = {Clocked Population Protocols}, booktitle = {Proceedings of the {ACM} Symposium on Principles of Distributed Computing, {PODC} 2017, Washington, DC, USA, July 25-27, 2017}, pages = {431--440}, publisher = {{ACM}}, year = {2017}, url = {http://doi.acm.org/10.1145/3087801.3087836}, doi = {10.1145/3087801.3087836}, timestamp = {Fri, 21 Jul 2017 13:32:16 +0200}, biburl = {http://dblp.dagstuhl.de/rec/bib/conf/podc/Aspnes17}, bibsource = {dblp computer science bibliography, http://dblp.org} }
Dan Alistarh, James Aspnes, David Eisenstat, Rati Gelashvili, and Ronald Rivest. Time-space trade-offs in population protocols. Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2017, pp. 2560–2579.
Population protocols are a popular model of distributed computing, in which randomly-interacting agents with little computational power cooperate to jointly perform computational tasks. Inspired by developments in molecular computation, and in particular DNA computing, recent algorithmic work has focused on the complexity of solving simple yet fundamental tasks in the population model, such as leader election (which requires convergence to a single agent in a special “leader” state), and majority (in which agents must converge to a decision as to which of two possible initial states had higher initial count). Known results point towards an inherent trade-off between the time complexity of such algorithms, and the space complexity, i.e. size of the memory available to each agent.
In this paper, we explore this trade-off and provide new upper and lower bounds for majority and leader election. First, we prove a unified lower bound, which relates the space available per node with the time complexity achievable by a protocol: for instance, our result implies that any protocol solving either of these tasks for n agents using O(log log n) states must take Ω(n / polylog n) expected time. This is the first result to characterize time complexity for protocols which employ a super-constant number of states per node, and proves that fast, poly-logarithmic running times require protocols to have relatively large space costs.
On the positive side, we give algorithms showing that fast, poly-logarithmic convergence time can be achieved using O(log² n) space per node, in the case of both tasks. Overall, our results highlight a time complexity separation between O(log log n) and Θ(log² n) state space size for both majority and leader election in population protocols, and introduce new techniques, which should be applicable more broadly.
@inproceedings{AlistarhAEGR2017, author = {Dan Alistarh and James Aspnes and David Eisenstat and Rati Gelashvili and Ronald Rivest}, title = {Time-space trade-offs in population protocols}, month = jan, year = {2017}, booktitle={Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms}, pages={2560--2579} }
James Aspnes, Joffroy Beauquier, Janna Burman, and Devan Sohier. Time and space optimal counting in population protocols. 20th International Conference on Principles of Distributed Systems, OPODIS 2016, December 13–16, 2016, Madrid, Spain, December 2016, pp. 13:1–13:17.
This work concerns the general issue of combined optimality in terms of time and space complexity. In this context, we study the problem of counting resource-limited and passively mobile nodes in the model of population protocols, in which the space complexity is crucial. The counted nodes are memory-limited anonymous devices (called agents) communicating asynchronously in pairs (according to a fairness condition). Moreover, we assume that these agents are prone to failures so that they cannot be correctly initialized.
This study considers two classical fairness conditions, and for each we investigate the issue of time optimality of (exact) counting given optimal space. First, with randomly interacting agents (probabilistic fairness), we present a “non-guessing” time-optimal protocol of O(n log n) expected interactions given an optimal space of only one bit (for a population of size n). We prove the time optimality of this protocol.
Then, under weak fairness (where every pair of agents interacts infinitely often), we show that a space optimal (semi-uniform) solution cannot converge faster than in Ω(2^{n}) time, in terms of non-null transitions (i.e., the transitions that affect the states of the interacting agents). This result together with the time complexity analysis of an already known space optimal protocol shows that it is also optimal in time (given the optimal space constraints).
@inproceedings{AspnesBBS2016, author = {James Aspnes and Joffroy Beauquier and Janna Burman and Devan Sohier}, title = {{Time and Space Optimal Counting in Population Protocols}}, booktitle = {20th International Conference on Principles of Distributed Systems (OPODIS 2016)}, pages = {13:1--13:17}, series = {Leibniz International Proceedings in Informatics (LIPIcs)}, ISBN = {978-3-95977-031-6}, ISSN = {1868-8969}, year = {2017}, volume = {70}, editor = {Panagiota Fatourou and Ernesto Jim{\'e}nez and Fernando Pedone}, publisher = {Schloss Dagstuhl--Leibniz-Zentrum fuer Informatik}, address = {Dagstuhl, Germany}, URL = {http://drops.dagstuhl.de/opus/volltexte/2017/7082}, URN = {urn:nbn:de:0030-drops-70828}, doi = {10.4230/LIPIcs.OPODIS.2016.13}, annote = {Keywords: networks of passively mobile agents/sensors, population protocols, counting, anonymous non-initialized agents, time and space complexity, lower bounds} }
James Aspnes and Eric Ruppert. Depth of a random binary search tree with concurrent insertions. In Distributed Computing - 30th International Symposium, {DISC} 2016, Paris, France, September 27–29, 2016. Proceedings, September 2016, pp. 371–384.
Shuffle a deck of n cards numbered 1 through n. Deal out the first c cards into a hand. A player then repeatedly chooses one of the cards from the hand, inserts it into a binary search tree, and then adds the next card from the deck to the hand (if the deck is not empty). When the player finally runs out of cards, how deep can the search tree be?
This problem is motivated by concurrent insertions by c processes of random keys into a binary search tree, where the order of insertions is controlled by an adversary that can delay individual processes. We show that an adversary that uses any strategy based on comparing keys cannot obtain an expected average depth greater than O(c + log n). However, the adversary can obtain an expected tree height of Ω(c log (n/c)), using a simple strategy of always playing the largest available card.
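The card game is concrete enough to simulate directly. A small sketch (the function name and parameter choices are our own) that plays the "always insert the largest card" adversary strategy and reports the height of the resulting tree:

```python
import random

def adversarial_bst_height(n, c, rng):
    """Play the card game with the 'largest card first' adversary and
    return the height (maximum edge-depth) of the resulting search tree."""
    deck = list(range(1, n + 1))
    rng.shuffle(deck)
    hand, deck = deck[:c], deck[c:]
    children = {}            # key -> [left child, right child]
    root = None
    height = 0

    def insert(key):
        nonlocal root, height
        children[key] = [None, None]
        if root is None:
            root = key
            return
        node, depth = root, 0
        while True:
            depth += 1
            side = 1 if key > node else 0
            if children[node][side] is None:
                children[node][side] = key
                height = max(height, depth)
                return
            node = children[node][side]

    while hand:
        card = max(hand)     # adversary strategy: always play the largest card
        hand.remove(card)
        insert(card)
        if deck:
            hand.append(deck.pop(0))
    return height
```

Setting c = 1 recovers an ordinary random binary search tree; increasing c lets the adversary inflate the height toward the Ω(c log(n/c)) bound.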
@inproceedings{AspnesR2016, author = {James Aspnes and Eric Ruppert}, editor = {Cyril Gavoille and David Ilcinkas}, title = {Depth of a Random Binary Search Tree with Concurrent Insertions}, booktitle = {Distributed Computing - 30th International Symposium, {DISC} 2016, Paris, France, September 27--29, 2016. Proceedings}, series = {Lecture Notes in Computer Science}, volume = {9888}, pages = {371--384}, publisher = {Springer}, year = {2016}, url = {http://dx.doi.org/10.1007/978-3-662-53426-7_27}, doi = {10.1007/978-3-662-53426-7_27}, timestamp = {Mon, 05 Sep 2016 12:49:45 +0200}, biburl = {http://dblp.dagstuhl.de/rec/bib/conf/wdag/AspnesR16}, bibsource = {dblp computer science bibliography, http://dblp.org} }
Dana Angluin, James Aspnes, and Dongqu Chen. A population protocol for binary signaling consensus. Available as YALEU/DCS/TR-1527, August 2016.
We study the convergence properties of a simple population protocol for consensus using binary signaling, where the communication in each interaction is limited to a single bit transmitted from the initiator to the responder. We consider a population which consists of n agents, where pairs of individuals are drawn uniformly at random to interact. Each agent has a confidence level for a binary preference and a more confident agent supports the preference with higher probability. An agent increases its confidence level when interacting with another agent supporting the preference, and decreases its confidence level otherwise. We prove that with high probability a three-state binary signaling population protocol reaches consensus after Θ(n log n) interactions in the worst case, regardless of the initial configuration. In the general case, a continuous-time binary signaling process in the limit will converge within O(r log nr) time (corresponding to O(nr log nr) interactions in expectation) if the initial configuration is monotone, where r is the number of confidence levels. In the other direction, we also show a convergence lower bound Ω(nr + n log n) on the number of interactions when r is large. Experimental results are presented to support our theoretical results and to provide evidence for some conjectures.
@techreport{AngluinAC2016, author = {Dana Angluin and James Aspnes and Dongqu Chen}, title = {A population protocol for binary signaling consensus}, month = aug, year = {2016}, institution="Yale University Department of Computer Science", number="YALEU/DCS/TR-1527" }
James Aspnes, Keren Censor-Hillel, and Eitan Yaakobi. Concurrent use of write-once memory. Journal of Parallel and Distributed Computing 113:250–260, March 2018. An earlier version appeared in Structural Information and Communication Complexity - 23rd International Colloquium, SIROCCO 2016, Helsinki, Finland, July 19–21, 2016, Revised Selected Papers, July 2016, pp. 127–142.
We consider the problem of implementing general shared-memory objects on top of write-once bits, which can be changed from 0 to 1 but not back again. In a sequential setting, write-once memory (WOM) codes have been developed that allow simulating memory that supports multiple writes, even of large values, setting an average of 1+o(1) write-once bits per write. We show that similar space efficiencies can be obtained in a concurrent setting, though at the cost of high time complexity and a fixed bound on the number of write operations. As an alternative, we give an implementation that permits unboundedly many writes and has much better amortized time complexity, but at the cost of unbounded space complexity. Whether one can obtain both low time complexity and low space complexity in the same implementation remains open.
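For background on sequential WOM codes, the classic example is the Rivest-Shamir code (an illustration of the idea, not this paper's construction), which supports two writes of a 2-bit value using only three write-once bits:

```python
# Rivest-Shamir write-once-memory code: two writes of a 2-bit value into
# three write-once bits (bits may change 0 -> 1 only, never back).
# First-generation codewords have weight <= 1; a second write stores the
# bitwise complement of the new value's first-generation codeword.
GEN1 = {0: 0b000, 1: 0b001, 2: 0b010, 3: 0b100}
DECODE1 = {v: k for k, v in GEN1.items()}

def write(cell, value, first_write):
    if first_write:
        target = GEN1[value]
    else:
        if decode(cell) == value:
            return cell              # value unchanged: no bits need to flip
        target = 0b111 ^ GEN1[value]
    assert cell & ~target == 0, "would need to reset a 1 back to 0"
    return cell | target

def decode(cell):
    ones = bin(cell).count('1')
    if ones <= 1:
        return DECODE1[cell]         # first generation (weight <= 1)
    return DECODE1[0b111 ^ cell]     # second generation: complement coding
```

Since distinct first-generation codewords never share a set bit, the second write's complement codeword always contains the current cell contents, so the 0-to-1-only constraint is never violated.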
@article{AspnesCY2018, author = {James Aspnes and Keren Censor{-}Hillel and Eitan Yaakobi}, title = {Concurrent use of write-once memory}, journal = {Journal of Parallel and Distributed Computing}, volume = {113}, pages = {250--260}, month = mar, year = {2018}, url = {https://doi.org/10.1016/j.jpdc.2017.12.001}, doi = {10.1016/j.jpdc.2017.12.001}, timestamp = {Tue, 30 Jan 2018 14:44:37 +0100}, biburl = {https://dblp.org/rec/bib/journals/jpdc/AspnesCY18}, bibsource = {dblp computer science bibliography, https://dblp.org} }
Yves Mocquard, Emmanuelle Anceaume, James Aspnes, Yann Busnel, and Bruno Sericola. Counting with population protocols. 2015 IEEE 14th International Symposium on Network Computing and Applications, September 2015, pp. 35–42.
The population protocol model provides theoretical foundations for analyzing the properties emerging from simple pairwise interactions among a very large number n of anonymous agents. The problem tackled in this paper is the following: is there an efficient population protocol that exactly counts the difference κ between the number of agents that initially and independently set their state to A and the number that initially set it to B, assuming that each agent only uses a finite set of states? We propose a solution which guarantees with high probability that after O(log n) interactions any agent outputs the exact value of κ. Simulation results illustrate our theoretical analysis.
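The flavor of exact counting can be seen in a discrete-averaging sketch (our simplified rendering, not necessarily the paper's exact protocol): initialize A-agents to +n and B-agents to −n, and let each interacting pair replace its values by the rounded halves of their sum. The population sum n·κ is invariant, so once all values are within 1 of each other, every agent holds exactly κ.

```python
import random
from math import ceil, floor

def count_difference(n_a, n_b, rng, max_interactions=500_000):
    """Discrete-averaging sketch: every agent eventually holds kappa = n_a - n_b."""
    n = n_a + n_b
    vals = [n] * n_a + [-n] * n_b        # sum is n * kappa and never changes
    for _ in range(max_interactions):
        i, j = rng.sample(range(n), 2)
        s = vals[i] + vals[j]
        vals[i], vals[j] = ceil(s / 2), floor(s / 2)
        if max(vals) - min(vals) <= 1:   # spread <= 1 forces all values = kappa
            return vals[0]
    return None                          # did not converge within the budget
```

Once the spread is at most 1, all values must equal κ: values in {m, m+1} summing to the integer multiple n·κ force every agent to the same integer.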
@inproceedings{MocquardAABS2015, author = {Yves Mocquard and Emmanuelle Anceaume and James Aspnes and Yann Busnel and Bruno Sericola}, title = {Counting with population protocols}, booktitle = {2015 IEEE 14th International Symposium on Network Computing and Applications}, month = sep, year = {2015}, pages = {35--42} }
Dana Angluin, James Aspnes, and Lev Reyzin. Network construction with subgraph connectivity constraints. Journal of Combinatorial Optimization, 29(2):418–432, February 2015.
We consider the problem introduced by Korach and Stern (2008) of building a network given connectivity constraints. A network designer is given a set of vertices V and constraints S_{i} ⊆ V, and seeks to build the lowest cost set of edges E such that each S_{i} induces a connected subgraph of (V,E). First, we answer a question posed by Korach and Stern: for the offline version of the problem, we prove an Ω(log(n)) hardness of approximation result for uniform cost networks (where edge costs are all 1) and give an algorithm that almost matches this bound, even in the arbitrary cost case. Then we consider the online problem, where the constraints must be satisfied as they arrive. We give an O(n log(n))-competitive algorithm for the arbitrary cost online problem, which has an Ω(n)-competitive lower bound. We look at the uniform cost case as well and give an O(n^{2/3}log^{2/3}(n))-competitive algorithm against an oblivious adversary, as well as an Ω(√n)-competitive lower bound against an adaptive adversary. We also examine cases when the underlying network graph is known to be a star or a path, and prove matching upper and lower bounds of Θ(log(n)) on the competitive ratio for them.
@article{AngluinAR2015, author = {Dana Angluin and James Aspnes and Lev Reyzin}, title = {Network construction with subgraph connectivity constraints}, month=feb, year = 2015, journal="Journal of Combinatorial Optimization", volume = 29, number = 2, pages = {418--432} }
Dan Alistarh, James Aspnes, Valerie King, and Jared Saia. Communication-efficient randomized consensus. To appear, Distributed Computing. An earlier version appeared in Distributed Computing — 28th International Symposium, DISC 2014, Austin, TX, USA, October 12–15, 2014. Proceedings, Lecture Notes in Computer Science 8784, Springer, October 2014, pp. 61–75.
We consider the problem of consensus in the challenging classic model. In this model, the adversary is adaptive; it can choose which processors crash at any point during the course of the algorithm. Further, communication is via asynchronous message passing: there is no known upper bound on the time to send a message from one processor to another, and all messages and coin flips are seen by the adversary.
We describe a new randomized consensus protocol with expected message complexity O(n² log² n) when fewer than n/2 processes may fail by crashing. This is an almost-linear improvement over the best previously known protocol, and within logarithmic factors of a known Ω(n²) message lower bound. The protocol further ensures that no process sends more than O(n log³ n) messages in expectation, which is again within logarithmic factors of optimal. We also present a generalization of the algorithm to an arbitrary number of failures t, which uses expected O(nt + t² log² t) total messages.
Our approach is to build a message-efficient, resilient mechanism for aggregating individual processor votes, implementing a message-passing equivalent of a weak shared coin. Roughly, in our protocol, a processor first announces its votes to small groups, then propagates them to increasingly larger groups as it generates more and more votes. To bound the number of messages that an individual process might have to send or receive, the protocol progressively increases the weight of generated votes. The main technical challenge is bounding the impact of votes that are still “in flight” (generated, but not fully propagated) on the final outcome of the shared coin, especially since such votes might have different weights. We achieve this by leveraging the structure of the algorithm, and a technical argument based on martingale concentration bounds. Overall, we show that it is possible to build an efficient message-passing implementation of a shared coin, and in the process (almost-optimally) solve the classic consensus problem in the asynchronous message-passing model.
@inproceedings{AlistarhAKS2014, author = {Dan Alistarh and James Aspnes and Valerie King and Jared Saia}, title = {Communication-Efficient Randomized Consensus}, month = oct, pages = {61--75}, editor = {Fabian Kuhn}, booktitle = {Distributed Computing - 28th International Symposium, {DISC} 2014, Austin, TX, USA, October 12--15, 2014. Proceedings}, series = {Lecture Notes in Computer Science}, year = {2014}, volume = {8784}, publisher = {Springer}, }
Dan Alistarh, James Aspnes, Michael Bender, Rati Gelashvili, and Seth Gilbert. Dynamic task allocation in asynchronous shared memory. 2014 ACM-SIAM Symposium on Discrete Algorithms, January 2014, pp. 416–435.
Task allocation is a classic distributed problem in which a set of p potentially faulty processes must cooperate to perform a set of m tasks. This paper considers a new dynamic version of the problem, in which tasks are injected adversarially during an asynchronous execution. A major challenge in this setting is the fact that, at the same time, the adaptive adversary controls the scheduling and process crashes, as well as choosing the input. We give the first asynchronous shared-memory algorithm for dynamic task allocation, and we prove that our solution is optimal within logarithmic factors. The main algorithmic idea is a randomized data structure called a dynamic to-do tree, which allows processes to pick new tasks to perform at random from the set of available tasks, and to insert tasks at random available locations in the data structure. Our analysis shows that these properties avoid duplicating work unnecessarily. On the other hand, since the adversary controls the input as well as the scheduling, it can induce executions where lots of processes contend for a few available tasks, which is inefficient. However, we prove that every algorithm has the same problem: given an arbitrary input, if OPT is the worst-case complexity of the optimal algorithm on that input, then the expected complexity of our algorithm on the same input is O(OPT log³ m).
@inproceedings{AlistarhABGG2014, author = {Dan Alistarh and James Aspnes and Michael Bender and Rati Gelashvili and Seth Gilbert}, title = {Dynamic task allocation in asynchronous shared memory}, month=jan, year = 2014, booktitle={2014 ACM-SIAM Symposium on Discrete Algorithms}, pages={416--435} }
James Aspnes and Keren Censor-Hillel. Atomic snapshots in expected O(log³ n) steps using randomized helping. Submitted to Distributed Computing, January 2014. Last revised February 2017. An earlier version appeared in Distributed Computing: 27th International Symposium, DISC 2013, Jerusalem, Israel, October 14--18, 2013. Proceedings, Lecture Notes in Computer Science 8205, Springer-Verlag, October 2013, pp. 254–268. There is an important erratum for the proceedings version of this paper.
A randomized construction of single-writer snapshot objects from atomic registers is given. The cost of each snapshot operation is O(log³ n) atomic register steps with high probability, where n is the number of processes, even against an adaptive adversary. This is an exponential improvement on the linear cost of the previous best known unrestricted snapshot construction and on the linear lower bound for deterministic constructions, and does not require limiting the number of updates as in previous sublinear constructions. One of the main ingredients in the construction is a novel randomized helping technique that allows out-of-date processes to obtain up-to-date information.
Our construction can be adapted to implement snapshots in a message-passing system. While a direct adaptation using the Attiya-Bar-Noy-Dolev construction gives a cost of O(log³ n) time and O(n log³ n) messages per operation with high probability, we show that exploiting the inherent parallelism of a message-passing system can eliminate the need for randomized helping and reduce the complexity to O(log² n) time and O(n log² n) messages per operation in the worst case. This implementation includes an O(1)-time, O(n)-message construction of an unbounded max register that may be of independent interest.
@incollection{AspnesC2013, year={2013}, isbn={978-3-642-41526-5}, booktitle={Distributed Computing: 27th International Symposium, DISC 2013, Jerusalem, Israel, October 14--18, 2013. Proceedings}, volume={8205}, series={Lecture Notes in Computer Science}, editor={Afek, Yehuda}, doi={10.1007/978-3-642-41527-2_18}, title={Atomic Snapshots in ${O}(\log^3 n)$ Steps Using Randomized Helping}, url={http://dx.doi.org/10.1007/978-3-642-41527-2_18}, publisher={Springer Berlin Heidelberg}, author={Aspnes, James and Censor-Hillel, Keren}, pages={254--268} }
Dan Alistarh, James Aspnes, George Giakkoupis, and Philipp Woelfel. Randomized loose renaming in O(log log n) time. 2013 ACM Symposium on Principles of Distributed Computing, July 2013, pp. 200–209.
Renaming is a classic distributed coordination task in which a set of processes must pick distinct identifiers from a small namespace. In this paper, we consider the time complexity of this problem when the namespace is linear in the number of participants, a variant known as loose renaming. We give a non-adaptive algorithm with O(log log n) (individual) step complexity, where n is a known upper bound on contention, and an adaptive algorithm with step complexity O((log log k)²), where k is the actual contention in the execution. Both bounds hold with high probability against a strong adaptive adversary. The running time improvement over previously known solutions is exponential.
We complement the algorithms with an Ω(log log n) expected time lower bound on the complexity of randomized renaming using test-and-set operations and linear space. The result is based on a new coupling technique, and is the first to apply to non-adaptive randomized renaming. Since our algorithms use O(n) test-and-set objects, our results provide matching bounds on the cost of loose renaming in this setting.
@inproceedings{AlistarhAGW2013, author = {Dan Alistarh and James Aspnes and George Giakkoupis and Philipp Woelfel}, title = {Randomized loose renaming in $O(\log \log n)$ time}, month=jul, year = 2013, booktitle={2013 ACM Symposium on Principles of Distributed Computing}, pages={200--209} }
Dana Angluin, James Aspnes, Sarah Eisenstat, and Aryeh Kontorovich. On the learnability of shuffle ideals. Journal of Machine Learning Research, 14(Jun):1513–1531, June 2013. An earlier version appeared in Twenty-Third International Conference on Algorithmic Learning Theory, October 2012, pp. 111–123.
PAC learning unrestricted regular languages is long known to be a very difficult problem. The class of shuffle ideals is a very restricted subclass of regular languages, where the shuffle ideal generated by a string u is the collection of all strings containing u as a subsequence. This fundamental language family is of theoretical interest in its own right and provides the building blocks for other important language families. Despite its apparent simplicity, the class of shuffle ideals appears quite difficult to learn. In particular, just as for unrestricted regular languages, the class is not properly PAC learnable in polynomial time if RP ≠ NP, and PAC learning the class improperly in polynomial time would imply polynomial time algorithms for certain fundamental problems in cryptography. In the positive direction, we give an efficient algorithm for properly learning shuffle ideals in the statistical query (and therefore also PAC) model under the uniform distribution.
@article{AngluinAEK2013, author = {Dana Angluin and James Aspnes and Sarah Eisenstat and Aryeh Kontorovich}, title = {On the Learnability of Shuffle Ideals}, journal = {Journal of Machine Learning Research}, month = jun, year = {2013}, volume = {14}, number = {Jun}, pages = {1513--1531}, url = {http://jmlr.org/papers/v14/angluin13a.html} }
James Aspnes. A one-bit swap object using test-and-sets and a max register. Available as YALEU/DCS/TR-1464, October 2012.
We describe a linearizable, wait-free implementation of a one-bit swap object from a single max register and an unbounded array of test-and-set bits. Each swap operation takes at most three steps. Using standard randomized constructions, the max register and test-and-set bits can be replaced by read-write registers, at the price of raising the cost of a swap operation to an expected O(max(log n, min(log t, n))) steps, where t is the number of times the swap object has previously changed its value and n is the number of processes.
@techreport{Aspnes2012swap, author = {James Aspnes}, title = {A one-bit swap using test-and-sets and a max register}, institution="Yale University Department of Computer Science", number="YALEU/DCS/TR-1464", month=oct, year = 2012 }
Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Rachid Guerraoui. Tight bounds for asynchronous renaming. Journal of the Association for Computing Machinery, 61(3):18, May 2014.
This paper presents the first tight bounds on the complexity of shared-memory renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct identifiers from a small namespace.
We first prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size sub-exponential in k, where k is the number of participants. The bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic concurrent fetch-and-increment counters, queues and stacks. The proof is based on a new reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement this individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This result applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter implementations that are tight within logarithmic factors.
On the algorithmic side, we give a protocol that transforms any sorting network into a strong adaptive renaming algorithm, with expected cost equal to the depth of the sorting network. This gives a tight adaptive renaming algorithm with expected step complexity O(log k), where k is the contention in the current execution. This algorithm is the first to achieve sublinear time, and it is time-optimal as per our randomized lower bound. Finally, we use this renaming protocol to build monotone-consistent counters with logarithmic step complexity and linearizable fetch-and-increment registers with polylogarithmic cost.
@article{AlistarhACHGG2014, author = {Dan Alistarh and James Aspnes and Keren Censor-Hillel and Seth Gilbert and Rachid Guerraoui}, title = {Tight bounds for asynchronous renaming}, month=may, year = 2014, journal = jacm, volume=61, number=3, pages=18 }
James Aspnes. Faster randomized consensus with an oblivious adversary. Distributed Computing 28(1):21–29, February 2015. (PODC 2012 special issue). An earlier version appeared in 2012 ACM Symposium on Principles of Distributed Computing, July 2012, pp. 1–8.
Two new algorithms are given for randomized consensus in a shared-memory model with an oblivious adversary. Each is based on a new construction of a conciliator, an object that guarantees termination and validity, but that only guarantees agreement with constant probability. The first conciliator assumes unit-cost snapshots and achieves agreement among n processes with probability 1-ε in O(log* n + log(1/ε)) steps for each process. The second uses ordinary multi-writer registers, and achieves agreement with probability 1-ε in O(log log n + log(1/ε)) steps. Combining these constructions with known results gives randomized consensus for arbitrarily many possible input values using unit-cost snapshots in O(log* n) expected steps and randomized consensus for up to (log n)^{O(log log log n)} possible input values using ordinary registers in O(log log n) expected steps.
@article{Aspnes2015, author = {James Aspnes}, title = {Faster randomized consensus with an oblivious adversary}, month = feb, year = 2015, journal = {Distributed Computing}, volume = 28, number = 1, pages={21--29} }
James Aspnes, Hagit Attiya, Keren Censor-Hillel, and Faith Ellen. Limited-use atomic snapshots with polylogarithmic step complexity. Journal of the Association for Computing Machinery 62(1):3, February 2015. Erratum. An earlier version appeared in 2012 ACM Symposium on Principles of Distributed Computing, July 2012, pp. 375–384, under the title “Faster than optimal snapshots (for a while)” .
This paper presents a novel implementation of a snapshot object for n processes, with O(log² b log n) step complexity for update operations and O(log b) step complexity for scan operations, where b is the number of updates. The algorithm uses only reads and writes.
For polynomially many updates, this is an exponential improvement on previous snapshot algorithms, which have linear step complexity. It overcomes the existing Ω(n) lower bound on step complexity by having the step complexity depend on the number of updates. The key to this implementation is the construction of a new object consisting of a pair of max registers that supports a scan operation.
@article{AspnesACHE2015, author = {James Aspnes and Hagit Attiya and Keren Censor-Hillel and Faith Ellen}, title = {Limited-use snapshots with polylogarithmic step complexity}, month = feb, year = 2015, journal = jacm, volume = 62, number = 1, pages = 3 } @unpublished{AspnesACHE2015erratum, author = {James Aspnes and Hagit Attiya and Keren Censor-Hillel and Faith Ellen}, title = {Erratum: Limited-use snapshots with polylogarithmic step complexity}, month = apr, year = 2017, note={To appear, JACM. Available at \texttt{http://www.cs.yale.edu/homes/aspnes/papers/limited-use-snapshots-abstract.html}.} }
James Aspnes, Hagit Attiya, Keren Censor-Hillel, and Danny Hendler. Lower bounds for restricted-use objects. SIAM Journal on Computing 45(3):734–762, 2016. An earlier version appeared in Twenty-Fourth ACM Symposium on Parallel Algorithms and Architectures, July 2012, pp. 172–181.
Concurrent objects play a key role in the design of applications for multi-core architectures, making it imperative to precisely understand their complexity requirements. For some objects, it is known that implementations can be significantly more efficient when their usage is restricted. However, apart from the specific restriction of one-shot implementations, where each process may apply only a single operation to the object, very little is known about the complexities of objects under general restrictions.
This paper draws a more complete picture by defining a large class of objects for which an operation applied to the object can be "perturbed" L consecutive times, and by proving lower bounds on their space complexity and on the time complexity of deterministic implementations of such objects. This class includes bounded-value max registers, limited-use approximate and exact counters, and limited-use collect and compare-and-swap objects; L depends on the number of times the object can be accessed or the maximum value it can support.
For n-process implementations that use only historyless primitives, we prove Ω(min(L,n)) space complexity lower bounds, which hold for both deterministic and randomized implementations. For deterministic implementations, we prove lower bounds of Ω(min(log L, n)) on the worst-case step complexity of an operation. When arbitrary primitives can be used, we prove that either some operation incurs Ω(min(log L, n)) memory stalls or some operation performs Ω(min(log L, n)) steps.
In addition to our deterministic time lower bounds, the paper establishes lower bounds on the expected step complexity of restricted-use randomized versions of many of these objects in a weak oblivious adversary model.
@article{AspnesCAH2016, author = {James Aspnes and Keren Censor{-}Hillel and Hagit Attiya and Danny Hendler}, title = {Lower Bounds for Restricted-Use Objects}, journal = {{SIAM} J. Comput.}, volume = {45}, number = {3}, pages = {734--762}, year = {2016}, url = {http://dx.doi.org/10.1137/130905022}, doi = {10.1137/130905022}, timestamp = {Thu, 21 Jul 2016 11:25:37 +0200}, biburl = {http://dblp.dagstuhl.de/rec/bib/journals/siamcomp/AspnesCAH16}, bibsource = {dblp computer science bibliography, http://dblp.org} }
James Aspnes and Yitong Yin. Randomized load balancing by joining and splitting bins. Information Processing Letters, 112(8–9):309–313, April 2012.
Consider the following load balancing scenario: a certain amount of work load is distributed among a set of machines that may change over time as machines join and leave the system. Upon an arrival of a new machine, one of the existing machines gives some of its load to the new machine; and upon a departure of a machine, it gives all its load away to one of the existing machines in the system. Such load balancing schemes can be modeled as a simple game of joining and splitting weighted bins. Each bin corresponds to a machine in the system, and the weight of the bin represents the amount of load assigned to the machine. The arrival of a new machine corresponds to a split of a bin, and the departure of an existing machine is represented by joining two bins.
We consider what happens when the joins and splits are randomized. When the bins are split with probability proportional to their weights, it is not hard to see that this gives the same behavior as uniformly cutting a ring as in (Karger et al., 1997), which yields an O(log n) load factor for n bins. When it is infeasible to bias the random choice with the weights, it is natural to implement uniform splits, where the split bin is sampled uniformly. Despite its simple definition, analyzing the performance of this natural load-distribution mechanism is a nontrivial task.
In this paper, we apply a novel technique based on vector norms to analyze the load balancing performance of uniform random joins and splits. We show that if only splits (with no joins) are applied, the expected load factor, the ratio between the maximum weight and the average weight of the bins, is between Ω(n^{0.5}) and O(n^{0.742}). We then study the performance of mixed joins and splits, and show that the expected load factor approaches O(n^{1/√((1/2) lg n)}) after alternately applying sufficiently many joins and splits to an arbitrary initial load assignment of n bins. These results demonstrate that the good load factor obtained by (Karger et al., 1997) depends strongly on the ability to preferentially split heavily-loaded bins.
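To see the uniform-split behavior empirically, the process can be simulated with a short Monte-Carlo sketch (an illustration only, not the paper's analysis; it assumes each split moves half of the chosen bin's weight to the new bin):

```python
import random

def uniform_splits(n, seed=0):
    """Start from one bin holding all the load; apply n-1 uniform
    random splits, each halving the weight of the chosen bin."""
    rng = random.Random(seed)
    bins = [1.0]
    while len(bins) < n:
        i = rng.randrange(len(bins))   # split bin sampled uniformly
        half = bins[i] / 2
        bins[i] = half
        bins.append(half)
    return bins

def load_factor(bins):
    """Ratio of the maximum weight to the average weight."""
    return max(bins) / (sum(bins) / len(bins))

if __name__ == "__main__":
    n = 1024
    trials = [load_factor(uniform_splits(n, seed=s)) for s in range(20)]
    # Averaging over seeds makes the polynomial (not logarithmic) growth visible.
    print(sum(trials) / len(trials))
```

Splitting proportionally to weight instead (replace `rng.randrange` with a weighted choice) brings the simulated load factor back down toward the O(log n) behavior of the ring-cutting scheme.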
@article{AspnesY2012, author = {James Aspnes and Yitong Yin}, title = {Randomized load balancing by joining and splitting bins}, month=apr, year = 2012, journal="Information Processing Letters", volume=112, number={8--9}, pages={309--313} }
Dan Alistarh, James Aspnes, Seth Gilbert, and Rachid Guerraoui. The complexity of renaming. Fifty-Second Annual IEEE Symposium on Foundations of Computer Science, October 2011, pp. 718–727.
We study the complexity of renaming, a fundamental problem in distributed computing in which a set of processes need to pick distinct names from a given namespace. We prove an individual lower bound of Ω(k) process steps for deterministic renaming into any namespace of size sub-exponential in k, where k is the number of participants. This bound is tight: it draws an exponential separation between deterministic and randomized solutions, and implies new tight bounds for deterministic fetch-and-increment registers, queues and stacks. The proof of the bound is interesting in its own right, for it relies on the first reduction from renaming to another fundamental problem in distributed computing: mutual exclusion. We complement our individual bound with a global lower bound of Ω(k log(k/c)) on the total step complexity of renaming into a namespace of size ck, for any c ≥ 1. This applies to randomized algorithms against a strong adversary, and helps derive new global lower bounds for randomized approximate counter and fetch-and-increment implementations, all tight within logarithmic factors.
@inproceedings{AlistarhAGG2011, author = {Dan Alistarh and James Aspnes and Seth Gilbert and Rachid Guerraoui}, title = {The complexity of renaming}, month = oct, year = 2011, booktitle={Fifty-Second Annual IEEE Symposium on Foundations of Computer Science}, pages={718--727} }
James Aspnes. Randomized consensus in expected O(n²) total work using single-writer registers. Distributed Computing: 25th International Symposium, DISC 2011. Lecture Notes in Computer Science 6950, Springer-Verlag, September 2011, pp. 363–373.
A new weak shared coin protocol yields a randomized wait-free shared-memory consensus protocol that uses an optimal O(n²) expected total work with single-writer registers despite asynchrony and process crashes. Previously, no protocol was known that achieved this bound without using multi-writer registers.
@inproceedings{Aspnes2011, author = {James Aspnes}, title = {Randomized consensus in expected {$O(n^2)$} total work using single-writer registers}, month =sep, year = 2011, booktitle="Distributed Computing: 25th International Symposium, DISC 2011", series="Lecture Notes in Computer Science", volume=6950, publisher="Springer-Verlag", pages={363--373}, }
Dan Alistarh and James Aspnes. Sub-logarithmic test-and-set against a weak adversary. Distributed Computing: 25th International Symposium, DISC 2011. Lecture Notes in Computer Science 6950, Springer-Verlag, September 2011, pp. 97–109.
A randomized implementation is given of a test-and-set register with O(log log n) individual step complexity and O(n) total step complexity against an oblivious adversary. The implementation is linearizable and multi-shot, and shows an exponential complexity improvement over previous solutions designed to work against a strong adversary.
@inproceedings{AlistarhA2011, author = {Dan Alistarh and James Aspnes}, title = {Sub-logarithmic test-and-set against a weak adversary}, month = sep, year = 2011, booktitle="Distributed Computing: 25th International Symposium, DISC 2011", series="Lecture Notes in Computer Science", volume=6950, publisher="Springer-Verlag", pages={97--109}, }
Dan Alistarh, James Aspnes, Keren Censor-Hillel, Seth Gilbert, and Morteza Zadimoghaddam. Optimal-time adaptive tight renaming, with applications to counting. Thirtieth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, June 2011, pp. 239–248.
We give two new randomized algorithms for strong renaming, both of which work against an adaptive adversary in asynchronous shared memory. The first uses repeated sampling over a sequence of arrays of decreasing size to assign unique names to each of n processes with step complexity O(log³ n). The second transforms any sorting network into a strong adaptive renaming protocol, with an expected cost equal to the depth of the sorting network. Using an AKS sorting network, this gives a strong adaptive renaming algorithm with step complexity O(log k), where k is the contention in the current execution. We show this to be optimal based on a classic lower bound of Jayanti. We also show that any such strong renaming protocol can be used to build a monotone-consistent counter with logarithmic step complexity (at the cost of adding a max register) or a linearizable fetch-and-increment register (at the cost of increasing the step complexity by a logarithmic factor).
@inproceedings{AlistarhACHGZ2011, author = {Dan Alistarh and James Aspnes and Keren Censor-Hillel and Seth Gilbert and Morteza Zadimoghaddam}, title = {Optimal-time adaptive tight renaming, with applications to counting}, month = jun, year = 2011, booktitle = {Proceedings of the Thirtieth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing}, pages={239--248}, }
James Aspnes and Faith Ellen. Tight bounds for adopt-commit objects. Theory of Computing Systems 55(3):451–474. (SPAA 2011 special issue). An earlier version appeared in 23rd Annual ACM Symposium on Parallelism in Algorithms and Architectures, June 2011, pp. 317–324, under the title “Tight bounds for anonymous adopt-commit objects.”
We give matching upper and lower bounds of Θ(min(log m / log log m, n)) for the individual step complexity of a wait-free m-valued adopt-commit object implemented using multi-writer registers for n anonymous processes. While the upper bound is deterministic, the lower bound holds for randomized adopt-commit objects as well. Our results are based on showing that adopt-commit objects are equivalent, up to small additive constants, to a simpler class of objects that we call conflict detectors.
Our anonymous lower bound also applies to the individual step complexity of m-valued wait-free anonymous consensus, even for randomized algorithms with global coins against an oblivious adversary. The upper bound can be used to slightly improve the cost of randomized consensus in an oblivious-adversary model.
For deterministic non-anonymous implementations of adopt-commit objects, we show a lower bound of Ω(min(log m / log log m, √(log n) / log log n)) and an upper bound of O(min(log m / log log m, log n)) on the worst-case individual step complexity. For randomized non-anonymous implementations, we show that any execution contains at least one process whose steps exceed the deterministic lower bound.
@article{AspnesE2014, year={2014}, issn={1432-4350}, journal={Theory of Computing Systems}, volume={55}, number={3}, doi={10.1007/s00224-013-9448-1}, title={Tight Bounds for Adopt-Commit Objects}, url={http://dx.doi.org/10.1007/s00224-013-9448-1}, publisher={Springer US}, keywords={Distributed computing; Shared memory; Anonymity; Lower bounds; Covering argument; Adopt-commit; Randomized consensus}, author={Aspnes, James and Ellen, Faith}, pages={451--474}, language={English} }
Dana Angluin, James Aspnes, and Raonne Barbosa Vargas. Mutation systems. International Journal of Computer Mathematics 90(6):1132–1149, June 2013. (LATA 2011 special issue). An earlier version appeared in Language and Automata Theory and Applications: 5th International Conference, LATA 2011, Tarragona, Spain, May 26–31, 2011, Lecture Notes in Computer Science 6638, Springer-Verlag, May 2011, pp. 92–104.
We propose Mutation Systems as a model of the evolution of a string subject to the effects of mutations and a fitness function. One fundamental question about such a system is whether, knowing the rules for mutation and fitness, we can predict whether one string can evolve into another. To explore this issue we define a specific kind of mutation system with point mutations and a fitness function based on conserved strongly k-testable string patterns. We show that for k ≥ 2, such systems can simulate computation by both finite state machines and cellular automata. The cellular automaton simulation shows that in this framework, universal computation is possible and the question of whether one string can evolve into another is undecidable. We also analyze the efficiency of the finite state machine simulation assuming random point mutations.
@article{AngluinABV2013, author = {Dana Angluin and James Aspnes and Barbosa Vargas, Raonne}, title = {Mutation systems}, month = jun, year = 2013, journal = {International Journal of Computer Mathematics}, volume = 90, number = 6, pages = {1132--1149} }
James Aspnes. Slightly smaller splitter networks. Available as YALEU/DCS/TR-1438, November 2010, and as arXiv:1011.3170.
The classic renaming protocol of Moir and Anderson (1995) uses a network of Θ(n²) splitters to assign unique names to n processes with unbounded initial names. We show how to reduce this bound to Θ(n^{3/2}) splitters.
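For reference, the splitter building block used in these networks is itself tiny. The following is a minimal threaded sketch of the standard Moir-Anderson splitter (class and variable names are my own); under atomic-register semantics at most one process stops, and when two or more enter, not all go right and not all go down:

```python
import threading

class Splitter:
    """Moir-Anderson splitter: each arriving process leaves with
    'stop', 'right', or 'down'. At most one process stops."""
    def __init__(self):
        self.x = None    # last-writer-wins register
        self.y = False   # sticky boolean register

    def enter(self, pid):
        self.x = pid
        if self.y:
            return "right"
        self.y = True
        if self.x == pid:   # no one overwrote x in the meantime
            return "stop"
        return "down"

if __name__ == "__main__":
    splitter = Splitter()
    results = {}
    def run(pid):
        results[pid] = splitter.enter(pid)
    threads = [threading.Thread(target=run, args=(i,)) for i in range(8)]
    for t in threads: t.start()
    for t in threads: t.join()
    assert list(results.values()).count("stop") <= 1
    print(results)
```

A renaming network wires Θ(n²) such splitters into a grid, with each process acquiring the name of the splitter where it stops; the result above reduces the number of splitters needed to Θ(n^{3/2}).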
@techreport{Aspnes2010splitters, author = {James Aspnes}, title = {Slightly smaller splitter networks}, institution="Yale University Department of Computer Science", number="YALEU/DCS/TR-1438", month=nov, year = 2010 }
Dana Angluin, James Aspnes, and Lev Reyzin. Inferring social networks from outbreaks. Algorithmic Learning Theory, 21st International Conference, Lecture Notes in Computer Science 6331, Springer-Verlag, October 2010, pp. 104–118.
We consider the problem of inferring the most likely social network given connectivity constraints imposed by observations of outbreaks within the network. Given a set of vertices (or agents) V and constraints (or observations) S_{i} ⊆ V, we seek to find a minimum log-likelihood cost (or maximum likelihood) set of edges (or connections) E such that each S_{i} induces a connected subgraph of (V,E). For the offline version of the problem, we prove an Ω(log n) hardness of approximation result for uniform cost networks and give an algorithm that almost matches this bound, even for arbitrary costs. Then we consider the online problem, where the constraints are satisfied as they arrive. We give an O(n log n)-competitive algorithm for the arbitrary cost online problem, which has an Ω(n)-competitive lower bound. We look at the uniform cost case as well and give an O(n^{2/3} log^{2/3}n)-competitive algorithm against an oblivious adversary, as well as an Ω(√n)-competitive lower bound against an adaptive adversary. We examine cases when the underlying network graph is known to be a star or a path, and prove matching upper and lower bounds of Θ(log n) on the competitive ratio for them.
@inproceedings{AngluinAR2010outbreaks, author = {Dana Angluin and James Aspnes and Lev Reyzin}, title = {Inferring social networks from outbreaks}, month=oct, year = 2010, pages = {104--118}, booktitle = {Algorithmic Learning Theory, 21st International Conference, ALT 2010, Canberra, Australia, October 6-8, 2010. Proceedings}, publisher = {Springer}, series = {Lecture Notes in Computer Science}, volume = {6331} }
Dana Angluin, James Aspnes, Rida A. Bazzi, Jiang Chen, David Eisenstat, and Goran Konjevod. Effective storage capacity of labeled graphs. Information and Computation 234:44–56, February 2014. (SSS 2010 special issue). An earlier version appeared in Stabilization, Safety, and Security of Distributed Systems: 12th International Symposium, SSS 2010, New York, NY, USA, September 20--22, 2010. Proceedings, Lecture Notes in Computer Science, volume 6366, Springer-Verlag, September 2010, pp. 573–587, under the title “Storage capacity of labeled graphs.” (Winner, Best Student Paper award.)
We consider the question of how much information can be stored by labeling the vertices of a connected undirected graph G using a constant-size set of labels, when isomorphic labelings are not distinguishable. An exact information-theoretic bound is easily obtained by counting the number of isomorphism classes of labelings of G, which we call the information-theoretic capacity of the graph. More interesting is the effective capacity of members of some class of graphs, the number of states distinguishable by a Turing machine that uses the labeled graph itself in place of the usual linear tape. We show that the effective capacity equals the information-theoretic capacity up to constant factors for trees, random graphs with polynomial edge probabilities, and bounded-degree graphs.
@article{AngluinABCEK2014, author = {Dana Angluin and James Aspnes and Rida A. Bazzi and Jiang Chen and David Eisenstat and Goran Konjevod}, title = {Effective storage capacity of labeled graphs}, month=feb, journal={Information and Computation}, year = 2014, volume=234, pages={44--56} }
James Aspnes. A modular approach to shared-memory consensus, with applications to the probabilistic-write model. Distributed Computing 25(2):179–188, May 2012. (PODC 2010 special issue.) An earlier version appeared in Twenty-Ninth Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, July 2010, pp. 460–467.
We show that consensus can be solved by an alternating sequence of adopt-commit objects (Gafni, 1998; Alistarh et al. 2009), which detect agreement, and conciliators, which ensure agreement with some probability. We observe that most known randomized consensus algorithms have this structure.
We give a deterministic implementation of an m-valued adopt-commit object for an unbounded number of processes that uses lg m + Θ(log log m) space and individual work. We also give a randomized conciliator for any number of values in the probabilistic-write model with n processes that guarantees agreement with constant probability while using one multi-writer register, O(log n) expected individual work, and Θ(n) expected total work. Combining these objects gives a consensus protocol for the probabilistic-write model that uses O(log n) individual work and O(n log m) total work. No previous protocol in this model uses sublinear individual work or linear total work for constant m.
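The modular structure can be sketched as a driver loop alternating the two object types. The toy implementations below execute each operation atomically under a lock and are meant only to illustrate the interface, not the paper's register-based constructions; this toy conciliator always agrees, whereas the paper's conciliators agree only with constant probability:

```python
import itertools
import threading

class AdoptCommit:
    """Toy adopt-commit: a process commits its value only if no
    other value has been proposed; otherwise it adopts the
    earliest proposal."""
    def __init__(self):
        self.lock = threading.Lock()
        self.seen = []                      # values in proposal order
    def propose(self, v):
        with self.lock:
            if v not in self.seen:
                self.seen.append(v)
            if self.seen == [v]:
                return ("commit", v)
            return ("adopt", self.seen[0])

class Conciliator:
    """Toy conciliator: first write wins, so every caller leaves
    with the same value."""
    def __init__(self):
        self.lock = threading.Lock()
        self.value = None
    def propose(self, v):
        with self.lock:
            if self.value is None:
                self.value = v
            return self.value

def consensus(pid, v, acs, cons, decisions):
    """Alternate adopt-commit and conciliator objects until commit."""
    for r in itertools.count():
        status, v = acs[r].propose(v)
        if status == "commit":
            decisions[pid] = v
            return
        v = cons[r].propose(v)

if __name__ == "__main__":
    acs = [AdoptCommit() for _ in range(4)]
    cons = [Conciliator() for _ in range(4)]
    decisions = {}
    threads = [threading.Thread(target=consensus,
                                args=(i, i % 3, acs, cons, decisions))
               for i in range(6)]
    for t in threads: t.start()
    for t in threads: t.join()
    assert len(set(decisions.values())) == 1
    print(decisions)
```

Because the conciliator here sends every process into the next round with the same value, the loop terminates by the second round; with a probabilistic conciliator the expected number of rounds is constant instead.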
@article{Aspnes2012modular, author = {James Aspnes}, title = {A modular approach to shared-memory consensus, with applications to the probabilistic-write model}, month=may, year = 2012, journal="Distributed Computing", volume=25, number=2, pages={179--188} }
James Aspnes, Eric Blais, Murat Demirbas, Ryan O'Donnell, Atri Rudra, and Steve Uurtamo. k^{+} decision trees. Sixth International Workshop on Algorithms for Sensor Systems, Revised Selected Papers, Lecture Notes in Computer Science 6451, Springer-Verlag, July 2010, pp. 74–88.
Consider a wireless sensor network in which each sensor has a bit of information. Suppose all sensors with the bit 1 broadcast this fact to a basestation. If zero or one sensors broadcast, the basestation can detect this fact. If two or more sensors broadcast, the basestation can only detect that there is a "collision." Although collisions may seem to be a nuisance, they can in some cases help the basestation compute an aggregate function of the sensors' data.
Motivated by this scenario, we study a new model of computation for boolean functions: the 2^{+} decision tree. This model is an augmentation of the standard decision tree model: now each internal node queries an arbitrary set of literals and branches on whether 0, 1, or at least 2 of the literals are true. This model was suggested in a work of Ben-Asher and Newman but does not seem to have been studied previously.
Our main result shows that 2^{+} decision trees can "count" rather effectively. Specifically, we show that zero-error 2^{+} decision trees can compute the threshold-of-t symmetric function with O(t) expected queries (and that Ω(t) is a lower bound even for two-sided error 2^{+} decision trees). Interestingly, this feature is not shared by 1^{+} decision trees. Our result implies that the natural generalization to k^{+} decision trees does not give much more power than 2^{+} decision trees. We also prove a lower bound of Ω̃(t log(n/t)) for the deterministic 2^{+} complexity of the threshold-of-t function, demonstrating that the randomized 2^{+} complexity can in some cases be unboundedly better than deterministic 2^{+} complexity.
Finally, we generalize the above results to arbitrary symmetric functions, and we discuss the relationship between k^{+} decision trees and other complexity notions such as decision tree rank and communication complexity.
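To make the model concrete: a 2^{+} query reports whether a queried set of positions contains zero, one, or at least two ones. A simple deterministic divide-and-conquer, recursing only where a collision is reported, counts the ones exactly; this illustrates how collisions localize ones, and is not the paper's randomized O(t) algorithm:

```python
def query_2plus(x, positions):
    """2+ query: returns 0, 1, or '2+' according to how many of
    the queried positions of x hold a 1."""
    ones = sum(x[i] for i in positions)
    return ones if ones < 2 else "2+"

def count_ones(x, positions=None):
    """Exact count of ones via divide and conquer over the
    positions, splitting only sets that report a collision."""
    if positions is None:
        positions = list(range(len(x)))
    q = query_2plus(x, positions)
    if q != "2+":
        return q
    mid = len(positions) // 2
    return count_ones(x, positions[:mid]) + count_ones(x, positions[mid:])

if __name__ == "__main__":
    x = [0, 1, 0, 0, 1, 1, 0, 1]
    print(count_ones(x))   # 4
```

On an input with k ones this uses O(k log(n/k)) queries, matching the flavor of the deterministic bound above; a 1^{+} query (which cannot distinguish one from many) gives no comparable savings.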
@inproceedings{AspnesBDORU2010, author = {James Aspnes and Eric Blais and Murat Demirbas and Ryan O'Donnell and Atri Rudra and Steve Uurtamo}, title = {$k^+$ decision trees}, month=jul, year = 2010, booktitle={Algorithms for Sensor Systems: 6th International Workshop on Algorithms for Sensor Systems, Wireless Ad Hoc Networks, and Autonomous Mobile Entities, ALGOSENSORS 2010, Bordeaux, France, July 5, 2010, Revised Selected Papers}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, volume = 6451, pages = {74--88} }
James Aspnes, David Eisenstat, and Yitong Yin. Low contention data structures. Journal of Parallel and Distributed Computing, 72(5):705–715, May 2012. An earlier version appeared in Twenty-Second ACM Symposium on Parallelism in Algorithms and Architectures, June 2010, pp. 345–354.
We consider the problem of minimizing contention in static (read-only) dictionary data structures, where contention is measured with respect to a fixed query distribution by the maximum expected number of probes to any given cell. The query distribution is known by the algorithm that constructs the data structure but not by the algorithm that queries it. Assume that the dictionary has n items. When all queries in the dictionary are equiprobable, and all queries not in the dictionary are equiprobable, we show how to construct a data structure in O(n) space where queries require O(1) probes and the contention is O(1/n). Asymptotically, all of these quantities are optimal. For arbitrary query distributions, we construct a data structure in O(n) space where each query requires O(log n / log log n) probes and the contention is O(log n / (n log log n)). The lack of knowledge of the query distribution by the query algorithm prevents perfect load leveling in this case: we present a lower bound, based on VC-dimension, showing that for a large class of algorithms and a wide range of data structure problems, achieving contention even within a polylogarithmic factor of optimal requires a cell-probe complexity of Ω(log log n).
@article{AspnesEY2012, author = {James Aspnes and David Eisenstat and Yitong Yin}, title = {Low-contention data structures}, month=may, year = 2012, journal={Journal of Parallel and Distributed Computing}, volume=72, number=5, pages = {705--715} }
James Aspnes, Hagit Attiya, and Keren Censor. Combining shared coin algorithms. Journal of Parallel and Distributed Computing 70(3):317–322, March 2010.
This paper shows that shared coin algorithms can be combined to optimize several complexity measures, even in the presence of a strong adversary. By combining shared coins of Bracha and Rachman and of Aspnes and Waarts, this yields a shared coin algorithm, and hence, a randomized consensus algorithm, with O(n log² n) individual work and O(n² log n) total work, using single-writer registers. This improves upon each of the above shared coins (where the former has a high cost for individual work, while the latter reduces it but pays in the total work), and is currently the best for this model.
Another application improves a construction of Saks, Shavit, and Woll, which combines a shared coin algorithm that takes O(1) time in failure-free executions with one that takes O(log n) time in executions where at most √n processes fail, and another that takes O(n³/(n-f)) time in any other execution, where f is the number of failed processes.
@article{AspnesAC2010, author = {James Aspnes and Hagit Attiya and Keren Censor}, title = {Combining shared coin algorithms}, month=mar, year = 2010, journal = {Journal of Parallel and Distributed Computing}, volume = 70, number = 3, pages = {317--322} }
James Aspnes, Hagit Attiya, and Keren Censor-Hillel. Polylogarithmic concurrent data structures from monotone circuits. Journal of the Association for Computing Machinery, 59(1):2:1–2:24, February 2012. An earlier version appeared in Twenty-Eighth Annual ACM Symposium on Principles of Distributed Computing, August 2009, pp. 36–45, under the title “Max registers, counters, and monotone circuits.”
A method is given for constructing a max register, a linearizable, wait-free concurrent data structure that supports a write operation and a read operation that returns the largest value previously written. For fixed m, an m-valued max register can be constructed from one-bit multi-writer multi-reader registers at a cost of at most ⌈lg m⌉ atomic register operations per write or read.
It is also shown how a max register can be used to transform any monotone circuit into a wait-free concurrent data structure that provides write operations setting the inputs to the circuit and a read operation that returns the value of the circuit on the largest input values previously supplied. The cost of a write is bounded by O(S d min(⌈lg m⌉, n)), where m is the size of the alphabet for the circuit, S is the number of gates whose value changes as the result of the write, and d is the number of inputs to each gate; the cost of a read is min(⌈lg m⌉, O(n)). While the resulting data structure is not linearizable in general, it satisfies a weaker but natural consistency condition. As an application, we obtain a simple, linearizable, wait-free counter implementation with a cost of O(min(log n log v, n)) to perform an increment and O(min(log v, n)) to perform a read, where v is the current value of the counter. For polynomially-many increments, this becomes O(log² n), an exponential improvement on the best previously known upper bounds of O(n) for exact counting and O(n^{4/5+ε}) for approximate counting.
Finally, it is shown that the upper bounds are almost optimal. We prove that min(⌈lg m⌉, n-1) is a lower bound on the worst-case complexity for any solo-terminating deterministic implementation of an m-valued bounded max register, which is exactly equal to the upper bound for m ≤ 2^{n-1}. The same bound also holds for m-valued counters. Furthermore, even in a solo-terminating randomized implementation of an n-valued max register with an oblivious adversary and global coins, there exist simple schedules containing n-1 partial write operations and one read operation in which, with high probability, the worst-case step complexity of a read operation is Ω(log n / log log n) if the write operations have polylogarithmic step complexity.
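The ⌈lg m⌉ upper bound comes from a simple recursive structure: a one-bit switch splits the value range in half, and each operation descends one level per bit. The following sequential Python sketch is ours and deliberately ignores the concurrency and linearizability argument that is the heart of the paper; it only illustrates the recursion.

```python
class MaxRegister:
    """Sequential sketch of the recursive bounded max register.
    An m-valued register is a one-bit 'switch' plus two half-sized
    registers: 'left' holds values < h and 'right' holds v - h for
    values v >= h, where h = ceil(m/2). The recursion has depth
    ceil(lg m), matching the per-operation cost in the paper.
    (Concurrency and linearizability are not modeled here.)"""

    def __init__(self, m):
        self.m = m
        if m > 1:
            self.h = (m + 1) // 2
            self.switch = 0
            self.left = MaxRegister(self.h)
            self.right = MaxRegister(m - self.h)

    def write(self, v):
        if self.m <= 1:
            return
        if v < self.h:
            # A small write is dropped once the switch is set:
            # a larger value has already been recorded.
            if self.switch == 0:
                self.left.write(v)
        else:
            self.right.write(v - self.h)
            self.switch = 1

    def read(self):
        if self.m <= 1:
            return 0
        if self.switch == 1:
            return self.h + self.right.read()
        return self.left.read()
```

For example, after writes of 3, 10, and 7 to a 16-valued register, a read descends right at the top level (switch set by the write of 10) and returns 10.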
@article{AspnesAC2012, author = {James Aspnes and Hagit Attiya and Keren Censor-Hillel}, title = {Polylogarithmic concurrent data structures from monotone circuits}, journal=jacm, month=feb, year = 2012, volume=59, number=1, pages={2:1--2:24} }
James Aspnes and Keren Censor. Approximate shared-memory counting despite a strong adversary. ACM Transactions on Algorithms 6(2):1–23, March 2010 (SODA 2009 special issue). An earlier version appeared in Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2009, pp. 441–450.
A new randomized asynchronous shared-memory data structure is given for implementing an approximate counter that can be incremented once by each of n processes in a model that allows up to n-1 crash failures. For any fixed ε, the counter achieves a relative error of δ with high probability, at the cost of O(((1/δ) log n)^{O(1/ε)}) register operations per increment and O(n^{4/5+ε}((1/δ) log n)^{O(1/ε)}) register operations per read. The counter combines randomized sampling for estimating large values with an expander for estimating small values. This is the first counter implementation that is sublinear in the number of processes and works despite a strong adversary scheduler that can observe internal states of processes.
An application of the improved counter is an improved protocol for solving randomized shared-memory consensus, which reduces the best previously known individual work complexity from O(n log n) to an optimal O(n), resolving one of the last remaining open problems concerning consensus in this model.
@article{AspnesC2010, author = {Aspnes, James and Censor, Keren}, title = {Approximate shared-memory counting despite a strong adversary}, journal = {ACM Transactions on Algorithms}, volume = {6}, number = {2}, year = {2010}, issn = {1549-6325}, pages = {1--23}, doi = {http://doi.acm.org/10.1145/1721837.1721841}, publisher = {ACM}, address = {New York, NY, USA}, }
Dana Angluin, James Aspnes, and Lev Reyzin. Optimally learning social networks with activations and suppressions. Theoretical Computer Science, 411(29–30):2729–2740, June 2010 (ALT 2008 special issue). An earlier version appeared in Nineteenth International Conference on Algorithmic Learning Theory, Lecture Notes in Computer Science 5254, Springer-Verlag, October 2008, pp. 272–286.
In this paper we consider the problem of learning hidden independent cascade social networks using exact value injection queries. These queries involve activating and suppressing agents in the target network. We develop an algorithm that optimally learns an arbitrary social network of size n using O(n²) queries, matching the information theoretic lower bound we prove for this problem. We also consider the case when the target social network forms a tree, and we show that the learning problem takes Θ(n log n) queries. We also give an approximation algorithm for finding an influential set of nodes in the network, without resorting to learning its structure. Finally, we discuss some limitations of our approach, and limitations of path-based methods, when non-exact value injection queries are used.
@article{AngluinAR2010networks, author = {Dana Angluin and James Aspnes and Lev Reyzin}, title = {Optimally learning social networks with activations and suppressions}, month=jun, year = 2010, journal = {Theoretical Computer Science}, volume = 411, number = {29--30}, pages = {2729--2740} }
James Aspnes, Hagit Attiya, and Keren Censor. Randomized consensus in expected O(n log n) individual work. In Twenty-Seventh Annual ACM Symposium on Principles of Distributed Computing, August 2008, pp. 325–334.
This paper presents a new randomized algorithm for achieving consensus among asynchronous processes that communicate by reading and writing shared registers, in the presence of a strong adversary. The fastest previously known algorithm requires a process to perform an expected O(n log² n) read and write operations in the worst case. In our algorithm, each process executes at most an expected O(n log n) read and write operations. It is shown that shared coin algorithms can be combined to yield an algorithm with O(n log n) individual work and O(n²) total work.
@inproceedings{AspnesAC2008, author = {James Aspnes and Hagit Attiya and Keren Censor}, title = {Randomized consensus in expected {$O(n \log n)$} individual work}, month=aug, year = 2008, booktitle = {PODC '08: Proceedings of the Twenty-Seventh ACM Symposium on Principles of Distributed Computing}, pages = {325--334} }
Dana Angluin, James Aspnes, Jiang Chen, David Eisenstat, and Lev Reyzin. Learning acyclic probabilistic circuits using test paths. Journal of Machine Learning Research, 10(Aug):1881–1911, 2009. An earlier version appeared in Twenty-First International Conference on Learning Theory (COLT), July 2008, pp. 169–179.
We define a model of learning probabilistic acyclic circuits using value injection queries, in which an arbitrary subset of wires is set to fixed values, and the value on the single output wire is observed. We adapt the approach of using test paths from the Circuit Builder algorithm (Angluin et al., STOC 2006) to show that there is a polynomial time algorithm that uses value injection queries to learn Boolean probabilistic circuits of constant fan-in and logarithmic depth. In the process, we discover that test paths fail utterly for circuits over alphabets of size greater than two and establish upper and lower bounds on the attenuation factor of test paths versus general experiments for general and transitively reduced Boolean probabilistic circuits. To overcome the limitations of test paths for non-Boolean alphabets, we introduce function injection queries, which allow the symbols on a wire to be mapped to other symbols rather than just to themselves or constants.
@article{AngluinACER2009, author = {Dana Angluin and James Aspnes and Jiang Chen and David Eisenstat and Lev Reyzin}, title = {Learning acyclic probabilistic circuits using test paths}, month=aug, year = 2009, journal = {Journal of Machine Learning Research}, pages = {1881--1911}, volume = 10, number = {Aug} }
James Aspnes, Muli Safra, and Yitong Yin. Ranged hash functions and the price of churn. Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2008, pp. 1066–1075.
Ranged hash functions generalize hash tables to the setting where hash buckets may come and go over time, a typical case in distributed settings where hash buckets may correspond to unreliable servers or network connections. Monotone ranged hash functions are a particular class of ranged hash functions that minimize item reassignments in response to churn: changes in the set of available buckets. The canonical example of a monotone ranged hash function is the ring-based consistent hashing mechanism of Karger et al. (1997). These hash functions give a maximum load of Θ((n/m) log m) when n is the number of items and m is the number of buckets. The question of whether some better bound could be obtained using a more sophisticated hash function has remained open. We resolve this question by showing two lower bounds. First, the maximum load of any randomized monotone ranged hash function is Ω(√((n/m) log m)) when n=o(m log m). This bound covers almost all of the nontrivial case, because when n = Ω(m log m), simple random assignment matches the trivial lower bound of Ω(n/m). We give a matching (though impractical) upper bound that shows that our lower bound is tight over almost all of its range. Second, for randomized monotone ranged hash functions derived from metric spaces, there is a further trade-off between the expansion factor of the metric and the load balance, which for the special case of growth-restricted metrics gives a bound of Ω((n/m) log m), asymptotically equal to that of consistent hashing. These are the first known non-trivial lower bounds for ranged hash functions. They also explain why in ten years no better ranged hash functions have arisen to replace consistent hashing.
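For concreteness, the ring-based scheme of Karger et al. is easy to sketch; the hash function and server names below are illustrative, not from the paper. Monotonicity is visible directly: removing a bucket reassigns only the items that bucket held.

```python
import hashlib
from bisect import bisect_right

def _h(s):
    """Hash a string to a point on the unit ring [0, 1)."""
    d = hashlib.sha256(s.encode()).digest()
    return int.from_bytes(d[:8], 'big') / 2**64

def assign(item, buckets):
    """Ring-based consistent hashing: an item is assigned to the first
    bucket clockwise from its hash point (wrapping around the ring).
    For a sketch we rebuild the sorted ring on every call."""
    ring = sorted((_h(b), b) for b in buckets)
    points = [p for p, _ in ring]
    i = bisect_right(points, _h(item)) % len(ring)
    return ring[i][1]

buckets = {'server-a', 'server-b', 'server-c'}
before = {x: assign(x, buckets) for x in map(str, range(100))}
after = {x: assign(x, buckets - {'server-b'}) for x in map(str, range(100))}
moved = [x for x in before if before[x] != after[x]]
assert all(before[x] == 'server-b' for x in moved)  # monotone: only b's items move
```

The final assertion is the monotonicity property discussed above: churn in the bucket set forces reassignment only of the items whose bucket disappeared.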
@inproceedings{AspnesSY2008, author = {James Aspnes and Muli Safra and Yitong Yin}, title = {Ranged hash functions and the price of churn}, month = jan, year = {2008}, booktitle="Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms", pages = {1066--1075} }
James Aspnes and Yinghua Wu. O(log n)-time overlay network construction from graphs with out-degree 1. Principles of Distributed Systems; 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17–20, 2007, Proceedings. Lecture Notes in Computer Science 4878. Springer-Verlag, December 2007, pp. 286–300.
A fast self-stabilizing algorithm is described to rapidly construct a balanced overlay network from a directed graph initially with out-degree 1, a natural starting case that arises in peer-to-peer systems where each node attempts to join by contacting some single other node. This algorithm constructs a balanced search tree in time O(W+log n), improving by a factor of log n on the previous bound starting from a general graph, while retaining the properties of low contention and short messages. Our construction includes an improved version of the distributed Patricia tree structure of (Angluin et al., 2005) that we call a double-headed radix tree. This data structure responds gracefully to node failures and supports search, predecessor, and successor operations in O(W) time with smoothly distributed load for predecessor and successor operations. Though the resulting tree data structure is highly vulnerable to disconnection due to failures, the fast predecessor and successor operations (as shown in previous work) can be used to quickly construct standard overlay networks with more redundancy.
@inproceedings{AspnesW2007, author = {James Aspnes and Yinghua Wu}, title = {{$O(\log n)$}-time overlay network construction from graphs with out-degree {$1$}}, month = dec, year = 2007, booktitle = {Principles of Distributed Systems; 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17--20, 2007. Proceedings}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, volume = 4878, pages={286--300}}
James Aspnes, Navin Rustagi, and Jared Saia. Worm versus alert: Who wins in a battle for control of a large-scale network? Principles of Distributed Systems; 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17–20, 2007, Proceedings. Lecture Notes in Computer Science 4878. Springer-Verlag, December 2007, pp. 443–456.
Consider the following game between a worm and an alert over a network of n nodes. Initially, no nodes are infected or alerted and each node in the network is a special detector node independently with small but constant probability. The game starts with a single node becoming infected. In every round thereafter, every infected node sends out a constant number of worms to other nodes in the population, and every alerted node sends out a constant number of alerts. Nodes in the network change state according to the following three rules: 1) If a worm is received by a node that is not a detector and is not alerted, that node becomes infected; 2) If a worm is received by a node that is a detector, that node becomes alerted; 3) If an alert is received by a node that is not infected, that node becomes alerted.
We make two assumptions about this game. First, that an infected node can send worm messages to any other node in the network but, in contrast, an alerted node can send alert messages only through a previously determined, constant degree overlay network. Second, we assume that the infected nodes are intelligent, coordinated and essentially omniscient. In other words, the infected nodes know everything except for which nodes are detectors and the alerted nodes' random coin flips i.e. they know the topology of the overlay network used by the alerts; which nodes are alerted and which are infected at any time; where alerts and worms are being sent; the overall strategy used by the alerted nodes; etc. The alerted nodes are assumed to know nothing about which other nodes are infected or alerted, where alerts or worms are being sent, or the strategy used by the infected nodes.
Is there a strategy for the alerted nodes that ensures only a vanishingly small fraction of the nodes become infected, no matter what strategy is used by the infected nodes? Surprisingly, the answer is yes. In particular, we prove that a simple strategy achieves this result with probability approaching 1, provided that the overlay network has good node expansion. Specifically, this result holds if d ≥ α and α/(β(1-γ)) > 2d/c, where α and β represent the rates of spread of the alert and worm respectively; γ is the probability that a node is a detector node; d is the degree of the overlay network; and c is the node expansion of the overlay network. Next, we give empirical results that suggest that our algorithms for the alert may be useful in current large-scale networks. Finally, we show that if the overlay network has poor expansion, in particular if (1-γ)β > d, then the worm will likely infect almost all of the non-detector nodes.
@inproceedings{AspnesRS2007, author = {James Aspnes and Navin Rustagi and Jared Saia}, title = {Worm Versus Alert: Who Wins in a Battle for Control of a Large-Scale Network?}, month = dec, year = 2007, booktitle = {Principles of Distributed Systems; 11th International Conference, OPODIS 2007, Guadeloupe, French West Indies, December 17--20, 2007. Proceedings}, publisher = {Springer-Verlag}, series = {Lecture Notes in Computer Science}, volume = 4878, pages={443--456}}
Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computational power of population protocols. Distributed Computing 20(4):279–304, November 2007. (PODC 2006 special issue.) Incorporates material previously appearing in OPODIS 2005 and PODC 2006. Available as arXiv:cs.CC/0608084.
We consider the model of population protocols introduced by Angluin et al., in which anonymous finite-state agents stably compute a predicate of the multiset of their inputs via two-way interactions in the all-pairs family of communication networks. We prove that all predicates stably computable in this model (and certain generalizations of it) are semilinear, answering a central open question about the power of the model. Removing the assumption of two-way interaction, we also consider several variants of the model in which agents communicate by anonymous message-passing where the recipient of each message is chosen by an adversary and the sender is not identified to the recipient. These one-way models are distinguished by whether messages are delivered immediately or after a delay, whether a sender can record that it has sent a message, and whether a recipient can queue incoming messages, refusing to accept new messages until it has had a chance to send out messages of its own. We characterize the classes of predicates stably computable in each of these one-way models using natural subclasses of the semilinear predicates.
@article{AngluinAER2007, title={The computational power of population protocols}, author={Dana Angluin and James Aspnes and David Eisenstat and Eric Ruppert}, journal={Distributed Computing}, volume=20, number=4, pages={279--304}, month=nov, year=2007}
James Aspnes and Eric Ruppert. An introduction to population protocols. Bulletin of the European Association for Theoretical Computer Science, Distributed Computing Column, 93:98–117, October 2007. An updated and extended version appears in Middleware for Network Eccentric and Mobile Applications, Benoît Garbinato, Hugo Miranda, and Luís Rodrigues, eds., Springer-Verlag, 2009, pp. 97–120.
The population protocol model describes a collection of tiny mobile agents that interact with one another to carry out a computation. The agents are identically programmed finite state machines. Interactions between pairs of agents cause the two agents to update their states. These interactions are scheduled by an adversary, subject to a fairness constraint. Input values are initially distributed to the agents, and the agents must eventually converge to the correct output value. This framework can be used to model mobile ad hoc networks of tiny devices or collections of molecules undergoing chemical reactions. We survey results that describe what can be computed in various versions of the population protocol model.
@article{AspnesR2007, author = {James Aspnes and Eric Ruppert}, title = {An introduction to population protocols}, journal = {Bulletin of the European Association for Theoretical Computer Science}, volume=93, pages={98--117}, month=oct, year = 2007} @incollection{AspnesR2009, author = {James Aspnes and Eric Ruppert}, title = {An introduction to population protocols}, booktitle = {Middleware for Network Eccentric and Mobile Applications}, editor = {Beno\^it Garbinato and Hugo Miranda and Lu\'is Rodrigues}, pages={97--120}, publisher = {Springer-Verlag}, year = 2009}
Dana Angluin, James Aspnes, and David Eisenstat. A simple population protocol for fast robust approximate majority. Distributed Computing 21(2):87–102, July 2008 (DISC 2007 special issue). An earlier version appeared in Distributed Computing, 21st International Symposium, DISC 2007, Lemesos, Cyprus, September 24–26, 2007, Proceedings, pp. 20–32.
We describe and analyze a 3-state one-way population protocol for approximate majority in the model in which pairs of agents are drawn uniformly at random to interact. Given an initial configuration of x's, y's and blanks that contains at least one non-blank, the goal is for the agents to reach consensus on one of the values x or y. Additionally, the value chosen should be the majority non-blank initial value, provided it exceeds the minority by a sufficient margin. We prove that with high probability the agents reach consensus in time O(log n) and the value chosen is the majority provided that its initial margin is at least ω(√n log n). This protocol has the additional property of tolerating Byzantine behavior in o(√n) of the agents, making it the first known population protocol that tolerates Byzantine agents.
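The three states and transition rules are simple enough to simulate directly. Below is a toy Python simulation (ours, under the convention that only the responder updates; the parameters and interaction budget are illustrative), which exhibits fast convergence on a clear majority:

```python
import random

def approximate_majority(pop, rng, max_interactions=10**6):
    """Toy simulation of the 3-state protocol; states are 'x', 'y', 'b'.
    One-way rules (initiator u, responder v; only v changes state):
      u, v opposite non-blanks -> v becomes blank 'b'
      u non-blank, v blank     -> v adopts u's value
    Returns (consensus value, interactions used), or (None, budget)
    if the interaction budget runs out first."""
    pop = list(pop)
    n = len(pop)
    cnt = {s: pop.count(s) for s in 'xyb'}
    for t in range(max_interactions):
        i, j = rng.randrange(n), rng.randrange(n)
        if i == j:
            continue
        u, v = pop[i], pop[j]
        new = v
        if u != 'b' and v != 'b' and u != v:
            new = 'b'
        elif u != 'b' and v == 'b':
            new = u
        if new != v:
            cnt[v] -= 1
            cnt[new] += 1
            pop[j] = new
        if cnt['x'] == n or cnt['y'] == n:
            return pop[j], t + 1
    return None, max_interactions

rng = random.Random(1)
winner, steps = approximate_majority(['x'] * 700 + ['y'] * 300, rng)
```

With a 700-to-300 split the margin comfortably exceeds the ω(√n log n) threshold, so the majority value x should win in roughly O(n log n) interactions.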
@article{AngluinAE2008majority, author = {Dana Angluin and James Aspnes and David Eisenstat}, title = {A simple population protocol for fast robust approximate majority}, journal = {Distributed Computing}, volume=21, number=2, month = jul, year = 2008, pages= {87--102}, }
Dana Angluin, James Aspnes, Jiang Chen, and Lev Reyzin. Learning large-alphabet and analog circuits with value injection queries. Machine Learning 72(1–2):113–138, August 2008 (COLT 2007 special issue). An earlier version appeared in Twentieth Annual Conference on Learning Theory, June 2007, pp. 51–65. Winner, Best Student Paper award.
We consider the problem of learning an acyclic discrete circuit with n wires, fan-in bounded by k, and alphabet size s using value injection queries. For the class of transitively reduced circuits, we develop the Distinguishing Paths Algorithm, which learns such a circuit using (ns)^{O(k)} value injection queries and time polynomial in the number of queries. We describe a generalization of the algorithm to the class of circuits with shortcut width bounded by b that uses (ns)^{O(k+b)} value injection queries. Both algorithms use value injection queries that fix only O(kd) wires, where d is the depth of the target circuit. We give a reduction showing that without such restrictions on the topology of the circuit, the learning problem may be computationally intractable when s = n^{Θ(1)}, even for circuits of depth O(log n). We apply our large-alphabet learning algorithms to the problem of approximate learning of analog circuits and show that analog circuits with constant bounded fan-in, logarithmic depth, and constant bounded shortcut width whose gate functions satisfy a Lipschitz condition are approximately learnable in polynomial time using value injection queries. Finally, we consider models in which behavioral equivalence queries are also available, and extend and improve the learning algorithms of (Angluin et al., 2006) to handle general classes of gate functions that are polynomial-time learnable from counterexamples.
@article{AngluinACR2008, author = {Dana Angluin and James Aspnes and Jiang Chen and Lev Reyzin}, title = {Learning large-alphabet and analog circuits with value injection queries}, journal = {Machine Learning}, volume = {72}, number = {1-2}, month = aug, year = {2008}, pages = {113-138}, }
James Aspnes, Yang Richard Yang, and Yitong Yin. Path-independent load balancing with unreliable machines. Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2007, pp. 814–823. Available as arXiv:cs.DS/0607026.
We consider algorithms for load balancing on unreliable machines. The objective is to optimize two criteria: minimizing the makespan and minimizing job reassignments in response to machine failures. We assume that the set of jobs is known in advance but that the pattern of machine failures is unpredictable. Motivated by the requirements of BGP routing, we consider path-independent algorithms, with the property that the job assignment is completely determined by the subset of available machines and not the previous history of assignments. We first examine the question of performance measurement of path-independent load-balancing algorithms, giving the measure of makespan and the normalized measure of reassignment cost. We then describe two classes of algorithms for optimizing these measures against an oblivious adversary for identical machines. The first, based on independent random assignments, gives expected reassignment costs within a factor of 2 of optimal and gives a makespan within a factor of O(log m/log log m) of optimal with high probability, for unknown job sizes. The second, in which jobs are first grouped into bins and at most one bin is assigned to each machine, gives constant-factor ratios on both reassignment cost and makespan, for known job sizes. Several open problems are discussed.
@inproceedings{AspnesYY2007, author = {James Aspnes and Yang Richard Yang and Yitong Yin}, title = {Path-independent load balancing with unreliable machines}, month = jan, year = {2007}, booktitle="Eighteenth Annual ACM-SIAM Symposium on Discrete Algorithms", pages = {814--823} }
J. Aspnes, T. Eren, D. K. Goldenberg, A. S. Morse, W. Whiteley, Y. R. Yang, B. D. O. Anderson, and P. N. Belhumeur. A theory of network localization. IEEE Transactions on Mobile Computing, 5(12):1663–1678, December 2006.
In this paper we provide a theoretical foundation for the problem of network localization in which some nodes know their locations and other nodes determine their locations by measuring the distances to their neighbors. We construct grounded graphs to model network localization and apply graph rigidity theory to test the conditions for unique localizability and to construct uniquely localizable networks. We further study the computational complexity of network localization and investigate a subclass of grounded graphs where localization can be computed efficiently. We conclude with a discussion of localization in sensor networks where the sensors are placed randomly.
@article{AspnesEGMWYAB2006, title={A theory of network localization}, author={J. Aspnes and T. Eren and D. K. Goldenberg and A. S. Morse and W. Whiteley and Y. R. Yang and B. D. O. Anderson and P. N. Belhumeur}, journal={IEEE Transactions on Mobile Computing}, volume=5, number=12, pages={1663--1678}, month=dec, year=2006}
James Aspnes, Costas Busch, Shlomi Dolev, Panagiota Fatourou, Chryssis Georgiou, Alex Shvartsman, Paul Spirakis, and Roger Wattenhofer. Eight open problems in distributed computing. Bulletin of the European Association for Theoretical Computer Science, Distributed Computing Column, 90:109–126, October 2006.
Distributed Computing Theory continues to be one of the most active research fields in Theoretical Computer Science today. Besides its foundational topics (such as consensus and synchronization), it is currently being enriched with many new topics inspired by modern technological advances (e.g., the Internet). In this note, we present eight open problems in Distributed Computing Theory that span a wide range of topics, both classical and modern.
@article{AspnesBD+2006, author = {James Aspnes and Costas Busch and Shlomi Dolev and Panagiota Fatourou and Chryssis Georgiou and Alex Shvartsman and Paul Spirakis and Roger Wattenhofer}, title = {Eight open problems in distributed computing}, journal = {Bulletin of the European Association for Theoretical Computer Science}, volume = 90, pages = {109--126}, month = oct, year = 2006}
Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by population protocols with a leader. Distributed Computing 21(3):183–199, September 2008 (DISC 2006 special issue). An earlier version appeared in Distributed Computing, 20th International Symposium, DISC 2006, pp. 61–75. Available as YALEU/DCS/TR-1332, May 2006.
Fast algorithms are presented for performing computations in a probabilistic population model. This is a variant of the standard population protocol model (in which finite-state agents interact in pairs under the control of an adversary scheduler) where all pairs are equally likely to be chosen for each interaction. It is shown that, when a unique leader agent is provided in the initial population, the population can with high probability simulate a virtual register machine in which standard arithmetic operations (comparison, addition, subtraction, and multiplication or division by constants) can be simulated in O(n log^{5} n) interactions using a simple register representation, or in O(n log² n) interactions using a more sophisticated representation that requires an extra O(n log^{O(1)} n)-interaction initialization step. The central method is the extensive use of epidemics to propagate information from and to the leader, combined with an epidemic-based phase clock used to detect when these epidemics are likely to be complete. Applications include a reduction of the cost of computing a semilinear predicate to O(n log^{5} n) interactions from the previously best-known bound of O(n² log n) interactions, and simulation of a LOGSPACE Turing machine using O(n log² n) interactions per step after an initial O(n log^{O(1)} n)-interaction startup phase. These bounds on interactions translate into polylogarithmic time per step in a natural parallel model in which each agent participates in an expected Θ(1) interactions per time unit. Open problems are discussed, together with simulation results that suggest the possibility of removing the initial-leader assumption.
@article(AngluinAE2008fast, title="Fast computation by population protocols with a leader", author="Dana Angluin and James Aspnes and David Eisenstat", journal="Distributed Computing", volume=21, number=3, month=sep, year=2008, pages={183--199} )
Dana Angluin, James Aspnes, and David Eisenstat. Stably computable predicates are semilinear. Twenty-Fifth ACM Symposium on Principles of Distributed Computing, July 2006, pp. 292–299.
We consider the model of population protocols introduced by Angluin et al., in which anonymous finite-state agents stably compute a predicate of their inputs via two-way interactions in the all-pairs family of communication networks. We prove that all predicates stably computable in this model (and certain generalizations of it) are semilinear, answering a central open question about the power of the model.
@inproceedings{AngluinAE2006semilinear, author = {Dana Angluin and James Aspnes and David Eisenstat}, title = {Stably computable predicates are semilinear}, booktitle = {PODC '06: Proceedings of the twenty-fifth annual ACM symposium on Principles of distributed computing}, year = {2006}, isbn = {1-59593-384-0}, pages = {292--299}, location = {Denver, Colorado, USA}, doi = {10.1145/1146381.1146425}, publisher = {ACM Press}, address = {New York, NY, USA}, }
Dana Angluin, James Aspnes, Jiang Chen, and Yinghua Wu. Learning a circuit by injecting values. Journal of Computer and System Sciences 75(1):60–77, January 2009. (Special Issue: Learning Theory 2006.) An earlier version appeared in Thirty-Eighth Annual ACM Symposium on Theory of Computing, May 2006, pp. 584–593.
We propose a new model for exact learning of acyclic circuits using experiments in which chosen values may be assigned to an arbitrary subset of wires internal to the circuit, but only the value of the circuit's single output wire may be observed. We give polynomial-time algorithms to learn (1) arbitrary circuits with logarithmic depth and constant fan-in and (2) Boolean circuits of constant depth and unbounded fan-in over AND, OR, and NOT gates. Thus, both AC⁰ and NC¹ circuits are learnable in polynomial time in this model. Negative results show that some restrictions on depth, fan-in, and gate types are necessary: exponentially many experiments are required to learn AND/OR circuits of unbounded depth and fan-in; it is NP-hard to learn AND/OR circuits of unbounded depth and fan-in 2; and it is NP-hard to learn circuits of bounded depth and unbounded fan-in over AND, OR, and threshold gates, even when the target circuit is known to contain at most one threshold gate and that threshold gate has threshold 2. We also consider the effect of adding an oracle for behavioral equivalence. In this case there are polynomial-time algorithms to learn arbitrary circuits of constant fan-in and unbounded depth and to learn Boolean circuits with arbitrary fan-in and unbounded depth over AND, OR, and NOT gates. A corollary is that these two classes are PAC-learnable if experiments are available.
@article{AngluinACW2009, author = {Dana Angluin and James Aspnes and Jiang Chen and Yinghua Wu}, title = {Learning a circuit by injecting values}, journal = {Journal of Computer and System Sciences}, month = jan, year = {2009}, pages = {60--77}, volume = 75, number = 1, }
Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. On the power of anonymous one-way communication. Principles of Distributed Systems; 9th International Conference, OPODIS 2005; Pisa, Italy; December 2005; Revised Selected Papers, December 2005, pp. 396–411.
We consider a network of anonymous processes communicating via anonymous message-passing, where the recipient of each message is chosen by an adversary and the sender is not identified to the recipient. Even with unbounded message sizes and process states, such a system can compute only limited predicates on inputs held by the processes. In the finite-state case, we show how the exact strength of the model depends critically on design choices that are irrelevant in the unbounded-state case, such as whether messages are delivered immediately or after a delay, whether a sender can record that it has sent a message, and whether a recipient can queue incoming messages, refusing to accept new messages until it has had a chance to send out messages of its own. These results may have implications for the design of distributed systems where processor power is severely limited, as in sensor networks.
@inproceedings(AngluinAER2005, title="On the power of anonymous one-way communication", author="Dana Angluin and James Aspnes and David Eisenstat and Eric Ruppert", booktitle="Principles of Distributed Systems; 9th International Conference, OPODIS 2005; Pisa, Italy; December 2005; Revised Selected Papers", series="Lecture Notes in Computer Science", volume=3974, month=dec, year=2005, pages={396--411} )
Dana Angluin, James Aspnes, Michael J. Fischer, and Hong Jiang. Self-stabilizing population protocols. ACM Transactions on Autonomous and Adaptive Systems 3(4):13, November 2008. (Special issue on stabilization, safety, and security of distributed systems.) An earlier version appeared in Principles of Distributed Systems; 9th International Conference, OPODIS 2005; Pisa, Italy; December 2005; Revised Selected Papers, December 2005, pp. 103–117.
This paper studies self-stabilization in networks of anonymous, asynchronously interacting agents where the size of the network is unknown. Dijkstra-style round-robin token circulation can be done deterministically with constant space per node in this model. Constant-space protocols are given for leader election in rings, local-addressing in degree-bounded graphs, and establishing consistent global direction in an undirected ring. A protocol to construct a spanning tree in regular graphs using O(log D) memory is also given, where D is the diameter of the graph. A general method for eliminating nondeterministic transitions from the self-stabilizing implementation of a large family of behaviors is used to simplify the constructions, and general conditions under which protocol composition preserves behavior are used in proving their correctness.
@article(AngluinAFJ2008, title="Self-stabilizing population protocols", author="Dana Angluin and James Aspnes and Michael J. Fischer and Hong Jiang", journal="ACM Transactions on Autonomous and Adaptive Systems", volume=3, number=4, pages={13}, month=nov, year=2008, )
Ittai Abraham, James Aspnes, and Jian Yuan. Skip B-trees. Principles of Distributed Systems; 9th International Conference, OPODIS 2005; Pisa, Italy; December 2005; Revised Selected Papers, December 2005, pp. 366–380.
We describe a new data structure, the skip B-tree, that combines the advantages of skip graphs with features of traditional B-trees. A skip B-tree provides efficient search, insertion, and deletion operations. The data structure is highly fault tolerant, even to adversarial failures, and allows for particularly simple repair mechanisms. Related resource keys are kept in blocks near each other, enabling efficient range queries. Using this data structure, we describe a new distributed peer-to-peer network, the distributed skip B-tree. Given m data items stored in a system with n nodes, the network supports a range search operation for r consecutive keys at a cost of only O(log_{b} m + r/b), where b = Θ(m/n). In addition, our distributed skip B-tree search network has provable polylogarithmic costs for all its other basic operations, such as insert, delete, and node join. To the best of our knowledge, all previous distributed search networks either provide a range search operation whose cost is worse than ours or may require a linear cost for some basic operation such as insert, delete, or node join.
@inproceedings(AbrahamAY2005, title="Skip {B}-trees", author="Ittai Abraham and James Aspnes and Jian Yuan", booktitle="Principles of Distributed Systems; 9th International Conference, OPODIS 2005; Pisa, Italy; December 2005; Revised Selected Papers", series="Lecture Notes in Computer Science", volume=3974, month=dec, year=2005, pages={366--380} )
James Aspnes, Zoë Diamadi, Kristian Gjøsteen, René Peralta, and Aleksandr Yampolskiy. Spreading alerts quietly and the subgroup escape problem. Journal of Cryptology 28(4):796–819, October 2015. An earlier version appeared in Advances in Cryptology — ASIACRYPT 2005: 11th International Conference on the Theory and Application of Cryptology and Information Security, Chennai, India, December 4–8, 2005. Proceedings. Lecture Notes in Computer Science 3788, Springer-Verlag, December 2005, pp. 253–272. Available as YALEU/DCS/TR-1326, December 2005.
We introduce a new cryptographic primitive called the blind coupon mechanism (BCM). In effect, the BCM is an authenticated bit-commitment, which is AND-homomorphic. We show that the BCM has natural and important applications. In particular, we use it to construct a mechanism for transmitting alerts undetectably in a message-passing system of n nodes. Our algorithms allow an alert to quickly propagate to all nodes without its source or existence being detected by an adversary, who controls all message traffic. Our proofs of security are based on a new subgroup escape problem, which seems hard on certain groups with bilinear pairings and on elliptic curves over the ring ℤ_{n}.
@article(AspnesDGPY2015, title="Spreading alerts quietly and the subgroup escape problem", author="James Aspnes and Zo{\"e} Diamadi and Kristian Gj\o{}steen and Ren\'e Peralta and Aleksandr Yampolskiy", journal="Journal of Cryptology", volume="28", number="4", month=oct, year=2015, pages={796--819} )
James Aspnes and Gauri Shah. Distributed data structures for P2P systems. Theoretical and Algorithmic Aspects of Sensor, Ad Hoc Wireless and Peer-to-Peer Networks, Jie Wu, editor, CRC Press, 2005, pages 685–700.
This article describes some recent distributed data structures that have been proposed to implement peer-to-peer systems for resource sharing. We first give a broad overview of Distributed Hash Tables (DHTs) which form the basic building blocks of many peer-to-peer systems, and describe some of the features of the more popular DHT implementations. We also briefly describe some of the other organized data structures that have been proposed to build peer-to-peer systems. We then focus on describing skip graphs, which overcome some limitations of DHTs such as lack of spatial locality and lack of support for range queries.
@incollection(AspnesS2005, title="Distributed data structures for {P2P} systems", author="James Aspnes and Gauri Shah", booktitle="Theoretical and Algorithmic Aspects of Sensor, Ad Hoc Wireless and Peer-to-Peer Networks", editor="Jie Wu", publisher="CRC Press", month=aug, pages={685--700}, year=2005 )
James Aspnes, Collin Jackson, and Arvind Krishnamurthy. Exposing computationally-challenged Byzantine impostors. Available as YALEU/DCS/TR-1332, July 2005.
Internet protocols permit a single machine to masquerade as many, allowing an adversary to appear to control more nodes than it actually does. The possibility of such Sybil attacks has been taken to mean that distributed algorithms that tolerate only a fixed fraction of faulty nodes are not useful in peer-to-peer systems unless identities can be verified externally. The present work argues against this assumption, by presenting practical algorithms for the central distributed computing problem of Byzantine agreement that defend against Sybil attacks by using moderately hard puzzles as a pricing scheme for identities. Though our algorithms do not prevent Sybil attacks entirely, they solve Byzantine agreement (and some useful variants) when the limited fraction of nodes that can fail is replaced by a limited fraction of the total computational power. These results suggest that Byzantine agreement and similar tools from the distributed computing literature are likely to help solve the problem of adversarial behavior by components of peer-to-peer systems.
@techreport(AspnesJK2005, title="Exposing computationally-challenged {B}yzantine impostors", author="James Aspnes and Collin Jackson and Arvind Krishnamurthy", institution="Yale University Department of Computer Science", number="YALEU/DCS/TR-1332", month=jul, year=2005 )
Dana Angluin, James Aspnes, Jiang Chen, Yinghua Wu, and Yitong Yin. Fast construction of overlay networks. Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, July 2005, pages 145–154.
An asynchronous algorithm is described for rapidly constructing an overlay network in a peer-to-peer system where all nodes can in principle communicate with each other directly through an underlying network, but each participating node initially has pointers to only a handful of other participants. The output of the mechanism is a linked list of all participants sorted by their identifiers, which can be used as a foundation for building various linear overlay networks such as Chord or skip graphs. Assuming the initial pointer graph is weakly-connected with maximum degree d and the length of a node identifier is W, the mechanism constructs a binary search tree of nodes of depth O(W) in expected O(W log n) time using expected O((d+W)n log n) messages of size O(W) each. Furthermore, the algorithm has low contention: at any time there are only O(d) undelivered messages for any given recipient. A lower bound of Ω(d + log n) is given for the running time of any procedure in a related synchronous model that yields a sorted list from a degree-d weakly-connected graph of n nodes. We conjecture that this lower bound is tight and could be attained by further improvements to our algorithms.
@inproceedings(AngluinACWY2005, title="Fast construction of overlay networks", author="Dana Angluin and James Aspnes and Jiang Chen and Yinghua Wu and Yitong Yin", booktitle="Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures", month=jul, year=2005, pages={145--154} )
James Aspnes and Udi Wieder. The expansion and mixing time of skip graphs with applications. Distributed Computing 21(6):385–393, March 2009. An earlier version appeared in Seventeenth Annual ACM Symposium on Parallelism in Algorithms and Architectures, July 2005, pages 126–134.
We prove that with high probability a skip graph contains a 4-regular expander as a subgraph, and estimate the quality of the expansion via simulations. As a consequence, skip graphs contain a large connected component even after an adversarial deletion of nodes. We show how the expansion property can be used to sample a node in the skip graph highly efficiently, and to load-balance the skip graph quickly. Finally, we show that the skip graph can serve as an unstructured P2P system, making it a good candidate for a hybrid P2P system.
@article(AspnesW2009, title="The expansion and mixing time of skip graphs with applications", author="James Aspnes and Udi Wieder", journal="Distributed Computing", volume=21, number=6, month=mar, year=2009, pages={385--393} )
Dana Angluin, James Aspnes, Melody Chan, Michael J. Fischer, Hong Jiang, and René Peralta. Stably computable properties of network graphs. Distributed Computing in Sensor Systems: First IEEE International Conference, DCOSS 2005, Marina del Rey, CA, USA, June/July 2005, Proceedings, Lecture Notes in Computer Science, volume 3560, Springer-Verlag, June 2005, pp. 63–74.
We consider a scenario in which anonymous, finite-state sensing devices are deployed in an ad-hoc communication network of arbitrary size and unknown topology, and explore what properties of the network graph can be stably computed by the devices. We show that they can detect whether the network has degree bounded by a constant d, and, if so, organize a computation that achieves asymptotically optimal linear memory use. We define a model of stabilizing inputs to such devices and show that a large class of predicates of the multiset of final input values are stably computable in any weakly-connected network. We also show that nondeterminism in the transition function does not increase the class of stably computable predicates.
@inproceedings(AngluinACFJP2005, title="Stably computable properties of network graphs", author="Dana Angluin and James Aspnes and Melody Chan and Michael J. Fischer and Hong Jiang and Ren\'e Peralta", booktitle="Distributed Computing in Sensor Systems: First IEEE International Conference, DCOSS 2005, Marina del Rey, CA, USA, June/July 2005, Proceedings", editor="Viktor K. Prasanna and Sitharama Iyengar and Paul Spirakis and Matt Welsh", series="Lecture Notes in Computer Science", publisher="Springer-Verlag", pages = {63--74}, volume="3560", month=jun, year=2005 )
James Aspnes, Kevin Chang, and Aleksandr Yampolskiy. Inoculation strategies for victims of viruses and the sum-of-squares partition problem. Journal of Computer and System Sciences, 72(6):1077–1093, September 2006. An earlier version appeared in Sixteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2005, pp. 43–52. Available as YALEU/DCS/TR-1295, July 2004.
We propose a simple game for modeling containment of the spread of viruses in a graph of n nodes. Each node must choose to either install anti-virus software at some known cost C, or risk infection and a loss L if a virus that starts at a random initial point in the graph can reach it without being stopped by some intermediate node. The goal of individual nodes is to minimize their individual expected cost. We prove many game theoretic properties of the model, including an easily-applied characterization of Nash equilibria, culminating in our showing that allowing selfish users to choose Nash equilibrium strategies is highly undesirable, because the price of anarchy is an unacceptable Θ(n) in the worst case. This shows in particular that a centralized solution can give a much better total cost than an equilibrium solution. Though it is NP-hard to compute such a social optimum, we show that the problem can be reduced to a previously unconsidered combinatorial problem that we call the sum-of-squares partition problem. Using a greedy algorithm based on sparse cuts, we show that this problem can be approximated to within a factor of O(log² n), giving the same approximation ratio for the inoculation game.
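To make the game concrete, here is a small sketch of ours, not from the paper (the function name `expected_costs` and its union-find helper are illustrative): it computes each node's expected cost given an installation set. An installed node pays C; an insecure node pays L times the probability that a virus starting at a uniformly random node reaches it, which is L·(size of its insecure component)/n, since the virus spreads only through insecure nodes.

```python
def expected_costs(n, edges, secure, C, L):
    """Expected cost of each node in the inoculation game:
    C if the node installs anti-virus software, otherwise
    L * (size of its insecure connected component) / n."""
    # Union-find restricted to edges between two insecure nodes.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        if u not in secure and v not in secure:
            parent[find(u)] = find(v)
    # Size of each insecure component.
    size = {}
    for v in range(n):
        if v not in secure:
            r = find(v)
            size[r] = size.get(r, 0) + 1
    return [C if v in secure else L * size[find(v)] / n for v in range(n)]
```

On a 5-node path with only the middle node installing, each insecure node sits in a component of size 2, so its expected cost is L·2/5; a node is playing a best response exactly when it cannot lower this cost by switching strategies, which recovers the paper's equilibrium characterization.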
@article(AspnesCY2006, title="Inoculation strategies for victims of viruses and the sum-of-squares partition problem", author="James Aspnes and Kevin Chang and Aleksandr Yampolskiy", journal="Journal of Computer and System Sciences", volume=72, number=6, pages={1077--1093}, month=sep, year=2006, )
James Aspnes, Faith Ellen Fich, and Eric Ruppert. Relationships between broadcast and shared memory in reliable anonymous distributed systems. Distributed Computing 18(3):209–219, February 2006. (DISC 2004 special issue.) An earlier version appeared in Proceedings of the Eighteenth International Symposium on Distributed Computing (DISC 2004), October 2004, pp. 260–274.
We study the power of reliable anonymous distributed systems, where processes do not fail, do not have identifiers, and run identical programs. We are interested specifically in the relative powers of systems with different communications mechanisms: anonymous broadcast, read-write registers, or registers supplemented with additional shared-memory objects. We show that a system with anonymous broadcast can simulate a system of shared-memory objects if and only if the objects satisfy a property we call idemdicence; this result holds regardless of whether either system is synchronous or asynchronous. Conversely, the key to simulating anonymous broadcast in anonymous shared memory is the ability to count: broadcast can be simulated by an asynchronous shared-memory system that uses only counters, but registers by themselves are not enough. We further examine the relative power of different types and sizes of bounded counters and conclude with a non-robustness result.
@article(AspnesFR2006, title="Relationships between broadcast and shared memory in reliable anonymous distributed systems", author="James Aspnes and Faith Ellen Fich and Eric Ruppert", journal={Distributed Computing}, volume=18, number=3, month=feb, year=2006, pages={209--219} )
James Aspnes, Joan Feigenbaum, Aleksandr Yampolskiy, and Sheng Zhong. Towards a theory of data entanglement. Theoretical Computer Science 389(1–2):26–43, December 2007. An earlier version appeared in Ninth European Symposium on Research in Computer Security, Lecture Notes in Computer Science 3192, Springer-Verlag, September 2004, pp. 177–192.
We give a formal model for systems that store data in entangled form. We propose a new notion of entanglement, called all-or-nothing integrity (AONI) that binds the users' data in a way that makes it hard to corrupt the data of any one user without corrupting the data of all users. AONI can be a useful defense against negligent or dishonest storage providers who might otherwise be tempted to discard documents belonging to users without much clout. We show that, if all users use a fixed standard recovery algorithm, we can implement AONI using a MAC, but, if some of the users adopt instead a non-standard recovery algorithm provided by the dishonest storage provider, AONI can no longer be achieved. However, even for the latter scenario, we describe a simple entangling mechanism that provides AONI for a restricted class of destructive adversaries.
@article(AspnesFYZ2007, title="Towards a theory of data entanglement", author="James Aspnes and Joan Feigenbaum and Aleksandr Yampolskiy and Sheng Zhong", journal="Theoretical Computer Science", volume=389, number={1--2}, month=dec, year=2007, pages={26--43} )
Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing 18(4):235–253, March 2006. (PODC 2004 special issue.) An earlier version appeared in Twenty-Third Annual ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, July 2004, pp. 290–299.
We explore the computational power of networks of small resource-limited mobile agents. We define two new models of computation based on pairwise interactions of finite-state agents in populations of finite but unbounded size. With a fairness condition on interactions, we define the concept of eventual computation of a function or predicate, and give protocols that eventually compute functions in a class including Boolean combinations of threshold-k, parity, majority, and simple arithmetic. We prove that all eventually computable predicates are in NL. With uniform random sampling of pairs to interact, we define the model of conjugating automata and show that any counter machine with O(1) counters of capacity O(n) can be simulated with high probability by a protocol in a population of size n. We prove that all predicates computable with high probability in this model are in P ∩ RL. Several open problems and promising future directions are discussed.
@article(AngluinADFP2006, title="Computation in networks of passively mobile finite-state sensors", author="Dana Angluin and James Aspnes and Zo{\"e} Diamadi and Michael J. Fischer and Ren\'e Peralta", journal="Distributed Computing", volume=18, number=4, month=mar, year=2006, pages={235--253} )
James Aspnes, Jonathan Kirsch, and Arvind Krishnamurthy. Load balancing and locality in range-queriable data structures. Twenty-Third ACM Symposium on Principles of Distributed Computing, July 2004, pp. 115–124.
We describe a load-balancing mechanism for assigning elements to servers in a distributed data structure that supports range queries. The mechanism ensures both load-balancing with respect to an arbitrary load measure specified by the user and geographical locality, assigning elements with similar keys to the same server. Though our mechanism is specifically designed to improve the performance of skip graphs, it can be adapted to provide deterministic, locality-preserving load-balancing to any distributed data structure that orders machines in a ring or line.
@inproceedings(AspnesKK2004, title="Load balancing and locality in range-queriable data structures", author="James Aspnes and Jonathan Kirsch and Arvind Krishnamurthy", booktitle="Twenty-Third ACM Symposium on Principles of Distributed Computing", month=jul, year=2004, pages={115--124} )
James Aspnes, David Goldenberg, and Yang Richard Yang. On the computational complexity of sensor network localization. Algorithmic Aspects of Wireless Sensor Networks: First International Workshop, ALGOSENSORS 2004, Turku, Finland, July 16, 2004. Proceedings. Lecture Notes in Computer Science 3121, Springer-Verlag, 2004, pp. 32–44. Available as YALEU/DCS/TR-1282, April 2004.
Determining the positions of the sensor nodes in a network is essential to many network functionalities such as routing, coverage and tracking, and event detection. The localization problem for sensor networks is to reconstruct the positions of all of the sensors in a network, given the distances between all pairs of sensors that are within some radius r of each other. In the past few years, many algorithms for solving the localization problem have been proposed, but the computational complexity of the problem has remained open. In this paper, we show that no polynomial-time algorithm can solve this problem in the worst case, even for sets of distance pairs for which a unique solution exists, unless RP = NP. We also discuss the consequences of our result and present open problems.
@inproceedings(AspnesGY2004, title="On the computational complexity of sensor network localization", author="James Aspnes and David Goldenberg and Yang Richard Yang", booktitle="Algorithmic Aspects of Wireless Sensor Networks: First International Workshop, ALGOSENSORS 2004, Turku, Finland, July 16, 2004. Proceedings.", series={Lecture Notes in Computer Science}, volume=3121, publisher="Springer-Verlag", year=2004, pages={32--44} )
Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Urn automata. Available as YALEU/DCS/TR-1280, November 2003.
Urn automata are a new class of automata consisting of an input tape, a finite-state controller, and an urn containing tokens with a finite set of colors, where the finite-state controller can sample and replace tokens in the urn but cannot control which tokens it receives. We consider the computational power of urn automata, showing that an urn automaton with O(f(n)) tokens can, with high probability, simulate a probabilistic Turing machine using O(log f(n)) space and vice versa, as well as giving several technical results showing that the computational power of urn automata is not affected by variations in parameters such as the size of the state space, the number of tokens sampled per step, and so forth. Motivated by problems in distributed computing, we consider a special class of urn automata called pairing automata that model systems of finite-state machines that interact through random pairwise encounters. We show that pairing automata recognize precisely the symmetric languages recognized by urn automata.
@techreport(AngluinADFP2003, title="Urn automata", author="Dana Angluin and James Aspnes and Zo{\"e} Diamadi and Michael J. Fischer and Ren\'e Peralta", institution="Yale University Department of Computer Science", number="YALEU/DCS/TR-1280", month=nov, year=2003 )
James Aspnes. Randomized protocols for asynchronous consensus. Distributed Computing 16(2–3):165–175, September 2003. (Invited survey paper for the PODC 20th-anniversary special issue.) Available as arXiv:cs.DS/0209014.
The famous Fischer, Lynch, and Paterson impossibility proof shows that it is impossible to solve the consensus problem in a natural model of an asynchronous distributed system if even a single process can fail. Since its publication, two decades of work on fault-tolerant asynchronous consensus algorithms have evaded this impossibility result by using extended models that provide (a) randomization, (b) additional timing assumptions, (c) failure detectors, or (d) stronger synchronization mechanisms than are available in the basic model. Concentrating on the first of these approaches, we illustrate the history and structure of randomized asynchronous consensus protocols by giving detailed descriptions of several such protocols.
@article(Aspnes2003, title="Randomized protocols for asynchronous consensus", author="James Aspnes", journal="Distributed Computing", volume=16, number={2--3}, pages={165--175}, month=sep, year=2003 )
James Aspnes, Joan Feigenbaum, Michael Mitzenmacher, and David C. Parkes. Towards better definitions and measures of Internet security. Workshop on Large-Scale-Network Security and Deployment Obstacles, Landsdowne VA, March 2003.
The conventional wisdom is that “the Internet is very insecure.” The subtitle of this workshop, namely “deployment obstacles,” implies that network owners, operators, and users could have solved pervasive security problems if they had deployed existing security technology. Is there solid evidence that either of these statements is true?
@inproceedings(AspnesFMP2003, title="Towards better definitions and measures of Internet security", author="James Aspnes and Joan Feigenbaum and Michael Mitzenmacher and David C. Parkes", booktitle="Workshop on Large-Scale-Network Security and Deployment Obstacles", address="Landsdowne, VA", month=mar, year=2003 )
James Aspnes and Gauri Shah. Skip graphs. ACM Transactions on Algorithms, 3(4):37, November 2007. An earlier version appeared in Fourteenth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2003, pp. 384–393. Available as arXiv:cs.DS/0306043.
Skip graphs are a novel distributed data structure, based on skip lists, that provide the full functionality of a balanced tree in a distributed system where resources are stored in separate nodes that may fail at any time. They are designed for use in searching peer-to-peer systems, and by providing the ability to perform queries based on key ordering, they improve on existing search tools that provide only hash-table functionality. Unlike skip lists or other tree data structures, skip graphs are highly resilient, tolerating a large fraction of failed nodes without losing connectivity. In addition, constructing a skip graph, inserting new nodes into it, searching it, and detecting and repairing errors introduced by node failures can all be done using simple and straightforward algorithms.
@article(AspnesS2007, title="Skip graphs", author="James Aspnes and Gauri Shah", journal="ACM Transactions on Algorithms", month=nov, year=2007, volume=3, number=4, pages={37} )
James Aspnes, Zoë Diamadi, and Gauri Shah. Fault-tolerant routing in peer-to-peer systems. Twenty-First ACM Symposium on Principles of Distributed Computing, July 2002, pp. 223–232. Available as arXiv:cs.DS/0302022.
We consider the problem of designing an overlay network and routing mechanism that permits finding resources efficiently in a peer-to-peer system. We argue that many existing approaches to this problem can be modeled as the construction of a random graph embedded in a metric space whose points represent resource identifiers, where the probability of a connection between two nodes depends only on the distance between them in the metric space. We study the performance of a peer-to-peer system where nodes are embedded at grid points in a simple metric space: a one-dimensional real line. We prove upper and lower bounds on the message complexity of locating particular resources in such a system, under a variety of assumptions about failures of either nodes or the connections between them. Our lower bounds in particular show that the use of inverse power-law distributions in routing, as suggested by Kleinberg (1999), is close to optimal. We also give heuristics to efficiently maintain a network supporting efficient routing as nodes enter and leave the system. Finally, we give some experimental results that suggest promising directions for future work.
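As a rough illustration of the inverse power-law routing idea discussed above, here is a toy sketch of ours (not from the paper; nodes are placed on a cycle rather than a line, and all names are hypothetical). Each node keeps its two grid neighbors plus one long-range link chosen with probability proportional to 1/distance, and greedy routing forwards each message to the neighbor closest to the target; with this distribution, Kleinberg-style analysis gives polylogarithmic expected hop counts.

```python
import random

def build_links(n, rng):
    """Each node i on an n-cycle gets neighbors i±1 plus one long-range
    link to j chosen with probability proportional to 1/d(i, j)."""
    links = []
    for i in range(n):
        nbrs = [(i - 1) % n, (i + 1) % n]
        dists = [min(abs(i - j), n - abs(i - j)) for j in range(n)]
        weights = [0 if j == i else 1.0 / dists[j] for j in range(n)]
        nbrs.append(rng.choices(range(n), weights=weights)[0])
        links.append(nbrs)
    return links

def greedy_route(links, src, dst, n):
    """Greedy routing: repeatedly forward to the neighbor whose
    cycle distance to dst is smallest; count the hops taken."""
    hops, cur = 0, src
    while cur != dst:
        cur = min(links[cur], key=lambda j: min(abs(j - dst), n - abs(j - dst)))
        hops += 1
    return hops
```

Because the ±1 neighbors always offer strict progress toward the target, the route terminates in at most n/2 hops even if the long-range links never help; the power-law links are what bring the expected hop count down to polylogarithmic.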
@inproceedings(AspnesDS2002, title="Fault-tolerant routing in peer-to-peer systems", author="James Aspnes and Zo{\"e} Diamadi and Gauri Shah", booktitle="Twenty-First ACM Symposium on Principles of Distributed Computing", month=jul, year=2002, pages={223--232} )
James Aspnes, Gauri Shah, and Jatin Shah. Wait-free consensus with infinite arrivals. Thirty-Fourth Annual ACM Symposium on Theory of Computing, May 2002, pp. 524–533.
A randomized algorithm is given that solves the wait-free consensus problem for a shared-memory model with infinitely many processes. The algorithm is based on a weak shared coin algorithm that uses weighted voting to achieve a majority outcome with at least constant probability that cannot be disguised even if a strong adversary is allowed to destroy infinitely many votes. The number of operations performed by process i is a polynomial function of i. Additional algorithms are given for solving consensus more efficiently in models with an unknown upper bound b on concurrency or an unknown upper bound n on the number of active processes; under either of these restrictions, it is also shown that the problem can be solved even with infinitely many anonymous processes by prefixing each instance of the shared coin with a naming algorithm that breaks symmetry with high probability. For many of these algorithms, matching lower bounds are proved that show that their per-process work is nearly optimal as a function of i, b, or n. The case of n active processes gives an algorithm for anonymous, adaptive consensus that requires only O(n log² n) per-process work, which is within a constant factor of the best previously known non-adaptive algorithm for a strong adversary. Finally, it is shown that standard universal constructions based on consensus continue to work with infinitely many processes with only slight modifications. This shows that in infinite distributed systems, as in finite ones, with randomness all things are possible.
@inproceedings(AspnesSS2002, title="Wait-free consensus with infinite arrivals", author="James Aspnes and Gauri Shah and Jatin Shah", booktitle="Thirty-Fourth Annual ACM Symposium on Theory of Computing", month=may, year=2002, pages={524--533} )
Karhan Akcoglu, James Aspnes, Bhaskar DasGupta, and Ming-Yang Kao. Opportunity-cost algorithms for combinatorial auctions. In E. J. Kontoghiorghes, B. Rustem, and S. Siokos, eds., Applied Optimization 74: Computational Methods in Decision-Making, Economics and Finance, Kluwer Academic Publishers, 2002, pp. 455–479. An earlier version is available as DIMACS technical report 2000-27 and as arXiv:cs.CE/0010031.
Two general algorithms based on opportunity costs are given for approximating a revenue-maximizing set of bids an auctioneer should accept, in a combinatorial auction in which each bidder offers a price for some subset of the available goods and the auctioneer can only accept non-intersecting bids. Since this problem is difficult even to approximate in general, the algorithms are most useful when the bids are restricted to be connected node subsets of an underlying object graph that represents which objects are relevant to each other. The approximation ratios of the algorithms depend on structural properties of this graph and are small constants for many interesting families of object graphs. The running times of the algorithms are linear in the size of the bid graph, which describes the conflicts between bids. Extensions of the algorithms allow for efficient processing of additional constraints, such as budget constraints that associate bids with particular bidders and limit how many bids from a particular bidder can be accepted.
@incollection(AkcogluADK2002, title="Opportunity-cost algorithms for combinatorial auctions", author="Karhan Akcoglu and James Aspnes and Bhaskar DasGupta and Ming-Yang Kao", booktitle="Applied Optimization 74: Computational Methods in Decision-Making, Economics and Finance", editor="E. J. Kontoghiorghes and B. Rustem and S. Siokos", publisher="Kluwer Academic Publishers", year=2002, pages={455--479} )
James Aspnes, Julia Hartling, Ming-Yang Kao, Junhyong Kim, and Gauri Shah. A combinatorial toolbox for protein sequence design and landscape analysis in the Grand Canonical model. Journal of Computational Biology 9(5):721–741, October 2002. An earlier version appeared in Twelfth Annual International Symposium on Algorithms and Computation, December 2001, pp. 403–415. Available as arXiv:cs.CE/0101015.
In modern biology, one of the most important research problems is to understand how protein sequences fold into their native 3D structures. To investigate this problem at a high level, one wishes to analyze the protein landscapes, i.e., the structures of the space of all protein sequences and their native 3D structures. Perhaps the most basic computational problem at this level is to take a target 3D structure as input and design a fittest protein sequence with respect to one or more fitness functions of the target 3D structure. We develop a toolbox of combinatorial techniques for protein landscape analysis in the Grand Canonical model of Sun, Brem, Chan, and Dill. The toolbox is based on linear programming, network flow, and a linear-size representation of all minimum cuts of a network. It not only substantially expands the network flow technique for protein sequence design in Kleinberg's seminal work but also is applicable to a considerably broader collection of computational problems than those considered by Kleinberg. We have used this toolbox to obtain a number of efficient algorithms and hardness results. We have further used the algorithms to analyze 3D structures drawn from the Protein Data Bank and have discovered some novel relationships between such native 3D structures and the Grand Canonical model.
@article(AspnesHKKS2002, title="A combinatorial toolbox for protein sequence design and landscape analysis in the {G}rand {C}anonical model", author="James Aspnes and Julia Hartling and Ming-Yang Kao and Junhyong Kim and Gauri Shah", journal="Journal of Computational Biology", volume=9, number=5, pages={721--741}, month=oct, year=2002 )
James Aspnes, David F. Fischer, Michael J. Fischer, Ming-Yang Kao, and Alok Kumar. Towards understanding the predictability of stock markets from the perspective of computational complexity. Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms, January 2001, pp. 745–754. Available as arXiv:cs.CE/0010021.
This paper initiates a study into the century-old issue of market predictability from the perspective of computational complexity. We develop a simple agent-based model for a stock market where the agents are traders equipped with simple trading strategies, and their trades together determine the stock prices. Computer simulations show that a basic case of this model is already capable of generating price graphs which are visually similar to the recent price movements of high tech stocks. In the general model, we prove that if there are a large number of traders but they employ a relatively small number of strategies, then there is a polynomial-time algorithm for predicting future price movements with high accuracy. On the other hand, if the number of strategies is large, market prediction becomes complete in two new computational complexity classes CPP and promise-BCPP, where P^{NP[O(log N)]} <= promise-BCPP <= CPP = PP. These computational completeness results open up a novel possibility that the price graph of an actual stock could be sufficiently deterministic for various prediction goals but appear random to all polynomial-time prediction algorithms.
@inproceedings(AspnesFFKK2001, title="Towards understanding the predictability of stock markets from the perspective of computational complexity", author="James Aspnes and David F. Fischer and Michael J. Fischer and Ming-Yang Kao and Alok Kumar", booktitle="Twelfth Annual ACM-SIAM Symposium on Discrete Algorithms", month=jan, year=2001, pages={745--754} )
James Aspnes. Fast deterministic consensus in a noisy environment. Journal of Algorithms 45(1):16–39, October 2002. An earlier version appeared in Nineteenth Annual ACM Symposium on Principles of Distributed Computing, July 2000, pp. 299–309. Available as arXiv:cs.DS/0206012.
It is well known that the consensus problem cannot be solved deterministically in an asynchronous environment, but that randomized solutions are possible. We propose a new model, called noisy scheduling, in which an adversarial schedule is perturbed randomly, and show that in this model randomness in the environment can substitute for randomness in the algorithm. In particular, we show that a simplified, deterministic version of Chandra's wait-free shared-memory consensus algorithm solves consensus in time at most logarithmic in the number of active processes. The proof of termination is based on showing that a race between independent delayed renewal processes produces a winner quickly. In addition, we show that the protocol finishes in constant time using quantum and priority-based scheduling on a uniprocessor, suggesting that it is robust against the choice of model over a wide range.
@article(Aspnes2002, title="Fast deterministic consensus in a noisy environment", author="James Aspnes", journal={Journal of Algorithms}, volume=45, number=1, pages={16--39}, month=oct, year=2002 )
James Aspnes. Lower bounds for distributed coin-flipping and randomized consensus. Journal of the Association for Computing Machinery 45(3):415–450, May 1998. An earlier version appeared in Twenty-Ninth Annual ACM Symposium on Theory of Computing, May 1997, pp. 559–568.
We examine a class of collective coin-flipping games that arises from randomized distributed algorithms with halting failures. In these games, a sequence of local coin flips is generated, which must be combined to form a single global coin flip. An adversary monitors the game and may attempt to bias its outcome by hiding the result of up to t local coin flips. We show that to guarantee at most constant bias, Ω(t²) local coins are needed, even if (a) the local coins can have arbitrary distributions and ranges, (b) the adversary is required to decide immediately whether to hide or reveal each local coin, and (c) the game can detect which local coins have been hidden. If the adversary is permitted to control the outcome of the coin except for cases whose probability is polynomial in t, Ω(t²/log² t) local coins are needed. Combining this fact with an extended version of the well-known Fischer-Lynch-Paterson impossibility proof of deterministic consensus, we show that given an adaptive adversary, any t-resilient asynchronous consensus protocol requires Ω(t²/log² t) local coin flips in any model that can be simulated deterministically using atomic registers. This gives the first non-trivial lower bound on the total work required by wait-free consensus and is tight to within logarithmic factors.
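The shape of the Ω(t²) bound can be checked with a back-of-the-envelope calculation (intuition for the special case of majority voting over fair ±1 coins only, not the paper's argument, which covers arbitrary distributions and ranges):

```python
from math import comb

def prob_sum_within(n, t):
    """Pr[|S_n| <= t], where S_n is the sum of n independent fair ±1 votes.
    By withholding up to t votes the adversary can swing the total by t, so
    these are exactly the executions whose outcome it controls."""
    total = 0
    for heads in range(n + 1):
        if abs(2 * heads - n) <= t:
            total += comb(n, heads)
    return total / 2 ** n

# With n = t^2 votes, the swing t equals one standard deviation sqrt(n),
# so the adversary controls the outcome with constant probability; pushing
# that probability below a constant forces n to grow faster than t^2.
```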
@article(Aspnes1998coin, title="Lower bounds for distributed coin-flipping and randomized consensus", author="James Aspnes", journal=jacm, volume=45, number=3, pages={415--450}, month=may, year=1998 )
James Aspnes. Competitive analysis of distributed algorithms. Invited survey paper, Dagstuhl-Seminar on On-line Algorithms, Schloss Dagstuhl, June 23–28, 1996. In Amos Fiat, Gerhard Woeginger, eds., Online Algorithms: The State of the Art, Lecture Notes in Computer Science 1442, Springer-Verlag, 1998, pp. 118–146.
Most applications of competitive analysis have involved on-line problems where a candidate on-line algorithm must compete on some input sequence against an optimal off-line algorithm that can in effect predict future inputs. Efforts to apply competitive analysis to fault-tolerant distributed algorithms require accounting for not only this input nondeterminism but also system nondeterminism that arises in distributed systems prone to asynchrony and failures. This paper surveys recent efforts to adapt competitive analysis to distributed systems, and suggests how these adaptations might in turn be useful in analyzing a wider variety of systems. These include tools for building competitive algorithms by composition, and for obtaining more meaningful competitive ratios by limiting the knowledge of the off-line algorithm.
@incollection(Aspnes1998competitive, title="Competitive analysis of distributed algorithms", author="James Aspnes", booktitle="Online Algorithms: The State of the Art", editor="Amos Fiat and Gerhard Woeginger", series={Lecture Notes in Computer Science}, volume=1442, publisher="Springer-Verlag", year=1998, pages={118--146} )
James Aspnes and Orli Waarts. Compositional competitiveness for distributed algorithms. Journal of Algorithms 54(2):127–151, February 2005. Available as arXiv:cs.DS/0306044. An earlier version appeared in Twenty-Eighth Annual ACM Symposium on Theory of Computing, May 1996, pp. 237–246, under the title “Modular competitiveness for distributed algorithms.” A brief announcement of this work appeared in Fourteenth Annual ACM Symposium on Principles of Distributed Computing, August 1995, p. 252, under the title “A modular measure of competitive performance for distributed algorithms.”
We define a measure of competitive performance for distributed algorithms based on throughput, the number of tasks that an algorithm can carry out in a fixed amount of work. This new measure complements the latency measure of Ajtai et al., which measures how quickly an algorithm can finish tasks that start at specified times. The novel feature of the throughput measure, which distinguishes it from the latency measure, is that it is compositional: it supports a notion of algorithms that are competitive relative to a class of subroutines, with the property that an algorithm that is k-competitive relative to a class of subroutines, combined with an l-competitive member of that class, gives a combined algorithm that is kl-competitive.
In particular, we prove the throughput-competitiveness of a class of algorithms for collect operations, in which each of a group of n processes obtains all values stored in an array of n registers. Collects are a fundamental building block of a wide variety of shared-memory distributed algorithms, and we show that several such algorithms are competitive relative to collects. Inserting a competitive collect in these algorithms gives the first examples of competitive distributed algorithms obtained by composition using a general construction.
@article(AspnesW2005compositional, title="Compositional competitiveness for distributed algorithms", author="James Aspnes and Orli Waarts", journal={Journal of Algorithms}, volume=54, number=2, pages={127--151}, month=feb, year=2005 )
James Aspnes and William Hurwood. Spreading rumors rapidly despite an adversary. Journal of Algorithms 26(2):386–411, February 1998. An earlier version appeared in Fifteenth Annual ACM Symposium on Principles of Distributed Computing, May 1996, pp. 143–151.
In the collect problem, n processors in a shared-memory system must each learn the values of n registers. We give a randomized algorithm that solves the collect problem in O(n log³ n) total read and write operations with high probability, even if timing is under the control of a content-oblivious adversary (a slight weakening of the usual adaptive adversary). This improves on both the trivial upper bound of O(n²) steps and the best previously known bound of O(n^{3/2} log n) steps, and is close to the lower bound of Ω(n log n) steps. Furthermore, we show how this algorithm can be used to obtain a multi-use cooperative collect protocol that is O(log³ n)-competitive in the latency model of Ajtai et al. and O(n^{1/2} log^{3/2} n)-competitive in the throughput model of Aspnes and Waarts; in both cases we show that the competitive ratios are within a polylogarithmic factor of optimal.
@article(AspnesH1998, title="Spreading rumors rapidly despite an adversary", author="James Aspnes and William Hurwood", journal={Journal of Algorithms}, volume=26, number=2, pages={386--411}, month=feb, year=1998 )
Miklos Ajtai, James Aspnes, Moni Naor, Yuval Rabani, Leonard J. Schulman, and Orli Waarts. Fairness in scheduling. Journal of Algorithms 29(2):306–357, November 1998. (SODA 1995 special issue.) An earlier version appeared in Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, January 1995, pp. 477–485.
On-line machine scheduling has been studied extensively, but the fundamental issue of fairness in scheduling is still mostly open. In this paper we explore the issue in settings where there are long living processes which should be repeatedly scheduled for various tasks throughout the lifetime of a system. For any such instance we develop a notion of desired load of a process, which is a function of the tasks it participates in. The unfairness of a system is the maximum, taken over all processes, of the difference between the desired load and the actual load.
An example of such a setting is the carpool problem suggested by Fagin and Williams. In this problem, a set of n people form a carpool. On each day a subset of the people arrive and one of them is designated as the driver. A scheduling rule is required so that the driver will be determined in a “fair” way.
We investigate this problem under various assumptions on the input distribution. We also show that the carpool problems can capture several other problems of fairness in scheduling.
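The setting can be made concrete with a short sketch (the greedy rule below is one natural scheduling rule, shown for illustration; it is not necessarily among the rules analyzed in the paper, and the names are hypothetical):

```python
def carpool(days, people):
    """Greedy fairness rule for the carpool problem: each day every
    participant's desired load rises by 1/k (k = group size), and the
    participant who is most "owed" (desired minus actual load) drives."""
    desired = {p: 0.0 for p in people}
    actual = {p: 0 for p in people}
    drivers = []
    for group in days:
        for p in group:
            desired[p] += 1.0 / len(group)
        driver = max(group, key=lambda p: desired[p] - actual[p])
        actual[driver] += 1
        drivers.append(driver)
    return drivers, {p: desired[p] - actual[p] for p in people}
```

Each day adds exactly 1 to the total desired load and 1 to the total actual load, so the signed unfairness terms always sum to zero; the question studied above is how large any single term can grow.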
@article(AjtaiANRSW1998, title="Fairness in scheduling", author="Miklos Ajtai and James Aspnes and Moni Naor and Yuval Rabani and Leonard J. Schulman and Orli Waarts", journal={Journal of Algorithms}, volume=29, number=2, pages={306--357}, month=nov, year=1998 )
Miklos Ajtai, James Aspnes, Cynthia Dwork, and Orli Waarts. A theory of competitive analysis for distributed algorithms. Thirty-Fifth IEEE Symposium on Foundations of Computer Science, November 1994, pp. 401–411. A brief announcement of this work appeared in Thirteenth ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, August 1994, under the title “Competitive analysis for distributed algorithms.”
We introduce a theory of competitive analysis for distributed algorithms. The first steps in this direction were made in the seminal papers of Bartal, Fiat, and Rabani, and of Awerbuch, Kutten, and Peleg, in the context of data management and job scheduling. In these papers, as well as in other subsequent work, the cost of a distributed algorithm is compared to the cost of an optimal global-control algorithm. Here we introduce a more refined notion of competitiveness for distributed algorithms, one that reflects the performance of distributed algorithms more accurately. In particular, our theory allows one to compare the cost of a distributed on-line algorithm to the cost of an optimal distributed algorithm.
We demonstrate our method by studying the cooperative collect primitive, first abstracted by Saks, Shavit, and Woll. We present two algorithms (with different strengths) for this primitive, and provide a competitive analysis for each one.
@inproceedings(AjtaiADW1994, title="A theory of competitive analysis for distributed algorithms", author="Miklos Ajtai and James Aspnes and Cynthia Dwork and Orli Waarts", booktitle="Thirty-Fifth IEEE Symposium on Foundations of Computer Science", month=nov, year=1994, pages={401--411} )
James Aspnes, Yossi Azar, Amos Fiat, Serge Plotkin, and Orli Waarts. On-line routing of virtual circuits with applications to load balancing and machine scheduling. Journal of the Association for Computing Machinery 44(3):486–504, May 1997. An earlier version appeared in Twenty-Fifth Annual ACM Symposium on Theory of Computing, May 1993, pp. 623–631, under the title “On-line load balancing with applications to machine scheduling and virtual circuit routing.”
In this paper we study the problem of on-line allocation of routes to virtual circuits (both point-to-point and multicast) where the goal is to route all requests while minimizing the required bandwidth. We concentrate on the case of permanent virtual circuits (i.e., once a circuit is established, it exists forever), and describe an algorithm that achieves an O(log n) competitive ratio with respect to maximum congestion, where n is the number of nodes in the network. Informally, our results show that instead of knowing all of the future requests, it is sufficient to increase the bandwidth of the communication links by an O(log n) factor. We also show that this result is tight, i.e., for any on-line algorithm there exists a scenario in which an Ω(log n) increase in bandwidth is necessary in directed networks.
We view virtual circuit routing as a generalization of an on-line load balancing problem, defined as follows: jobs arrive on line and each job must be assigned to one of the machines immediately upon arrival. Assigning a job to a machine increases this machine's load by an amount that depends both on the job and on the machine. The goal is to minimize the maximum load.
For the related machines case, we describe the first algorithm that achieves a constant competitive ratio. For the unrelated case (with n machines), we describe a new method that yields an O(log n)-competitive algorithm. This stands in contrast to the natural greedy approach, whose competitive ratio is exactly n.
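The natural greedy baseline mentioned above is easy to state for the unrelated-machines formulation (a sketch for illustration only; the paper's O(log n)-competitive algorithm instead routes using an exponential function of current load, which this sketch does not implement):

```python
def greedy_assign(jobs, m):
    """Natural greedy rule for unrelated machines, the n-competitive baseline
    the abstract contrasts with: each job is a vector of per-machine costs,
    and on arrival it goes to the machine minimizing the resulting load."""
    load = [0.0] * m
    for cost in jobs:
        i = min(range(m), key=lambda i: load[i] + cost[i])
        load[i] += cost[i]
    return load
```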
@article(AspnesAFPW1993, title="On-line routing of virtual circuits with applications to load balancing and machine scheduling", author="James Aspnes and Yossi Azar and Amos Fiat and Serge Plotkin and Orli Waarts", journal=jacm, volume=44, number=3, pages={486--504}, month=may, year=1997 )
James Aspnes and Orli Waarts. Randomized consensus in expected O(n log² n) operations per processor. SIAM Journal on Computing 25(5):1024–1044, October 1996. An earlier version appeared in Thirty-Third IEEE Symposium on Foundations of Computer Science, October 1992, pp. 137–146.
This paper presents a new randomized algorithm for achieving consensus among asynchronous processors that communicate by reading and writing shared registers. The fastest previously known algorithm requires a processor to perform an expected O(n² log n) read and write operations in the worst case. In our algorithm, each processor executes at most an expected O(n log² n) read and write operations, which is close to the trivial lower bound of Ω(n).
All previously known polynomial-time consensus algorithms were structured around a shared coin protocol in which each processor repeatedly adds random ±1 votes to a common pool. Consequently, in all of these protocols, the worst case expected bound on the number of read and write operations done by a single processor is asymptotically no better than the bound on the total number of read and write operations done by all of the processors together. We succeed in breaking this tradition by allowing the processors to cast votes of increasing weights. This grants the adversary greater control, since he can choose from up to n different weights (one for each processor) when determining the weight of the next vote to be cast. Nevertheless, we prove, using martingale arguments, that our shared coin protocol is correct.
@article(AspnesW1996, title="Randomized consensus in expected {$O(n \log^2 n)$} operations per processor", author="James Aspnes and Orli Waarts", journal=sicomp, volume=25, number=5, pages={1024--1044}, month=oct, year=1996 )
James Aspnes. Wait-Free Consensus. PhD thesis, Carnegie-Mellon University, 1992. Available as CMU-CS-TR-92-164.
Consensus is a decision problem in which n processors, each starting with a value not known to the others, must collectively agree on a single value. If the initial values are equal, the processors must agree on that common value; this is the validity condition. A consensus protocol is wait-free if every processor finishes in a finite number of its own steps regardless of the relative speeds of the other processors, a condition that precludes the use of traditional synchronization techniques such as critical sections, locking, or leader election. Wait-free consensus is fundamental to synchronization without mutual exclusion, as it can be used to construct wait-free implementations of arbitrary concurrent data structures. It is known that no deterministic algorithm for wait-free consensus is possible, although many randomized algorithms have been proposed.
I present two algorithms for solving the wait-free consensus problem in the standard asynchronous shared-memory model. The first is a very simple protocol based on a random walk. The second is a protocol based on weighted voting, in which each processor executes O(n log² n) expected operations. This bound is close to the trivial lower bound of Ω(n), and it substantially improves on the best previously-known bound of O(n² log n), due to Bracha and Rachman.
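The flavor of the first protocol can be conveyed by a centralized sketch (a single-process simulation of the collective counter, not the actual shared-memory protocol; the names are hypothetical):

```python
import random

def random_walk_coin(n, rng):
    """Sketch of a random-walk weak shared coin: processes repeatedly add
    ±1 to a shared counter and agree on its sign once it drifts a distance
    n from the origin, which no bounded adversarial reordering can prevent
    from eventually happening."""
    c = 0
    while abs(c) < n:
        c += rng.choice((-1, 1))
    return 1 if c > 0 else 0
```

Standard random-walk theory gives an expected O(n²) steps before absorption, which is why the thesis's second, weighted-voting protocol is the one achieving the O(n log² n) bound.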
@phdthesis(Aspnes1992, title="Wait-Free Consensus", author="James Aspnes", school="Carnegie-Mellon University", year=1992 )
James Aspnes, Maurice Herlihy, and Nir Shavit. Counting networks. Journal of the Association for Computing Machinery 41(5):1020–1048, September 1994. An earlier version appeared in Twenty-Third Annual ACM Symposium on Theory of Computing, May 1991, pp. 348–358, under the title “Counting networks and multiprocessor coordination.”
Many fundamental multi-processor coordination problems can be expressed as counting problems: processes must cooperate to assign successive values from a given range, such as addresses in memory or destinations on an interconnection network. Conventional solutions to these problems perform poorly because of synchronization bottlenecks and high memory contention.
Motivated by observations on the behavior of sorting networks, we offer a new approach to solving such problems, by introducing counting networks, a new class of networks that can be used to count. We give two counting network constructions, one of depth (log n)(1 + log n)/2 using (n log n)(1 + log n)/4 “gates” and a second of depth log² n, using (n log² n)/2 gates. These networks avoid the sequential bottlenecks inherent to earlier solutions, and substantially lower the memory contention.
Finally, to show that counting networks are not merely mathematical creatures, we provide experimental evidence that they outperform conventional synchronization techniques under a variety of circumstances.
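The basic element of a counting network is the balancer, a two-wire switch that forwards arriving tokens alternately to its outputs. A minimal sketch (hypothetical names; the actual constructions above wire many balancers into bitonic and periodic networks satisfying the step property):

```python
class Balancer:
    """A balancer: forwards incoming tokens to its top (0) and bottom (1)
    outputs alternately, so its output counts differ by at most one."""
    def __init__(self):
        self.toggle = 0

    def traverse(self):
        out, self.toggle = self.toggle, 1 - self.toggle
        return out

def feed(balancer, k):
    """Send k tokens through one balancer and count tokens per output wire."""
    counts = [0, 0]
    for _ in range(k):
        counts[balancer.traverse()] += 1
    return counts
```

In a shared-memory implementation the toggle is a shared bit, and the point of the network structure is that many processes can traverse different balancers concurrently instead of contending on a single counter.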
@article(AspnesHS1994, title="Counting networks", author="James Aspnes and Maurice Herlihy and Nir Shavit", journal=jacm, volume=41, number=5, pages={1020--1048}, month=sep, year=1994 )
James Aspnes, Richard Beigel, Merrick Furst, and Steven Rudich. The expressive power of voting polynomials. Combinatorica 14(2):1–14, 1994. An earlier version appeared in Twenty-Third Annual ACM Symposium on Theory of Computing, May 1991, pp. 402–409.
We consider the problem of approximating a Boolean function f: {0,1}^{n} → {0,1} by the sign of an integer polynomial p of degree k. For us, a polynomial p(x) predicts the value of f(x) if, whenever p(x) ≥ 0, f(x) = 1, and whenever p(x) < 0, f(x) = 0. A low-degree polynomial p is a good approximator for f if it predicts f at almost all points. Given a positive integer k, and a Boolean function f, we ask, “how good is the best degree k approximation to f?” We introduce a new lower bound technique which applies to any Boolean function. We show that the lower bound technique yields tight bounds in the case where f is parity. Minsky and Papert proved that a perceptron cannot compute parity; our bounds indicate exactly how well a perceptron can approximate it. As a consequence, we are able to give the first correct proof that, for a random oracle A, PP^{A} is properly contained in PSPACE^{A}. We are also able to prove the old AC0 exponential-size lower bounds in a new way. This allows us to prove the new result that an AC0 circuit with one majority gate cannot approximate parity. Our proof depends only on basic properties of integer polynomials.
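The definition of sign-prediction can be tested exhaustively for tiny n (a brute-force sketch with hypothetical names; the small integer grid is only a heuristic search, but it settles both functions below, since XOR has no linear separator over the reals at all):

```python
from itertools import product

def sign_predicts(coeffs, f, n):
    """Does sign(a0 + a1*x1 + ... + an*xn) predict f on all of {0,1}^n?
    That is: p(x) >= 0 must imply f(x) = 1, and p(x) < 0 must imply f(x) = 0."""
    a0, a = coeffs[0], coeffs[1:]
    return all((a0 + sum(ai * xi for ai, xi in zip(a, x)) >= 0) == f(x)
               for x in product((0, 1), repeat=n))

def exists_linear_predictor(f, n, bound=3):
    """Search a small integer coefficient grid for a degree-1 sign predictor."""
    grid = range(-bound, bound + 1)
    return any(sign_predicts(c, f, n) for c in product(grid, repeat=n + 1))
```

For example, x1 + x2 − 1 sign-predicts OR on two bits, while no linear polynomial predicts parity even on two bits — Minsky and Papert's perceptron limitation mentioned above, of which the paper's bounds give the approximate-degree refinement.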
@article(AspnesBFR1994, title="The expressive power of voting polynomials", author="James Aspnes and Richard Beigel and Merrick Furst and Steven Rudich", journal="Combinatorica", volume=14, number=2, pages={1--14}, year=1994 )
James Aspnes and Maurice Herlihy. Fast randomized consensus using shared memory. Journal of Algorithms 11(3):441–461, September 1990.
We give a new randomized algorithm for achieving consensus among asynchronous processes that communicate by reading and writing shared registers. The fastest previously known algorithm has exponential expected running time. Our algorithm is polynomial, requiring an expected O(n^{4}) operations. Applications of this algorithm include the elimination of critical sections from concurrent data structures and the construction of asymptotically unbiased shared coins.
@article(AspnesH1990consensus, title="Fast randomized consensus using shared memory", author="James Aspnes and Maurice Herlihy", journal={Journal of Algorithms}, volume=11, number=3, pages={441--461}, month=sep, year=1990 )
James Aspnes. Time- and space-efficient randomized consensus. Journal of Algorithms 14(3):414–431, May 1993. An earlier version appeared in Ninth ACM SIGACT-SIGOPS Symposium on Principles of Distributed Computing, August 1990, pp. 325–331.
A protocol is presented which solves the randomized consensus problem for shared memory. The protocol uses a total of O(p² + n) worst-case expected increment, decrement and read operations on a set of three shared O(log n)-bit counters, where p is the number of active processors and n is the total number of processors. It requires less space than previous polynomial-time consensus protocols, and is faster when not all of the processors participate in the protocol. A modified version of the protocol yields a weak shared coin whose bias is guaranteed to be in the range 1/2 ± ε regardless of scheduler behavior, and which is the first such protocol for the shared-memory model to guarantee that all processors agree on the outcome of the coin.
@article(Aspnes1993, title="Time- and space-efficient randomized consensus", author="James Aspnes", journal={Journal of Algorithms}, volume=14, number=3, pages={414--431}, month=may, year=1993 )
James Aspnes and Maurice Herlihy. Wait-free data structures in the asynchronous PRAM model. Second Annual ACM Symposium on Parallel Algorithms and Architectures, July 1990, pp. 340–349.
In the asynchronous PRAM model, processes communicate by atomically reading and writing shared memory locations. This paper investigates the extent to which asynchronous PRAM permits long-lived, highly concurrent data structures. An implementation of a concurrent object is wait-free if every operation will complete in a finite number of steps, and it is k-bounded wait-free, for some k > 0, if every operation will complete within k steps. In the first part of this paper, we show that there are objects with wait-free implementations but no k-bounded wait-free implementations for any k, and that there is an infinite hierarchy of objects with implementations that are K-bounded wait-free for some K > k but not k-bounded wait-free. In the second part of the paper, we give an algebraic characterization of a large class of objects that do have wait-free implementations in asynchronous PRAM, as well as a general algorithm for implementing them. Our tools include simple iterative algorithms for wait-free approximate agreement and atomic snapshot.
@inproceedings(AspnesH1990waitfree, title="Wait-free data structures in the asynchronous {PRAM} model", author="James Aspnes and Maurice Herlihy", booktitle="Second Annual ACM Symposium on Parallel Algorithms and Architectures", month=jul, year=1990, pages={340--349} )
James Aspnes, Alan Fekete, Nancy Lynch, Michael Merritt, and William Weihl. A theory of timestamp-based concurrency control for nested transactions. Fourteenth International Conference on Very Large Databases, August 1988, pp. 431–444.
We present a rigorous framework for analyzing timestamp-based concurrency control and recovery algorithms for nested transactions. We define a local correctness property, local static atomicity, that affords useful modularity. We show that local static atomicity of each object is sufficient to ensure global serializability. We present generalizations of algorithms due to Reed and Herlihy, and show that each ensures local static atomicity.
@inproceedings(AspnesFLMW1988, title="A theory of timestamp-based concurrency control for nested transactions", author="James Aspnes and Alan Fekete and Nancy Lynch and Michael Merritt and William Weihl", booktitle="Fourteenth International Conference on Very Large Databases", month=aug, year=1988, pages={431--444} )