Name: Richard Lipton
Role: Computer scientist
Fields: Computer science
Born: September 6, 1946
Institutions: Yale, Berkeley, Princeton, Georgia Tech
Thesis: On Synchronization Primitive Systems (1973)
Doctoral students: Dan Boneh, Avi Wigderson, Chee Yap
Organizations founded: Damballa
Richard Jay Lipton (born September 6, 1946) is an American computer scientist who has worked in computer science theory, cryptography, and DNA computing. Lipton is Associate Dean of Research, Professor, and the Frederick G. Storey Chair in Computing in the College of Computing at the Georgia Institute of Technology.
Contents
- Career
- Karp–Lipton theorem
- Parallel algorithms
- Database security
- Online scheduling
- Program checking
- Games with simple strategies
- Query size estimation
- Formal verification of programs
- Multi-party protocols
- Time/space SAT tradeoff
- Awards and honors
- References
Career
In 1968, Lipton received his undergraduate degree in mathematics from Case Western Reserve University. In 1973, he received his Ph.D. from Carnegie Mellon University; his dissertation, supervised by David Parnas, is entitled On Synchronization Primitive Systems. After graduating, Lipton taught at Yale 1973–1978, at Berkeley 1978–1980, and then at Princeton 1980–2000. Since 2000, Lipton has been at Georgia Tech. While at Princeton, Lipton worked in the field of DNA computing. Since 1996, Lipton has been the chief consulting scientist at Telcordia.
Karp–Lipton theorem
In 1980, along with Richard M. Karp, Lipton proved that if SAT can be solved by Boolean circuits with a polynomial number of logic gates, then the polynomial hierarchy collapses to its second level.
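In standard complexity-theoretic notation, the Karp–Lipton theorem reads:

```latex
% Karp–Lipton theorem
\mathrm{NP} \subseteq \mathrm{P/poly} \;\Longrightarrow\; \mathrm{PH} = \Sigma_2^{p}
```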
Parallel algorithms
Proving that a program P has some property is relatively straightforward when every action in the program is uninterruptible (atomic). When actions can be interrupted, Lipton showed that a form of reduction and analysis still applies: construct a reduced program by treating a sequence of interruptible operations as one large uninterruptible action; the reduced program then has the property if and only if the original program does. Correctness proofs of a parallel system can therefore often be greatly simplified.
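As an illustration (a minimal sketch, not an example from Lipton's paper): under this reduction, a lock-protected critical section may be treated as a single atomic action, because a lock acquire commutes rightward, and a release leftward, past the actions of other threads.

```python
import threading

N = 100_000
counter = 0
lock = threading.Lock()

def worker():
    """Original program: each increment is several interruptible
    steps (read, add, write) guarded by a lock."""
    global counter
    for _ in range(N):
        with lock:             # acquire: a "right mover"
            tmp = counter      # read
            counter = tmp + 1  # compute and write
                               # release: a "left mover"

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# The reduction lets us reason about the locked region as if it were
# the single atomic action "counter += 1": the invariant
# "counter == 2 * N at termination" holds for the reduced program,
# hence for the original one.
assert counter == 2 * N
```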
Database security
Lipton studied and created database security models describing how and when to restrict the queries made by users of a database so that private or secret information is not leaked. Even when the user is restricted to read-only operations, secure information can be at risk. For example, querying a database of campaign donations could allow the user to discover the individual donations made to political candidates or organizations: given access to averages of the data and unrestricted query access, a user could exploit the properties of those averages to extract illicit information. Queries whose underlying record sets have a large "overlap" are what create the insecurity; by bounding the overlap and the number of queries, a secure database can be achieved.
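A minimal sketch of this kind of leak (the table and values are invented for illustration): two permitted average queries whose record sets differ in a single row reveal that row exactly.

```python
# Hypothetical donation table: donor -> amount (invented data).
donations = {"alice": 500, "bob": 1200, "carol": 400, "dave": 1900}

def avg(names):
    """A 'safe' read-only query that returns only an average."""
    return sum(donations[n] for n in names) / len(names)

# Two permitted average queries with heavy overlap...
a1 = avg(["alice", "bob", "carol", "dave"])  # includes dave
a2 = avg(["alice", "bob", "carol"])          # excludes dave

# ...let an attacker solve for dave's individual donation:
dave = 4 * a1 - 3 * a2
assert dave == donations["dave"]
```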
Online scheduling
Richard Lipton, with Andrew Tomkins, introduced a randomized online interval scheduling algorithm whose 2-size version is strongly competitive and whose k-size version achieves a competitive ratio of O(log Δ), where Δ is the ratio between the longest and shortest interval lengths.

When presented with an interval, the scheduler must immediately and irrevocably decide whether or not to include it in the schedule. The 2-size virtual algorithm is defined by how it reacts to the 1-intervals and k-intervals presented by the adversary (see the sketch below), and it is shown to be strongly competitive. The generalized k-size algorithm, which is similar to the 2-size algorithm, is then shown to be O(log Δ)-competitive.
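A minimal sketch of the 2-size idea (the coin-flip rule and the "virtual take" below are my reconstruction of the published algorithm's flavor, not text from this article): when a short interval arrives and the machine is idle, a fair coin decides whether to actually run it or to take it only "virtually", leaving the machine free for a long interval that may arrive during that window.

```python
import random

def two_size_virtual(events):
    """Hedged sketch of a randomized 2-size online interval scheduler.
    `events` is a list of (arrival_time, length) pairs with length
    1 or k; returns the total length actually scheduled."""
    busy_until = 0.0     # the machine is really running until here
    blocked_until = 0.0  # "virtually taken" window: idle but reserved
    total = 0.0
    for t, length in sorted(events):
        if t < busy_until:
            continue                    # really busy: must reject
        if length == 1:
            if t < blocked_until:
                continue                # reserved window: reject shorts
            if random.random() < 0.5:
                busy_until = t + 1      # heads: actually take it
                total += 1
            else:
                blocked_until = t + 1   # tails: take it only virtually
        else:
            busy_until = t + length     # long interval: take when idle
            total += length
    return total

print(two_size_virtual([(0, 1), (0.5, 10), (2, 1)]))
```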
Program checking
Lipton showed that randomized testing can be provably useful, provided the problem being tested satisfies certain properties. Proving the correctness of a program is one of the most important problems in computer science. Typically in randomized testing, attaining a 1/1000 chance of error requires running 1000 tests. Lipton showed, however, that if a problem has "easy" sub-parts, repeated black-box testing can attain an error rate of c^r, with c a constant less than 1 and r the number of tests. The probability of error therefore goes to zero exponentially fast as r grows.
This technique is useful to check the correctness of many types of problems.
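A classic example in this spirit (Freivalds' randomized check for matrix multiplication, shown here as an illustration rather than as Lipton's own construction): each random trial catches a wrong product with probability at least 1/2, so r independent trials drive the error below (1/2)^r.

```python
import random

def freivalds(A, B, C, trials=30):
    """Randomized check that A x B == C for n x n integer matrices.
    Each trial accepts a wrong C with probability <= 1/2, so `trials`
    independent repetitions err with probability <= 2 ** -trials."""
    n = len(A)
    for _ in range(trials):
        r = [random.randint(0, 1) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False  # a mismatch is definitive proof of error
    return True  # correct with probability >= 1 - 2 ** -trials

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert freivalds(A, B, [[19, 22], [43, 50]])      # true product
assert not freivalds(A, B, [[19, 22], [43, 51]])  # one wrong entry
```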
Games with simple strategies
In the area of game theory, more specifically non-cooperative games, Lipton, together with E. Markakis and A. Mehta, proved the existence of epsilon-equilibrium strategies whose support is logarithmic in the number of pure strategies. Furthermore, the payoffs of such strategies epsilon-approximate the payoffs of an exact Nash equilibrium. The limited (logarithmic) size of the support yields a natural quasi-polynomial algorithm for computing an epsilon-equilibrium.
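In symbols (a standard statement of the result; the exact constants are omitted): for a two-player game in which each player has n pure strategies, exhaustively searching over all small supports gives the quasi-polynomial bound.

```latex
% Support size of an epsilon-equilibrium and the resulting search time
k \;=\; O\!\left(\frac{\log n}{\epsilon^{2}}\right),
\qquad
\text{running time} \;=\; n^{O(\log n / \epsilon^{2})}
```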
Query size estimation
Lipton and J. Naughton presented an adaptive random sampling algorithm for estimating database query sizes, applicable to any query whose answer can be partitioned into disjoint subsets. In contrast with most sampling-based estimators, which fix the number of samples in advance, their algorithm chooses the number of samples adaptively based on the sizes of the samples seen so far, and it tends to keep the running time constant rather than the number of samples.
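A minimal sketch of the adaptive idea (the stopping rule below is a simplification assumed for illustration, not the exact rule of the Lipton–Naughton paper): keep sampling random partitions until the accumulated answer size crosses a threshold, so queries with large answers stop after fewer samples.

```python
import random

def adaptive_size_estimate(partition_sizes, threshold):
    """Estimate the total answer size of a query partitioned into
    len(partition_sizes) disjoint subsets: sample partitions uniformly
    with replacement until the sampled sizes sum past `threshold`,
    then scale the sample mean up to the full partition count."""
    n = len(partition_sizes)
    total, samples = 0, 0
    while total < threshold:
        total += random.choice(partition_sizes)  # "run" one partition
        samples += 1
    return n * total / samples

sizes = [random.randint(1, 100) for _ in range(10_000)]
print(sum(sizes), round(adaptive_size_estimate(sizes, threshold=50_000)))
```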
Formal verification of programs
DeMillo, Lipton and Perlis criticized the idea of formal verification of programs and argued that formal verifications in computer science will not play the same key role that proofs play in mathematics: mathematical proofs gain credibility through a social process of reading, checking, and discussion, and program verifications, being long, tedious, and rarely read, do not undergo such a process.
Multi-party protocols
Chandra, Furst and Lipton generalized the notion of two-party communication protocols to multi-party communication protocols. They proposed a model, now known as the "number on the forehead" model, in which each of a collection of processes has access to every input except its own, and the processes must communicate to compute a function of all the inputs.
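Formally (a standard description of the model; the notation is mine, not the article's): with k players and inputs x_1, ..., x_k, communication proceeds by broadcast to a shared blackboard, and the cost is the total number of bits written.

```latex
% Number-on-the-forehead model (standard formulation)
\text{player } i \text{ sees } (x_1,\dots,x_{i-1},\,x_{i+1},\dots,x_k),
\qquad
\mathrm{cost} \;=\; \text{bits broadcast to compute } f(x_1,\dots,x_k)
```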
Time/space SAT tradeoff
We have no way to prove that the Boolean satisfiability problem (often abbreviated SAT), which is NP-complete, requires exponential (or at least super-polynomial) time to solve (this is the famous P versus NP problem), or that it requires linear (or at least super-logarithmic) space. However, in the context of time-space tradeoffs, one can prove that SAT cannot be computed if constraints are applied to both time and space: L. Fortnow, Lipton, D. van Melkebeek, and A. Viglas proved that SAT cannot be computed by a Turing machine that takes at most O(n^1.1) steps and uses at most O(n^0.1) cells of its read-write tapes.
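In the standard notation for simultaneous time-space classes (DTISP is standard notation, though not used in this article), the result reads:

```latex
% SAT time-space lower bound (Fortnow, Lipton, van Melkebeek, Viglas)
\mathrm{SAT} \;\notin\; \mathrm{DTISP}\!\left(n^{1.1},\, n^{0.1}\right)
```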