Some paradoxes from mathematics and logic carry over automatically to computers, but have any paradoxes been discovered within computer science itself?
By paradoxes I mean counterintuitive results that look like contradictions.
Answers:
I find the fact that network flow is computable in polynomial time counterintuitive. At first glance it looks much harder than many NP-hard problems. Put another way, there are many results in CS where the running time needed to solve a problem is far better than you would expect.
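To make the surprise concrete, here is a minimal sketch of the Edmonds-Karp algorithm for maximum flow (augmenting along shortest residual paths), which runs in O(V E^2) time; the adjacency-matrix representation and function names are my own choices for illustration.

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest path in the
    # residual graph. Polynomial time, despite the problem's
    # superficially hard combinatorial look.
    n = len(cap)
    flow = [[0] * n for _ in range(n)]
    total = 0
    while True:
        # BFS for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and cap[u][v] - flow[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return total
        # find the bottleneck capacity along the path, then augment
        b = float('inf')
        v = t
        while v != s:
            u = parent[v]
            b = min(b, cap[u][v] - flow[u][v])
            v = u
        v = t
        while v != s:
            u = parent[v]
            flow[u][v] += b
            flow[v][u] -= b
            v = u
        total += b
```

For example, on a four-node network with edges 0→1 (capacity 3), 0→2 (2), 1→2 (1), 1→3 (2), 2→3 (3), the maximum s-t flow is 5.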
That upper bounds can imply lower bounds is one example of this; it came to my mind from both Ketan Mulmuley's GCT work and Ryan Williams' recent result, which again used an upper bound for CIRCUIT-SAT to prove a lower bound for NEXP in terms of ACC^0.
SAT has a polynomial-time algorithm only if P=NP. We don't know whether P=NP. However, I can write down an algorithm for SAT which is polynomial-time if P=NP is true. I don't know the correct reference for this, but the wikipedia page gives such an algorithm and credits Levin.
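A sketch of Levin's idea: dovetail over all programs, giving each one step per round, and check every candidate assignment with the polynomial-time verifier. If P = NP, some program in the enumeration finds satisfying assignments quickly, so the whole scheme is polynomial-time on satisfiable inputs. Here `searchers` is a hypothetical stand-in for the enumeration of all programs, and `brute_force` is one toy member of it; the names are mine.

```python
from itertools import count, product

def satisfies(cnf, assign):
    # cnf: list of clauses; literal v means x_v, -v means (not x_v), 1-indexed
    return all(any((lit > 0) == assign[abs(lit) - 1] for lit in clause)
               for clause in cnf)

def levin_search(cnf, n_vars, searchers):
    # Dovetail: each round, advance every searcher by one step and
    # verify any candidate assignment it emits.
    gens = [s(n_vars) for s in searchers]
    for _ in count():
        done = True
        for g in gens:
            try:
                cand = next(g)
                done = False
            except StopIteration:
                continue
            if cand is not None and satisfies(cnf, cand):
                return cand
        if done:
            return None  # all searchers exhausted (only possible for toy stand-ins)

def brute_force(n_vars):
    # one stand-in "program": enumerate all assignments
    for bits in product([False, True], repeat=n_vars):
        yield list(bits)
```

The point is that the dovetailing wrapper pays only a constant-factor overhead per searcher, so its running time inherits the asymptotics of the best searcher in the enumeration.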
Computability theory certainly trips up most students. A beautiful example with a high confusion rate is this:
Is the function f, with f(n) = 1 if the decimal expansion of π contains a run of at least n consecutive zeros and f(n) = 0 otherwise, computable?
The answer is yes; see the discussion here. Most people immediately try to construct the function using present knowledge of π. That cannot work, and it leads to a perceived paradox which is really just a subtlety.
One surprising and counterintuitive result is that IP = PSPACE, proved using arithmetization around 1990.
As Arora & Barak put it (p. 157) "We know that interaction alone does not give us any languages outside NP. We also suspect that randomization alone does not add significant power to computation. So how much power could the combination of randomization and interaction provide?"
Apparently quite a bit!
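The arithmetization trick at the heart of that proof is easy to demonstrate: lift a Boolean formula to a polynomial over a field so that the two agree on 0/1 inputs. The gate encodings below are the standard ones; the example formula is my own.

```python
# Standard arithmetization of Boolean gates as polynomials:
def NOT(x):        # not x   ->  1 - x
    return 1 - x

def AND(x, y):     # x and y ->  x * y
    return x * y

def OR(x, y):      # x or y  ->  x + y - x*y
    return x + y - x * y

# An example formula, (a OR b) AND (NOT c), in both worlds:
def phi_bool(a, b, c):
    return (a or b) and (not c)

def phi_poly(a, b, c):
    return AND(OR(a, b), NOT(c))
```

On 0/1 inputs the polynomial matches the formula exactly, but unlike the formula it can then be evaluated at arbitrary field elements, which is what lets the prover and verifier in the IP = PSPACE protocol run the sum-check argument.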
How about Martin Escardo's publications showing that there are infinite sets that can be exhaustively searched over in finite time? See Escardo's guest blog post on Andrej Bauer's blog, for instance, on "Seemingly impossible functional programs".
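A toy Python rendition of the search operator behind those programs (infinite 0/1 sequences represented as functions from indices to bits): this is only a sketch of the idea, in the style of Berger's search algorithm, and it terminates whenever the predicate is total and inspects only finitely many bits of its argument.

```python
def cons(b, seq):
    # prepend bit b to the infinite sequence seq
    return lambda n: b if n == 0 else seq(n - 1)

def find(p):
    # Lazily build an infinite 0/1 sequence satisfying p, if any exists.
    # Each bit is only computed on demand, which is what makes the
    # recursion well-founded for finitely-inspecting predicates.
    state = {}
    def seq(n):
        if 'bit' not in state:
            if p(cons(0, find(lambda a: p(cons(0, a))))):
                state['bit'] = 0
            else:
                state['bit'] = 1
            state['rest'] = find(lambda a: p(cons(state['bit'], a)))
        return state['bit'] if n == 0 else state['rest'](n - 1)
    return seq

def forsome(p):
    # exhaustively decide whether SOME infinite 0/1 sequence satisfies p
    return p(find(p))

def forevery(p):
    return not forsome(lambda a: not p(a))
```

The "impossible" part is that `forsome` and `forevery` decide quantification over an uncountable set in finite time; the catch is that any total predicate on Cantor space can only depend on finitely many bits.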
The Recursion Theorem certainly seems counter-intuitive the first time you see it. Essentially it says that when you are describing a Turing Machine, you can assume it has access to its own description. In other words, I can build Turing Machines like:
TM M accepts n iff n is a multiple of the number of times "1" appears in the string representation of M.
TM N takes in a number n and outputs n copies of itself.
Note that the "string representation" here is not referring to the informal text description, but rather an encoding.
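In an ordinary programming language the same trick gives a quine. Here is a hypothetical Python analogue of machine N above: the definition carries a template of itself, substitutes the template into itself (Kleene's recursion-theorem trick in miniature), and can therefore output n copies of its own three-line source.

```python
# The template below reproduces the def exactly; n_copies(n) returns
# n concatenated copies of the function's own source code.
def n_copies(n):
    tmpl = "def n_copies(n):\n    tmpl = {!r}\n    return tmpl.format(tmpl) * n\n"
    return tmpl.format(tmpl) * n
```

The output really is a fixed point: exec-ing `n_copies(1)` defines a function that again prints itself.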
Proving information-theoretic results based on complexity-theoretic assumptions is another counter-intuitive result. For instance, Bellare et al. in their paper The (True) Complexity of Statistical Zero Knowledge constructively proved that, under the certified discrete log assumption, any language that admits honest-verifier statistical zero knowledge also admits statistical zero knowledge.
The result was so odd that it surprised the authors themselves. They point this out several times; for instance, in the introduction:
Given that statistical zero-knowledge is a computationally independent notion, it is somewhat strange that properties about it could be proved under a computational intractability assumption.
PS: A stronger result was later proved unconditionally by Okamoto (On Relationships between Statistical Zero-Knowledge Proofs).
Since the above result involves a lot of cryptographic jargon, I'll try to informally define each term.
How about the fact that computing the permanent is #P-complete, while computing the determinant (a seemingly much weirder operation) happens to be in the class NC?
This seems rather strange - it did not have to be that way (or maybe it did ;-) )
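The contrast is especially striking when you write down the two Leibniz-style expansions side by side: they differ only in a sign factor. A small brute-force illustration (exponential-time by design, just to exhibit the formulas; the helper names are mine):

```python
from itertools import permutations

def perm_sign(p):
    # parity of a permutation, via its inversion count
    inv = sum(p[i] > p[j] for i in range(len(p)) for j in range(i + 1, len(p)))
    return -1 if inv % 2 else 1

def det_and_perm(M):
    # Leibniz expansions: det sums sign(p) * prod, perm sums just prod.
    # One sign factor separates an NC problem from a #P-complete one.
    n = len(M)
    det = perm = 0
    for p in permutations(range(n)):
        prod = 1
        for i in range(n):
            prod *= M[i][p[i]]
        det += perm_sign(p) * prod
        perm += prod
    return det, perm
```

For [[1, 2], [3, 4]] this gives determinant 1*4 - 2*3 = -2 and permanent 1*4 + 2*3 = 10.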
The linear programming problem is solvable in (weakly) polynomial time. This seems very surprising: why should we be able to find one optimal vertex among the exponentially many vertices of a high-dimensional polytope? Why should we be able to solve a problem that is so ridiculously expressive?
Not to mention all the exponential-size linear programs which we can solve using the ellipsoid method with separation oracles, and other methods (adding variables, etc.). For example, it's amazing that an LP with an exponential number of variables, such as the Karmarkar-Karp relaxation of Bin Packing, can be efficiently approximated.
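A tiny worked instance, assuming SciPy is available (the problem data is my own toy example): maximize x + y subject to x + 2y <= 4 and 3x + y <= 6 with x, y >= 0. The feasible region has only a handful of vertices here, but the same call scales to LPs whose polytopes have astronomically many.

```python
from scipy.optimize import linprog

# linprog minimizes, so negate the objective to maximize x + y.
# Default bounds are x, y >= 0.
res = linprog(c=[-1, -1], A_ub=[[1, 2], [3, 1]], b_ub=[4, 6])
# optimum at the vertex (1.6, 1.2), objective value 2.8
```

Interior-point and ellipsoid methods reach such a vertex without ever enumerating the vertex set, which is exactly the surprise the answer above is pointing at.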
Whenever I teach automata, I always ask my students if they find it surprising that nondeterminism doesn't add any power to finite-state automata (i.e., that for every NFA there is an equivalent -- possibly much larger -- DFA). About half the class reports being surprised, so there you go. [I myself have lost the "feel" for what is surprising at the intro level.]
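The standard proof is the subset construction, which is short enough to show in full; a minimal sketch (representation choices and names are mine):

```python
from itertools import chain

def nfa_to_dfa(alphabet, delta, start, accept):
    # Subset construction: each DFA state is a set of NFA states.
    # delta maps (nfa_state, symbol) -> set of nfa_states.
    start_set = frozenset([start])
    dfa_delta = {}
    worklist = [start_set]
    seen = {start_set}
    while worklist:
        S = worklist.pop()
        for a in alphabet:
            T = frozenset(chain.from_iterable(delta.get((q, a), ()) for q in S))
            dfa_delta[(S, a)] = T
            if T not in seen:
                seen.add(T)
                worklist.append(T)
    dfa_accept = {S for S in seen if S & accept}
    return seen, dfa_delta, start_set, dfa_accept

def run_dfa(dfa_delta, start_set, dfa_accept, w):
    S = start_set
    for a in w:
        S = dfa_delta[(S, a)]
    return S in dfa_accept
```

The "possibly much larger" caveat is real: for the language "the k-th symbol from the end is 1", the NFA has k+1 states while every DFA needs 2^k.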
Students definitely find it surprising at first that the halting problem is undecidable. I challenge them to produce an algorithm that determines whether a given Java program will halt, and they typically try to search for endless while loops. As soon as I show them ways of constructing loops whose termination is far from obvious, the surprise factor goes away.
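The classic loop of this kind is the Collatz iteration: whether it terminates for every starting value is a famous open problem, which makes the "look for endless while loops" strategy hopeless. A small illustration:

```python
def collatz_steps(n):
    # Iterate n -> n/2 (if even) or n -> 3n+1 (if odd) until reaching 1.
    # Nobody knows whether this loop halts for every n >= 1.
    steps = 0
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        steps += 1
    return steps
```

For instance, starting from 6 the loop runs 6 → 3 → 10 → 5 → 16 → 8 → 4 → 2 → 1, taking 8 steps, while nearby starting values behave wildly differently.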
I have found the paper A simple public-key cryptosystem with a double trapdoor decryption mechanism and its applications paradoxical, because it is an adaptively chosen-ciphertext secure scheme that is also homomorphic; homomorphic schemes are malleable, which one would normally expect to rule out chosen-ciphertext security.