Theorems that impeded progress
It may be that certain theorems, when proved true, counterintuitively retard progress in certain domains. Lloyd Trefethen provides two examples:
- Faber's Theorem on polynomial interpolation: interpreted as saying that polynomial interpolants are useless, but they are quite useful if the function is Lipschitz continuous (a numerical sketch follows the citation below).
- Squire's Theorem on hydrodynamic instability: applies in the limit $t \to \infty$, but the (nondimensional) $t$ is rarely more than $100$.

Trefethen, Lloyd N. "Inverse Yogiisms." Notices of the American Mathematical Society 63, no. 11 (2016). Also: The Best Writing on Mathematics 2017 6 (2017): 28. Google Books link.
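Here is a minimal numerical sketch of the Faber point (my own illustration, not Trefethen's code; it assumes NumPy and SciPy are available): interpolating the Lipschitz function $|x|$ at Chebyshev points, the maximum error shrinks steadily as the degree grows, even though $|x|$ is not smooth.

```python
# Minimal sketch: Chebyshev interpolation of the Lipschitz function f(x) = |x|.
# Faber's theorem rules out a single node array that works for *all* continuous
# functions; it does not say interpolation fails for a function like this one.
import numpy as np
from scipy.interpolate import BarycentricInterpolator

f = np.abs                      # Lipschitz continuous, not differentiable at 0
xx = np.linspace(-1, 1, 5001)   # fine grid on which to measure the error

for n in (8, 32, 128, 512):
    k = np.arange(n + 1)
    nodes = np.cos((2 * k + 1) * np.pi / (2 * n + 2))   # Chebyshev points (1st kind)
    p = BarycentricInterpolator(nodes, f(nodes))
    err = np.max(np.abs(p(xx) - f(xx)))
    print(f"degree {n:4d}: max error ~ {err:.2e}")
# The error decays roughly like 1/n (up to a logarithmic factor), i.e.
# polynomial interpolants are far from useless here -- Trefethen's point.
```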
In my own experience, I have witnessed several negative-result theorems proved in

Marvin Minsky and Seymour A. Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, 1969.

impede progress in neural-net research for more than a decade.[1]

Q. What are other examples of theorems whose (correct) proofs (possibly temporarily) suppressed research advancement in mathematical subfields?

[1] Olazaran, Mikel. "A sociological study of the official history of the perceptrons controversy." Social Studies of Science 26, no. 3 (1996): 611-659. Abstract: "[...] I devote particular attention to the proofs and arguments of Minsky and Papert, which were interpreted as showing that further progress in neural nets was not possible, and that this approach to AI had to be abandoned. [...]" RG link.
ho.history-overview big-picture
– Joseph O'Rourke (community wiki; edited Apr 5 at 21:02)
I remember reading, I believe in some other MO post, about how, whereas Donaldson's work on smooth 4-manifolds launched a vibrant program of research with invariants coming from physics, Freedman's contemporaneous work on topological 4-manifolds essentially ended the study of topological 4-manifolds. But maybe that's not what you mean by "impeded progress".
– Sam Hopkins, Apr 4 at 23:38
@SamHopkins: I am seeking cases of misleading impediment, as opposed to the closing off of a line of investigation. Certainly when a line has terminated, that's it. But there are also misleading endings, which are not terminations after all.
– Joseph O'Rourke, Apr 4 at 23:51
This comment is me thinking out loud about the mechanism by which a theorem might impede or spur progress. I think we carry around beliefs about the likelihood that unproven theorems are true or false, and beliefs about the difficulty of achieving proofs of those theorems. When the truth or falsehood of a theorem becomes known, one updates one's beliefs about those theorems that remain unproven. So to spur or impede progress, a new theorem should dramatically bias those estimates, thereby causing time/energy to be wasted. (I am not at all certain that I am correct here.)
– Neal, Apr 5 at 2:38
Einstein was supposedly upset when Gödel showed the existence of closed timelike curves in general relativity, and I remember hearing something to the effect that he lost faith in the theory afterward; I suppose that, while useful, he did not think it the real description of the universe. I don't know that he did any more work in that area. Likewise with Einstein and quantum mechanics: EPR, classical theories at small scales, Bell's theorem/inequalities; also Hilbert's Nullstellensatz and Gödel's incompleteness theorems.
– marshal craft, Apr 5 at 5:58
I am unable to answer, so I'll comment instead. The works by Kurt Hornik, especially the 1991 paper in Neural Networks, helped put multilayered networks to sleep for over a decade because they showed that one hidden layer is "enough". Multiple hidden layers were hardly considered after that, and people were actively discouraged from using more than one hidden layer (at least in my experience). Other methods took over, e.g. support vector machines, Gaussian processes, etc., until essentially Hinton's reactivation of the field in 2006 via his work on autoassociators.
– Captain Emacs, Apr 6 at 12:30
15 Answers
I don't know the history, but I've heard it said that the realization that higher homotopy groups are abelian led to people thinking the notion was useless for some time.
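For context (an addition of mine, not a claim about the historical reasoning): the standard modern explanation is the Eckmann-Hilton argument. For $n \ge 2$, the set $\pi_n(X)$ carries two compositions, concatenation in two different coordinates, sharing a unit and satisfying the interchange law $(p\cdot q)*(r\cdot s)=(p*r)\cdot(q*s)$; any two such operations necessarily coincide and are commutative:
$$a * b = (a\cdot 1)*(1\cdot b) = (a*1)\cdot(1*b) = a\cdot b, \qquad a\cdot b = (1*a)\cdot(b*1) = (1\cdot b)*(a\cdot 1) = b*a.$$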
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
– Joseph O'Rourke, Apr 4 at 23:54
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
– Sam Hopkins, Apr 4 at 23:57
I'm not an expert, but as far as I know one of the most important results in information theory, the Shannon sampling theorem, was pretty limiting. Many applied works were thought to have errors in them, because it seemed that they were violating this theorem, until compressed sensing came along.
A quote from here:
In other fields, such as magnetic resonance imaging, researchers also found that they could “undersample” the data and still get good results. At scientific meetings, Donoho says, they always encountered skepticism because they were trying to do something that was supposed to be impossible. In retrospect, he says that they needed a sort of mathematical “certificate,” a stamp of approval that would guarantee when random sampling works.
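As a concrete companion (my own illustrative sketch, not from the linked article; it assumes NumPy, and the sizes and the regularization weight `lam` are arbitrary choices): a 10-sparse signal of length 400 can be recovered from only 120 random linear measurements by iterative soft-thresholding (ISTA) for the $\ell_1$-regularized least-squares problem.

```python
# Minimal sketch: recover a sparse signal from far fewer random measurements
# than its length by solving  min_x 0.5*||A x - y||^2 + lam*||x||_1  with ISTA.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 120, 10                 # signal length, #measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = A @ x_true                                  # m << n "undersampled" data

lam = 0.01
L = np.linalg.norm(A, 2) ** 2                   # Lipschitz constant of the gradient
x = np.zeros(n)
for _ in range(5000):                           # ISTA iterations
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold

# With this much oversampling relative to the sparsity, the error is typically small.
print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```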
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood of "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate", because it tacitly suggests that sparse methods can be applied in non-random samples.
– R Hahn, Apr 5 at 16:01
I have been told that Thurston's work on foliations (for example: Thurston, W. P., Existence of codimension-one foliations, Ann. of Math. (2) 104 (1976), no. 2, 249–268) essentially ended the subject for some time, even though there was still much work to be done.
Here is a quote from his On Proof and Progress in Mathematics:
"First I will discuss briefly the theory of foliations, which was my first subject, starting when I was a graduate student. (It doesn't matter here whether you know what foliations are.) At that time, foliations had become a big center of attention among geometric topologists, dynamical systems people, and differential geometers. I fairly rapidly proved some dramatic theorems. I proved a classification theorem for foliations, giving a necessary and sufficient condition for a manifold to admit a foliation. I proved a number of other significant theorems. I wrote respectable papers and published at least the most important theorems. It was hard to find the time to write to keep up with what I could prove, and I built up a backlog. An interesting phenomenon occurred. Within a couple of years, a dramatic evacuation of the field started to take place. I heard from a number of mathematicians that they were giving or receiving advice not to go into foliations—they were saying that Thurston was cleaning it out. People told me (not as a complaint, but as a compliment) that I was killing the field. Graduate students stopped studying foliations, and fairly soon, I turned to other interests as well. I do not think that the evacuation occurred because the territory was intellectually exhausted—there were (and still are) many interesting questions that remain and that are probably approachable."
These mathematicians were afraid to compete with Thurston in foliations, particularly given his trove of unpublished results. Meanwhile the examples in the post show mathematicians confidently drawing the wrong intuitions from others' results. So I find Thurston an interesting example of a different phenomenon.
– Matt F., Apr 5 at 3:06
I would vote for the classification of finite simple groups as an example of a (hopefully correctly proved) theorem which impeded progress in the field since it was announced to be proved.
A classical example is Hilbert's theorems in invariant theory, which stopped constructive invariant theory for almost 100 years. Here one should quote Rota: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics."
Which progress should have been made, but wasn't, because of CFSG? Of course you don't have to state a specific theorem you think would have been proved, but some research directions...
– Will Sawin, Apr 5 at 21:55
CFSG was being rushed, with considerable funding from the NSF, and a somewhat premature announcement of completion trashed the desire to work on the topic, and even made people jobless; see e.g. en.wikipedia.org/wiki/… When I was starting to do maths over 30 years ago, there was a general feeling that it's "done"; how many people below retirement age now understand the details of the CFSG proof?
– Dima Pasechnik, Apr 5 at 22:35
So I guess what this implies is that the main research area that the proof of the classification of finite simple groups impeded progress on was the classification of finite simple groups itself.
– Will Sawin, Apr 6 at 14:14
@WillSawin: my impression is that it significantly slowed progress in finite group theory in general. There was, and still is, much to be done regarding group cohomology and modular representation theory. The finite simple groups are the building blocks, but it remains to study how they can be glued together.
– Joshua Grochow, Apr 7 at 2:59
@WillSawin - yes, CFSG was being sold, in particular to funding bodies, as the holy grail of the theory of finite groups, as if the rest of the field were merely meant to help it along. Perhaps it's the first example of an overhyped pure maths result. :-)
– Dima Pasechnik, Apr 7 at 18:24
Work on neural networks certainly fell out of favor following the publications of Minsky and Papert in 1966-67. I have only skimmed portions of the sociology paper referenced in the question, but other circumstances suggest that the topic might have diminished in popularity even without their results.
Rosenblatt's perceptrons had certainly created a lot of enthusiasm, perhaps the first big wave of excitement about AI (later followed by the 1980s excitement about rule-based "expert systems" and the current excitement about neural-net and other machine-learning techniques). One story involves a conference report on the development of a perceptron that could accurately detect the presence of army tanks in photographs of fields and forests -- the inputs involved digitizing each photograph into 16 pixels.
Much of the enthusiasm around perceptrons stemmed from convergence theorems guaranteeing that if a perceptron could decide some question, basic learning procedures would converge on suitable network weights. These theorems all played off the linear threshold structure of perceptrons. I would guess that many perceptron publications amounted to retellings of some standard result about linear transformations in the perceptron setting.
Then, Minsky and Papert's major paper aimed at mathematicians appeared, bearing the title "Linearly unrecognizable patterns". This title is both accurate and misleading in its own right.
The title is accurate because the primary result was that some patterns are not recognizable by linear threshold machines. But obviously, this result alone does not doom neural networks. If linear transformations are not adequate to characterize some patterns of interest, how about polynomials? Trigonometric and exponential functions? There was no lack of alternatives available to study.
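To illustrate the kind of pattern at issue (my own example, not taken from Minsky and Papert's text): the XOR/parity pattern on two inputs is not computable by any single linear threshold unit, but a two-layer threshold network handles it easily. A minimal sketch in Python:

```python
# No single linear threshold unit computes XOR, but a two-layer network does.
import itertools
import numpy as np

def step(z):
    return (z >= 0).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
xor = np.array([0, 1, 1, 0])

# Two-layer network: XOR(x1, x2) = AND(OR(x1, x2), NAND(x1, x2)),
# written out with explicit integer weights and thresholds.
def two_layer_xor(x):
    h1 = step(x @ np.array([1, 1]) - 1)     # OR:   x1 + x2 >= 1
    h2 = step(-(x @ np.array([1, 1])) + 1)  # NAND: x1 + x2 <= 1
    return step(h1 + h2 - 2)                # AND:  h1 + h2 >= 2

print("two-layer output:", two_layer_xor(X), "target:", xor)

# Brute-force check that no single unit step(w1*x1 + w2*x2 + b) with small
# integer weights reproduces XOR (the full impossibility proof is a short
# exercise with the four defining inequalities).
found = any(
    np.array_equal(step(X @ np.array([w1, w2]) + b), xor)
    for w1, w2, b in itertools.product(range(-3, 4), repeat=3)
)
print("single threshold unit found for XOR:", found)   # -> False
```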
The title is misleading in that it makes no mention of Minsky and Papert's second main result: that even when considering only linearly recognizable patterns, the learned weights would require astronomical precision, and hence astronomical amounts of data to determine.
So neural network researchers faced two problems:
1. Computers were laughably small compared to machines today, and the amount of data one could obtain was tiny compared to the vast quantities available today. For decades, lack of sufficient data, and of machines capable of manipulating this data, impeded many parts of AI, both symbolic and nonsymbolic.
2. Neural network methods were not explicable. Minsky and others championed heuristic and symbolic methods, whose answers and actions can be explained in terms that mean something to people. In contrast, neural networks operated as black boxes: systems trained to provide an answer, but incapable of providing any explanation meaningful to people. This limitation persists to this day. For decades, it provided an easy retort to neural-net proponents, for in many fields, such as medical diagnosis, no one will follow machine recommendations without reasonable explanations.
Circumstances have changed since then. Hinton and others persisted in developing techniques for multilayer networks, investigating ideas based on nonlinear methods from classical physics. Processing speeds and data availability have increased. Also, recent solutions of interesting problems, such as nontrivial Go, have given ample reason to consider using neural networks even without explicability.
So it seems to me that the inattention to neural network research had a lot to do with waiting for computational progress, and that the unrecognizability result on its own does not provide a sufficient explanation.
Thanks for the history!
– Matt F., Apr 5 at 22:48
For a follow-up on neural networks: the universal approximation theorem made many folks think that a one-layer neural network is all they need, leading to many suboptimal practical results in applications of neural networks.
– liori, Apr 6 at 16:42
Here I quote from the introduction to "Shelah’s pcf theory and its applications" by Burke and Magidor (https://core.ac.uk/download/pdf/82500424.pdf):
Cardinal arithmetic seems to be one of the central topics of set theory. (We
mean mainly cardinal exponentiation, the other operations being trivial.)
However, the independence results obtained by Cohen’s forcing technique
(especially Easton’s theorem: see below) showed that many of the open problems
in cardinal arithmetic are independent of the axioms of ZFC (Zermelo-Fraenkel
set theory with the axiom of choice). It appeared, in the late sixties, that cardinal arithmetic had become trivial in the sense that any potential theorem seemed to be refutable by the construction of a model of set theory which violated it.
In particular, Easton’s theorem showed that essentially any cardinal
arithmetic ‘behavior’ satisfying some obvious requirements can be realized as the
behavior of the power function at regular cardinals. [...]
The general consensus among set theorists was that the restriction to regular cardinals was due to a weakness in the proof and that a slight improvement in the methods for constructing models would show that, even for powers of singular cardinals, there are no deep theorems provable in ZFC.
They go on to explain how Shelah's pcf theory (and its precursors) in fact show that there are many nontrivial theorems about inequalities of cardinals provable in ZFC.
So arguably the earlier independence results impeded the discovery of these provable inequalities, although I don't know how strongly anyone would argue that.
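To make "nontrivial theorems about inequalities of cardinals" concrete (an illustration I am adding, not part of Burke and Magidor's introduction), the best-known such result is Shelah's bound: if $\aleph_\omega$ is a strong limit cardinal, then
$$2^{\aleph_\omega} < \aleph_{\omega_4}.$$
This is exactly the kind of absolute ZFC constraint on the power of a singular cardinal that the post-Easton consensus suggested should not exist.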
An exemplary instance of my query. Possibly Cohen's forcing was the "culprit" in jumping so far that there was a natural retraction?
– Joseph O'Rourke, Apr 5 at 12:42
Did the over-interpretation of the independence results really impede progress? My impression is that the later cardinal arithmetic results wouldn't have been found any sooner if people had just continued in the earlier lines of research — they have a rather different flavour, and wouldn't have been found without a new body of theory, informed by the independence results, guiding the way to them.
– Peter LeFanu Lumsdaine, Apr 5 at 14:15
@PeterLeFanuLumsdaine: sure, you are probably right, thus the disclaimer ("I don't know how strongly anyone would argue that") at the end.
– Sam Hopkins, Apr 5 at 15:51
I'm not sure if this counts, as it's not clear it really set the field back.
That being said, in 1951 H. Hopf showed that the round sphere is the only immersed constant mean curvature (CMC) surface that is closed and of genus $0$ in $\mathbb{R}^3$. This led him to conjecture that the round sphere is actually the only closed CMC surface in $\mathbb{R}^3$. A few years later, in 1956, A. D. Alexandrov provided additional evidence for this conjecture by proving that the round sphere is the only embedded closed CMC surface. My understanding is that several incorrect proofs of Hopf's conjecture were circulated in the subsequent decades.
It wasn't until 1983 that H. Wente showed that the conjecture was actually false by constructing an immersed CMC torus. There are now many examples of closed CMC immersions.
Here is a picture of Wente's construction: researchgate.net/figure/…
– Neal, Apr 5 at 18:42
The proof that a particular computational problem is NP-complete can cause people to stop trying to make theoretical progress on it, instead focusing all their attention on heuristics that have only empirical support. In fact, one can often continue to make theoretical progress by designing approximation algorithms with provable performance guarantees, or using concepts from parameterized complexity to devise algorithms that run in polynomial time when some parameter is fixed.
This is not quite like the other examples since it's not a single theorem, but a whole class of theorems, and part of the problem is that the community that is interested in hard computational problems is very large, spanning many scientific fields. Theoretical computer scientists do active research into approximation algorithms and parameterized algorithms, but the point is that the possibility of theoretical progress on NP-complete problems has not been disseminated and socialized among everyone who might benefit from that knowledge.
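To make "approximation algorithms with provable performance guarantees" concrete (my own illustrative sketch, not part of the original answer), here is the classic maximal-matching algorithm for Vertex Cover, an NP-complete problem: greedily take both endpoints of any uncovered edge. The cover it returns is at most twice the size of an optimal one.

```python
# Minimal sketch: 2-approximation for Vertex Cover via a maximal matching.
# The edges whose endpoints get added form a matching, and every cover must
# contain at least one endpoint of each matched edge, so |cover| <= 2 * OPT.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered:
            cover.add(u)                        # take *both* endpoints
            cover.add(v)
    return cover

# Toy usage example on a small graph (a 4-cycle plus the chord 1-3).
edges = [(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)]
print(vertex_cover_2approx(edges))   # -> {1, 2, 3, 4}; an optimum here is {1, 3}
```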
Like RBega2, I hesitate to say that this is definitely an example, but the paper "Natural Proofs" by Razborov and Rudich, which showed that certain kinds of proof techniques would be insufficient to prove $P \ne NP$, said:
We do not conclude that researchers should give up on proving serious lower bounds. Quite to the contrary, by classifying a large number of techniques that are unable to do the job we hope to focus research in a more fruitful direction. Pessimism will only be warranted if a long period of time passes without the discovery of a non-naturalizing lower bound proof.
In practice, RR's result seems to have been taken as a strong negative result, and may have taken a lot of the wind out of the sails of research into lower bounds. However, I hesitate to claim that people misinterpreted their paper, since one can plausibly argue that the real reason for lack of progress post-RR was simply the difficulty of proving lower bounds.
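For readers unfamiliar with the terminology, here is a rough paraphrase I am adding (consult the paper for the precise definitions): a property of Boolean functions is natural if it is constructive (decidable in time polynomial in the truth-table size $2^n$) and large (shared by at least a $2^{-O(n)}$ fraction of all $n$-variable functions), and it is useful against $\mathrm{P/poly}$ if every family of functions possessing it requires superpolynomial circuit size. The barrier can then be summarized as
$$\text{sufficiently strong pseudorandom generators exist} \;\Longrightarrow\; \text{no natural property is useful against } \mathrm{P/poly},$$
and most known lower-bound techniques yield natural properties in this sense.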
Tverberg's theorem (https://en.wikipedia.org/wiki/Tverberg%27s_theorem) says that for any $d$ and $r$, any set of $(d+1)(r-1)+1$ points in $d$-dimensional Euclidean space can be partitioned into $r$ subsets such that the convex hulls of the subsets all contain a common point. It is a generalization of Radon's theorem.
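For a concrete instance (my own added example): take $d=2$ and $r=2$, so $(d+1)(r-1)+1 = 4$ points in the plane; this is Radon's theorem. For the four points $(0,0), (1,0), (0,1), (1,1)$, the partition
$$\{(0,0),(1,1)\} \;\cup\; \{(1,0),(0,1)\}$$
works: the two convex hulls are the diagonals of the unit square, which meet at $(\tfrac12,\tfrac12)$.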
The "topological Tverberg theorem" (really a conjecture) is the assertion that for any $d$ and $r$, and any continuous map $fcolon deltaDelta_{(d+1)(r-1)+1}to mathbb{R}^d$ from the boundary of the $(d+1)(r-1)+1$-dimensional simplex to $mathbb{R}^d$, there exist a collection $Delta^1,ldots,Delta^rsubseteqDelta_{(d+1)(r-1)+1}$ of complementary subsimplices such that $f(Delta^1)cap cdots cap f(Delta^r)neq varnothing$. (Tverberg theorem's theorem is the same but for $f$ linear.)
In [BSS] this was proven in the case of $r$ prime, and in the famous unpublished paper [M. Ozaydin, Equivariant maps for the symmetric group, http://digital.library.wisc.edu/1793/63829] it was proven for $r$ a power of a prime. These proofs use some pretty advanced tools from algebraic topology (equivariant cohomology, et cetera).
I don't know for sure, but my impression is that, at least for quite a while, people believed that the restriction to $r$ a power of a prime was just an artifact of the tools used, and that with more work the theorem could be proved for all $r$. But in 2015, F. Frick [https://arxiv.org/abs/1502.00947], building on work of Mabillard and Wagner [https://arxiv.org/abs/1508.02349], showed that the conjecture is false for any $r$ not a power of a prime.
Bárány, I.; Shlosman, S. B.; Szűcs, A., On a topological generalization of a theorem of Tverberg, J. Lond. Math. Soc., II. Ser. 23, 158-164 (1981). ZBL0453.55003.
I don't see how this is an example of the theorem impeding progress - it seems like more of a case where the theorem didn't cause as much progress as it should have. Wouldn't it be even harder to come up with the counterexample for non-prime-powers if the theorem didn't exist to focus attention on that case?
– Will Sawin, Apr 5 at 21:54
@WillSawin My interpretation of this answer is that the theorems impeded progress not by making the counterexample harder to find but by dissuading people from even looking for a counterexample.
– Andreas Blass, Apr 6 at 0:11
In 1991, Wald and Iyer showed that there exist foliations of the Schwarzschild spacetime that do not contain apparent horizons, yet get arbitrarily close to the singularity (https://journals.aps.org/prd/abstract/10.1103/PhysRevD.44.R3719). This led many people to think that apparent horizons are unreliable tools for detecting gravitational collapse and that they are inferior to event horizons, and interest in them waned.
Starting in 2005, numerical results demonstrated that apparent horizons are actually quite reliable and useful in practice (https://link.springer.com/article/10.12942/lrr-2007-3), prompting further theoretical studies. Today we know that generalizations of apparent horizons still exist in these foliations, that they form smooth world tubes, and that e.g. black hole thermodynamics can be extended to such world tubes (called "dynamical horizons", https://en.wikipedia.org/wiki/Dynamical_horizon).
This is an example taken from physics, but I think it fits your question pretty well.
In his 1932 book Mathematische Grundlagen der Quantenmechanik, von Neumann presented a "proof" that was widely (and erroneously) believed to show that all hidden-variable theories are impossible. More on this can be found here: https://arxiv.org/abs/1006.0499. This was supposed to seal the deal in favor of Bohr in the Einstein-Bohr controversy, so to speak. I believe it is fair to say that for a long time such foundational problems fell outside the mainstream interests of physics and only outsiders, so to speak, studied the problem.
The "proof" was in fact wrong in a sense, and Bohm (in 1952) was able to come up with his pilot-wave theory, which has the same predictive power as quantum mechanics but is basically classical. It is in fact a hidden-variable theory.
The resolution of all this came more than 30 years later, in 1964, when Bell showed that quantum mechanics does not satisfy local realism.
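As a pointer to what "does not satisfy local realism" means quantitatively (a note I am adding; the inequality below is the CHSH form due to Clauser, Horne, Shimony and Holt rather than Bell's original one): for any local hidden-variable theory, the combination of correlators
$$S = E(a,b) - E(a,b') + E(a',b) + E(a',b')$$
satisfies $|S| \le 2$, whereas quantum mechanics predicts (and experiments confirm) values up to $2\sqrt{2}$ for suitable measurement settings on an entangled pair.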
The Mermin-Wagner theorem (1966) proved that two-dimensional models with a continuous symmetry have no finite-temperature transition to a phase with long-range order via spontaneous breaking of this symmetry.
My understanding is that this was generally interpreted as proving that "two-dimensional models with continuous symmetries do not have phase transitions". I do not know for sure that this impeded progress, but I suspect that, since it appeared to be a no-go theorem, it would have discouraged researchers from looking for examples of phase transitions.
Remarkably, not too long after the theorem was published, Kosterlitz and Thouless (1973) showed that the two-dimensional XY model does indeed have a phase transition with diverging correlation length, but no long-range order. The mechanism was completely different (due to topological effects), and so managed to avoid the apparently hard constraints imposed by the theorem.
I am by no means an expert on the subject, but I've been told that Hilbert's work on invariant theory pretty much brought the subject to a halt for quite some time.
See Dima's answer, which preceded yours by just a few hours.
– Gerry Myerson, Apr 5 at 21:46
The Abel-Ruffini theorem could have been a "dead-end" theorem, as could the fundamental theorem of Galois theory. Similarly, Gödel's incompleteness theorems, and perhaps even his completeness theorem, could be seen as dead-end theorems.
Many such theorems would (at first sight) appear to be the "last chapter" in a certain "book" which began with a question and thus ended when that question was answered (either positively or negatively).
However, in many cases, a closer examination led to deeper questions and answers. Galois' work, and Gödel's as well, had the additional problem of being difficult to understand for the average mathematician of their times. The "in a nutshell" summary that the latter received must have only accentuated the feeling that these topics had been closed.
I think the inclusion of Gödel's results is absurdly off-base.
– Andrés E. Caicedo, Apr 7 at 4:58
I agree with @AndrésE.Caicedo - especially re: the completeness theorem: what exactly would that be a dead-end for? On the contrary, it opened things up by showing how proof theory and model theory could be applied to each other. (I could see an argument for the incompleteness theorem being a dead-end theorem in an alternate universe, on the other hand, although even that I'm skeptical of: it dead-ended a program, but that's not the same thing as dead-ending a subject.)
– Noah Schweber, Apr 7 at 14:45
@Noah Because of the incompleteness theorem we have provability logic, degrees of interpretability, the consistency strength hierarchy, ...
– Andrés E. Caicedo, Apr 7 at 14:59
@AndrésE.Caicedo Of course I know that - I'm saying that I could imagine an alternate universe in which the knee-jerk response to GIT was to dead-end a lot of logic - even (although this is a stretch) to say, "OK, logic doesn't really provide a satisfying way to approach mathematics." I'm not denying (clearly) that GIT opened up new directions in our history, but I could imagine a situation where the reverse happened. (I could imagine the same thing for Löwenheim-Skolem, for what it's worth.) By contrast I really can't imagine how the completeness theorem would wind up being a dead-endifier.
– Noah Schweber, Apr 7 at 15:48
Paul Cohen describes a type of "decision procedure" that he believed he could build up inductively to resolve all mathematical assertions; only later did he learn of Gödel's work and discussed the incompleteness theorems (with Kleene) before abandoning that line of thinking for a time. But Cohen goes on to say this inchoate idea of decision procedures re-arose when he developed forcing to prove the independence of CH. So: maybe Gödel's incompleteness theorems impeded others' progress (Gödel's included!) on developing something like forcing...
– Benjamin Dickman, Apr 7 at 22:24
$begingroup$
I don't know the history, but I've heard it said that the realization that higher homotopy groups are abelian lead to people thinking the notion was useless for some time.
$endgroup$
1
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
9
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
add a comment |
$begingroup$
I don't know the history, but I've heard it said that the realization that higher homotopy groups are abelian lead to people thinking the notion was useless for some time.
$endgroup$
1
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
9
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
add a comment |
$begingroup$
I don't know the history, but I've heard it said that the realization that higher homotopy groups are abelian lead to people thinking the notion was useless for some time.
$endgroup$
I don't know the history, but I've heard it said that the realization that higher homotopy groups are abelian lead to people thinking the notion was useless for some time.
edited Apr 4 at 23:43
community wiki
Daniel McLaury
1
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
9
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
add a comment |
1
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
9
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
1
1
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
$begingroup$
Who realized "that higher homotopy groups are abelian"? Could you provide more details, citations?
$endgroup$
– Joseph O'Rourke
Apr 4 at 23:54
9
9
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
$begingroup$
@JosephO'Rourke: see mathoverflow.net/a/13902/25028
$endgroup$
– Sam Hopkins
Apr 4 at 23:57
add a comment |
$begingroup$
I'm not an expert, but as far as I know one of the most important results in information theory, the Shannon sampling theorem, was pretty limiting. Many applied works were thought to have errors in them, because it seemed that they were violating this theorem, until compressed sensing came along.
A quote from here:
In other fields, such as magnetic resonance imaging, researchers also found that they could “undersample” the data and still get good results. At scientific meetings, Donoho says,they always encountered skepticism because they were trying to do something that was supposed to be impossible. In retrospect, he says that they needed a sort of mathematical “certificate,” a stamp of approval that would guarantee when random sampling works.
$endgroup$
3
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
add a comment |
$begingroup$
I'm not an expert, but as far as I know one of the most important results in information theory, the Shannon sampling theorem, was pretty limiting. Many applied works were thought to have errors in them, because it seemed that they were violating this theorem, until compressed sensing came along.
A quote from here:
In other fields, such as magnetic resonance imaging, researchers also found that they could “undersample” the data and still get good results. At scientific meetings, Donoho says,they always encountered skepticism because they were trying to do something that was supposed to be impossible. In retrospect, he says that they needed a sort of mathematical “certificate,” a stamp of approval that would guarantee when random sampling works.
$endgroup$
3
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
add a comment |
$begingroup$
I'm not an expert, but as far as I know one of the most important results in information theory, the Shannon sampling theorem, was pretty limiting. Many applied works were thought to have errors in them, because it seemed that they were violating this theorem, until compressed sensing came along.
A quote from here:
In other fields, such as magnetic resonance imaging, researchers also found that they could “undersample” the data and still get good results. At scientific meetings, Donoho says,they always encountered skepticism because they were trying to do something that was supposed to be impossible. In retrospect, he says that they needed a sort of mathematical “certificate,” a stamp of approval that would guarantee when random sampling works.
$endgroup$
I'm not an expert, but as far as I know one of the most important results in information theory, the Shannon sampling theorem, was pretty limiting. Many applied works were thought to have errors in them, because it seemed that they were violating this theorem, until compressed sensing came along.
A quote from here:
In other fields, such as magnetic resonance imaging, researchers also found that they could “undersample” the data and still get good results. At scientific meetings, Donoho says,they always encountered skepticism because they were trying to do something that was supposed to be impossible. In retrospect, he says that they needed a sort of mathematical “certificate,” a stamp of approval that would guarantee when random sampling works.
edited Apr 5 at 12:04
community wiki
Ivan
3
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
add a comment |
3
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
3
3
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
$begingroup$
On the flip side, the use of compressed sensing ideas in statistical analysis of non-randomly sampled data might well qualify as an answer to this post. The subsequent flood "oracle results" for statistical estimators and the applied popularity of the lasso really flies in the face of the fact that the "near orthogonality" needed as an assumption in these proofs is scarcely ever satisfied in found data. That is to say, there is a truly huge body of literature that is essentially using a forged "certificate" because they tacitly suggest that sparse methods can be applied in non-random samples.
$endgroup$
– R Hahn
Apr 5 at 16:01
add a comment |
$begingroup$
I have been told that Thurston's work on foliations (for example: Thurston, W. P., Existence of codimension-one foliations, Ann. of Math. (2) 104 (1976), no. 2, 249–268) essentially ended the subject for some time, even though there was still much work to be done.
Here is a quote from his On Proof and Progress in Mathematics:
"First I will discuss briefly the theory of foliations, which was my first subject, starting when I was a graduate student. (It doesn't matter here whether you know what foliations are.) At that time, foliations had become a big center of attention among geometric topologists, dynamical systems people, and differential geometers. I fairly rapidly proved some dramatic theorems. I proved a classification theorem for foliations, giving a necessary and sufficient condition for a manifold to admit a foliation. I proved a number of other significant theorems. I wrote respectable papers and published at least the most important theorems. It was hard to find the time to write to keep up with what I could prove, and I built up a backlog. An interesting phenomenon occurred. Within a couple of years, a dramatic evacuation of the field started to take place. I heard from a number of mathematicians that they were giving or receiving advice not to go into foliations—they were saying that Thurston was cleaning it out. People told me (not as a complaint, but as a compliment) that I was killing the field. Graduate students stopped studying foliations, and fairly soon, I turned to other interests as well. I do not think that the evacuation occurred because the territory was intellectually exhausted—there were (and still are) many interesting questions that remain and that are probably approachable."
$endgroup$
edited Apr 5 at 2:22
community wiki
Sean Lawton
35
$begingroup$
These mathematicians were afraid to compete with Thurston in foliations, particularly given his trove of unpublished results. Meanwhile the examples in the post show mathematicians confidently drawing the wrong intuitions from others' results. So I find Thurston an interesting example of a different phenomenon.
$endgroup$
– Matt F.
Apr 5 at 3:06
add a comment |
$begingroup$
I would vote for the classification of finite simple groups as an example of a (hopefully correctly proved) theorem which has impeded progress in the field ever since its proof was announced.
And a classical example is Hilbert's theorems in invariant theory, which stopped constructive invariant theory for almost 100 years; here one should quote Rota: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics".
$endgroup$
answered Apr 5 at 16:51
community wiki
Dima Pasechnik
1
$begingroup$
Which progress should have been made, but wasn't, because of CFSG? Of course you don't have to state a specific theorem you think would have been proved, but some research directions...
$endgroup$
– Will Sawin
Apr 5 at 21:55
10
$begingroup$
CFSG was being rushed, with considerable funding from the NSF, and a somewhat premature announcement of completion trashed the desire to work on the topic, and even made people jobless, see e.g. en.wikipedia.org/wiki/… When I was starting to do maths over 30 years ago, there was a general feeling that it was "done"; how many people below retirement age now understand the details of the CFSG proof?
$endgroup$
– Dima Pasechnik
Apr 5 at 22:35
4
$begingroup$
So I guess what this implies is that the main research area that the proof of the classification of finite simple groups impeded progress on was the classification of finite simple groups.
$endgroup$
– Will Sawin
Apr 6 at 14:14
3
$begingroup$
@Will Sawin: my impression is that it significantly slowed progress in finite group theory in general. There was, and still is, much to be done regarding group cohomology and modular representation theory. The finite groups are the building blocks, but it remains to study how they can be glued together.
$endgroup$
– Joshua Grochow
Apr 7 at 2:59
1
$begingroup$
@WillSawin - yes, CFSG was being sold, in particular to funding bodies, as the holy grail of the theory of finite groups, as if the rest of the field were merely meant to help it along. Perhaps it's the first example of an overhyped pure maths result. :-)
$endgroup$
– Dima Pasechnik
Apr 7 at 18:24
|
show 9 more comments
$begingroup$
Work on neural networks certainly fell out of favor following the publications of Minsky and Papert in 1966-67. I have only skimmed portions of the sociology paper referenced in the question, but other circumstances suggest that the topic might have diminished in popularity even without their results.
Rosenblatt's perceptrons had certainly created a lot of enthusiasm, perhaps the first big wave of excitement about AI (later followed by the 1980's excitement about rule-based "expert systems" and the current excitement about neural-net and other machine learning techniques). One story involves a conference report on the development of a perceptron that could accurately detect the presence of army tanks in photographs of fields and forests -- the inputs involved digitizing each photograph into 16 pixels.
Much of the enthusiasm around perceptrons stemmed from convergence theorems guaranteeing that if a perceptron could decide some question, then basic learning procedures would converge on suitable network weights. These theorems all relied on the linear threshold structure of perceptrons. I would guess that many perceptron publications amounted to retellings of standard results about linear transformations in the perceptron setting.
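To make the convergence claim concrete, here is a minimal sketch (mine, not part of the original answer; the threshold convention w1*x1 + w2*x2 + b > 0 and the toy data are arbitrary choices): the classical perceptron learning rule applied to the linearly separable AND pattern, which the convergence theorem guarantees will terminate.

    # Minimal sketch (illustrative only): the perceptron learning rule on the
    # linearly separable AND pattern. The perceptron convergence theorem
    # guarantees this mistake-driven loop terminates with separating weights.
    AND = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

    w1 = w2 = b = 0.0
    converged = False
    while not converged:
        converged = True
        for (x1, x2), target in AND.items():
            out = int(w1 * x1 + w2 * x2 + b > 0)    # linear threshold unit
            if out != target:                       # update only on mistakes
                w1 += (target - out) * x1
                w2 += (target - out) * x2
                b += (target - out)
                converged = False

    print(w1, w2, b)  # e.g. 2.0 1.0 -2.0: positive only on input (1, 1)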
Then Minsky and Papert's major paper aimed at mathematicians appeared, bearing the title "Linearly unrecognizable patterns". The title is at once accurate and misleading.
The title is accurate because the primary result was that some patterns are not recognizable by linear threshold machines. But obviously, this result alone does not doom neural networks. If linear transformations are not adequate to characterize some patterns of interest, how about polynomials? Trigonometric and exponential functions? There was no lack of alternatives available to study.
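And a minimal sketch of the negative side (again mine, not part of the original answer; the weight grid and the extra feature x1*x2 are arbitrary choices): two-bit parity (XOR) is the simplest pattern that no single linear threshold unit recognizes, while one added nonlinear feature makes it separable.

    # Minimal sketch (illustrative only): XOR is not computable by any single
    # linear threshold unit; checking a coarse grid of weights already shows
    # the failure, while one extra nonlinear feature restores separability.
    import itertools

    XOR = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}

    grid = [i / 2 for i in range(-8, 9)]            # candidate weights/bias
    found = any(
        all(int(w1 * x1 + w2 * x2 + b > 0) == y for (x1, x2), y in XOR.items())
        for w1, w2, b in itertools.product(grid, repeat=3)
    )
    print("some unit on this grid computes XOR:", found)   # False

    # Adding the product feature x1*x2 makes XOR linearly separable:
    # x1 + x2 - 2*x1*x2 > 0 holds exactly on the inputs labeled 1.
    print(all(int(x1 + x2 - 2 * x1 * x2 > 0) == y for (x1, x2), y in XOR.items()))  # True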
The title is misleading in that it makes no mention of Minsky and Papert's second main result: that even when considering only linearly recognizable patterns, the learned weights would require astronomical precision, and hence astronomical amounts of data to determine them.
So neural network researchers faced two problems:
- Computers were laughably small compared to machines today, and the amount of data one could obtain was tiny compared to the vast quantities available today. For decades, lack of sufficient data, and of machines capable of manipulating this data, impeded many parts of AI, both symbolic and nonsymbolic.
- Neural network methods were not explicable. Minsky and others championed heuristic and symbolic methods, whose answers and actions can be explained in terms that mean something to people. In contrast, neural networks operated as black boxes: systems trained to provide an answer, but incapable of providing any explanation meaningful to people. This limitation persists to this day. For decades it provided an easy retort to neural-net proponents, for in many fields such as medical diagnosis, no one will follow machine recommendations without reasonable explanations.
Circumstances have changed since then. Hinton and others persisted in developing techniques for multilayer networks, investigating ideas based on nonlinear methods from classical physics. Processing speeds and data availability have increased enormously. And recent solutions of interesting problems, such as playing Go at a non-trivial level, have given ample reason to consider using neural networks even without explicability.
So it seems to me that the inattention to neural network research had a lot to do with waiting for computational progress, and that the unrecognizability result on its own does not provide a sufficient explanation.
$endgroup$
edited Apr 5 at 22:42
community wiki
2 revs, 2 users 73%
Jon Doyle
$begingroup$
Thanks for the history!
$endgroup$
– Matt F.
Apr 5 at 22:48
3
$begingroup$
For a follow-up on neural networks: the universal approximation theorem made many folks think that a network with a single hidden layer is all they need, leading to many suboptimal practical results in applications of neural networks.
$endgroup$
– liori
Apr 6 at 16:42
add a comment |
$begingroup$
Here I quote from the introduction to "Shelah’s pcf theory and its applications" by Burke and Magidor (https://core.ac.uk/download/pdf/82500424.pdf):
Cardinal arithmetic seems to be one of the central topics of set theory. (We mean mainly cardinal exponentiation, the other operations being trivial.) However, the independence results obtained by Cohen’s forcing technique (especially Easton’s theorem: see below) showed that many of the open problems in cardinal arithmetic are independent of the axioms of ZFC (Zermelo-Fraenkel set theory with the axiom of choice). It appeared, in the late sixties, that cardinal arithmetic had become trivial in the sense that any potential theorem seemed to be refutable by the construction of a model of set theory which violated it.
In particular, Easton’s theorem showed that essentially any cardinal arithmetic ‘behavior’ satisfying some obvious requirements can be realized as the behavior of the power function at regular cardinals. [...]
The general consensus among set theorists was that the restriction to regular cardinals was due to a weakness in the proof and that a slight improvement in the methods for constructing models would show that, even for powers of singular cardinals, there are no deep theorems provable in ZFC.
They go on to explain how Shelah's pcf theory (and its precursors) in fact shows that there are many nontrivial theorems about inequalities of cardinals provable in ZFC.
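(A concrete illustration, not taken from the quoted introduction: the most frequently cited ZFC theorem to come out of pcf theory is Shelah's bound that if $\aleph_\omega$ is a strong limit cardinal, then $2^{\aleph_\omega} < \aleph_{\omega_4}$.)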
So arguably the earlier independence results impeded the discovery of these provable inequalities, although I don't know how strongly anyone would argue that.
$endgroup$
answered Apr 5 at 0:15
community wiki
Sam Hopkins
2
$begingroup$
An exemplary instance of my query. Possibly Cohen's forcing was the "culprit" in jumping so far that there was a natural retraction?
$endgroup$
– Joseph O'Rourke
Apr 5 at 12:42
2
$begingroup$
Did the over-interpretation of the independence results really impede progress? My impression is that the later cardinal arithmetic results wouldn’t have been found any sooner if people had just continued in the earlier lines of research; they have a rather different flavour, and wouldn’t have been found without a new body of theory, informed by the independence results, guiding the way to them.
$endgroup$
– Peter LeFanu Lumsdaine
Apr 5 at 14:15
$begingroup$
@PeterLeFanuLumsdaine: sure, you are probably right, thus the disclaimer ("I don't know how strongly anyone would argue that") at the end.
$endgroup$
– Sam Hopkins
Apr 5 at 15:51
add a comment |
$begingroup$
I'm not sure if this counts, as it's not clear it really set the field back.
That being said, in 1951 H. Hopf showed that the round sphere is the only immersed constant mean curvature (CMC) surface that is closed and of genus $0$ in $\mathbb{R}^3$. This led him to conjecture that the round sphere is actually the only closed CMC surface in $\mathbb{R}^3$. A few years later, in 1956, A. D. Alexandrov provided additional evidence for this conjecture by proving that the round sphere is the only embedded closed CMC surface. My understanding is that several incorrect proofs of Hopf's conjecture circulated in the subsequent decades.
It wasn't until 1983 that H. Wente showed that the conjecture was actually false by constructing an immersed CMC torus. There are now many examples of closed CMC immersions.
$endgroup$
answered Apr 5 at 13:04
community wiki
RBega2
$begingroup$
Here is a picture of Wente's construction: researchgate.net/figure/…
$endgroup$
– Neal
Apr 5 at 18:42
add a comment |
$begingroup$
The proof that a particular computational problem is NP-complete can cause people to stop trying to make theoretical progress on it, instead focusing all their attention on heuristics that have only empirical support. In fact, one can often continue to make theoretical progress by designing approximation algorithms with provable performance guarantees, or using concepts from parameterized complexity to devise algorithms that run in polynomial time when some parameter is fixed.
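As a minimal illustration (mine, not part of the original answer; the small example graph is made up): Minimum Vertex Cover is NP-complete, yet taking both endpoints of a greedily built maximal matching is a textbook approximation algorithm whose output is provably at most twice the optimum.

    # Minimal sketch (illustrative only): a provable 2-approximation for the
    # NP-complete Minimum Vertex Cover problem via a greedy maximal matching.
    def vertex_cover_2approx(edges):
        cover, matched = set(), set()
        for u, v in edges:
            if u not in matched and v not in matched:   # edge still unmatched
                matched.update((u, v))
                cover.update((u, v))                    # take both endpoints
        return cover

    # A 4-cycle with a pendant edge: an optimal cover has 2 vertices,
    # and the approximation returns at most 4.
    print(vertex_cover_2approx([(0, 1), (1, 2), (2, 3), (3, 0), (3, 4)]))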
This is not quite like the other examples since it's not a single theorem, but a whole class of theorems, and part of the problem is that the community that is interested in hard computational problems is very large, spanning many scientific fields. Theoretical computer scientists do active research into approximation algorithms and parameterized algorithms, but the point is that the possibility of theoretical progress on NP-complete problems has not been disseminated and socialized among everyone who might benefit from that knowledge.
$endgroup$
answered Apr 5 at 20:33
community wiki
Timothy Chow
add a comment |
$begingroup$
Like RBega2 I hesitate to say that this is definitely an example, but the paper "Natural Proofs" by Razborov and Rudich, which showed that certain kinds of proof techniques would be insufficient to prove $P\ne NP$, said:
We do not conclude that researchers should give up on proving serious lower bounds. Quite to the contrary, by classifying a large number of techniques that are unable to do the job we hope to focus research in a more fruitful direction. Pessimism will only be warranted if a long period of time passes without the discovery of a non-naturalizing lower bound proof.
In practice, RR's result seems to have been taken as a strong negative result, and may have taken a lot of the wind out of the sails of research into lower bounds. However, I hesitate to claim that people misinterpreted their paper, since one can plausibly argue that the real reason for lack of progress post-RR was simply the difficulty of proving lower bounds.
$endgroup$
Like RBega2 I hesitate to say that this is definitely an example, but the paper "Natural Proofs" by Razborov and Rudich, which showed that certain kinds of proof techniques would be insufficient to prove $Pne NP$, said:
We do not conclude that researchers should give up on proving serious lower bounds. Quite to the contrary, by classifying a large number of techniques that are unable to do the job we hope to focus research in a more fruitful direction. Pessimism will only be warranted if a long period of time passes without the discovery of a non-naturalizing lower bound proof.
In practice, RR's result seems to have been taken as a strong negative result, and may have taken a lot of the wind out of the sails of research into lower bounds. However, I hesitate to claim that people misinterpreted their paper, since one can plausibly argue that the real reason for lack of progress post-RR was simply the difficulty of proving lower bounds.
answered Apr 5 at 20:16
community wiki
Timothy Chow
add a comment |
$begingroup$
Tverberg's theorem (https://en.wikipedia.org/wiki/Tverberg%27s_theorem) says that for any $d$ and $r$, any set of $(d+1)(r-1)+1$ points in $d$-dimensional Euclidean space can be partitioned into $r$ subsets such that the convex hulls of the subsets all contain a common point. It is a generalization of Radon's theorem.
The "topological Tverberg theorem" (really a conjecture) is the assertion that for any $d$ and $r$, and any continuous map $f\colon \partial\Delta_{(d+1)(r-1)+1}\to \mathbb{R}^d$ from the boundary of the $(d+1)(r-1)+1$-dimensional simplex to $\mathbb{R}^d$, there exists a collection $\Delta^1,\ldots,\Delta^r\subseteq\Delta_{(d+1)(r-1)+1}$ of complementary subsimplices such that $f(\Delta^1)\cap \cdots \cap f(\Delta^r)\neq \varnothing$. (Tverberg's theorem is the same statement for $f$ linear.)
In [BSS] this was proven in the case of $r$ prime, and in the famous unpublished paper [M. Ozaydin, Equivariant maps for the symmetric group, http://digital.library.wisc.edu/1793/63829] it was proven for $r$ a power of a prime. These proofs use some pretty advanced tools from algebraic topology (equivariant cohomology, et cetera).
I don't know for sure, but my impression is that, at least for quite a while, people believed that the restriction to $r$ a power of a prime was just an artifact of the tools used, and that with more work the theorem could be proved for all $r$. But in 2015, F. Frick [https://arxiv.org/abs/1502.00947], building on work of Mabillard and Wagner [https://arxiv.org/abs/1508.02349], showed that the conjecture is false for every $r$ that is not a power of a prime. (A small numerical illustration of the affine statement for $d=2$, $r=3$ is sketched after the reference below.)
Barany, I.; Shlosman, S. B.; Szűcs, András, On a topological generalization of a theorem of Tverberg, J. Lond. Math. Soc., II. Ser. 23, 158-164 (1981). ZBL0453.55003.
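Not that the mathematics needs it, but here is that tiny numerical sanity check of the affine Tverberg theorem for $d=2$, $r=3$ (so $(d+1)(r-1)+1=7$ points). It is only an illustrative sketch: it brute-forces partitions and tests whether the convex hulls meet via a feasibility LP in scipy, so it is hopelessly exponential beyond toy sizes.
```python
# Illustrative sketch only: check Tverberg's theorem for d = 2, r = 3 by brute
# force.  For a given partition, a common point of the convex hulls exists iff
# the following LP is feasible: find x in R^2 and convex weights for each part
# whose weighted averages all equal x.
import itertools

import numpy as np
from scipy.optimize import linprog


def hulls_share_a_point(parts, d=2):
    sizes = [len(p) for p in parts]
    n_var = d + sum(sizes)                   # variables: x-coordinates, then all weights
    A_eq, b_eq, offset = [], [], d
    for pts, k in zip(parts, sizes):
        pts = np.asarray(pts, dtype=float)
        for coord in range(d):               # sum_j w_j * p_j[coord] - x[coord] = 0
            row = np.zeros(n_var)
            row[coord] = -1.0
            row[offset:offset + k] = pts[:, coord]
            A_eq.append(row)
            b_eq.append(0.0)
        row = np.zeros(n_var)                # sum_j w_j = 1 for this part
        row[offset:offset + k] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)
        offset += k
    bounds = [(None, None)] * d + [(0, None)] * sum(sizes)
    res = linprog(np.zeros(n_var), A_eq=np.array(A_eq), b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.status == 0                   # status 0 means a feasible point was found


def tverberg_partition(points, r=3):
    n = len(points)
    for labels in itertools.product(range(r), repeat=n):
        if len(set(labels)) < r:
            continue                         # skip labelings with an empty part
        parts = [[points[i] for i in range(n) if labels[i] == p] for p in range(r)]
        if hulls_share_a_point(parts):
            return parts
    return None                              # never reached, by Tverberg's theorem


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pts = rng.random((7, 2)).tolist()        # 7 = (d+1)(r-1)+1 for d = 2, r = 3
    print(tverberg_partition(pts))
```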
$endgroup$
edited Apr 5 at 20:52
community wiki
Sam Hopkins
4
$begingroup$
I don't see how this is an example of the theorem impeding progress - it seems like more of a case where the theorem didn't cause as much progress as it should have. Wouldn't it be even harder to come up with the counterexample for non-prime-powers if the theorem didn't exist to focus attention on that case?
$endgroup$
– Will Sawin
Apr 5 at 21:54
4
$begingroup$
@WillSawin My interpretation of this answer is that the theorems impeded progress not by making the counterexample harder to find but by dissuading people from even looking for a counterexample.
$endgroup$
– Andreas Blass
Apr 6 at 0:11
add a comment |
$begingroup$
In 1991, Wald and Iyer showed that there exist foliations of the Schwarzschild spacetime that do not contain apparent horizons, yet get arbitrarily close to the singularity (https://journals.aps.org/prd/abstract/10.1103/PhysRevD.44.R3719). This led many people to think that apparent horizons are unreliable tools for detecting gravitational collapse and inferior to event horizons, and interest in them waned.
Starting in 2005, numerical results demonstrated that apparent horizons are actually quite reliable and useful in practice (https://link.springer.com/article/10.12942/lrr-2007-3), prompting further theoretical studies. Today we know that generalizations of apparent horizons still exist in these foliations, that they form smooth world tubes, and that e.g. black hole thermodynamics can be extended to such world tubes (called "dynamical horizons", https://en.wikipedia.org/wiki/Dynamical_horizon).
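For readers outside relativity, a short reminder of why apparent horizons are slicing-dependent in the first place (standard textbook material, not anything specific to the Wald-Iyer paper; sign conventions vary between references):
```latex
% On a spatial slice $\Sigma$ with extrinsic curvature $K_{ab}$ (trace $K$),
% take a closed 2-surface $S \subset \Sigma$ with outward unit normal $s^a$.
% The expansion of the outgoing null normal to $S$ is
\Theta_{(\ell)} \;=\; D_a s^a \;+\; K_{ab}\, s^a s^b \;-\; K .
% An apparent horizon is the outermost $S$ with $\Theta_{(\ell)} = 0$.
% Because $K_{ab}$ and the embedding of $S$ depend on the chosen slice,
% so does the horizon; this is exactly what Wald and Iyer exploited.
```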
$endgroup$
answered Apr 5 at 23:36
community wiki
Erik Schnetter
add a comment |
$begingroup$
This is an example taken from physics, but I think it fits your question pretty well.
In his 1932 book Mathematische Grundlagen der Quantenmechanik, von Neumann presented a "proof" that was widely (and erroneously) believed to show that all hidden-variable theories are impossible. More on this can be found here: https://arxiv.org/abs/1006.0499. This was supposed to seal the deal in favor of Bohr in the Einstein-Bohr controversy, so to speak. I believe it is fair to say that for a long time such foundational problems fell outside the mainstream interests of physics, and the problem was studied mostly by outsiders.
The "proof" was in fact flawed, in the sense that it rests on an assumption that a hidden-variable theory need not satisfy, and Bohm (in 1952) was able to come up with his pilot-wave theory, which has the same predictive power as quantum mechanics but is essentially classical. It is in fact a hidden-variable theory.
The resolution of all this came more than 30 years later, in 1964, when Bell showed that quantum mechanics does not satisfy local realism; a small numerical illustration of the quantitative gap (in its CHSH form) is sketched below.
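Here is that sketch. It uses the CHSH form of the inequality (Clauser-Horne-Shimony-Holt, 1969) and the textbook singlet-state correlation; nothing in it is taken from the papers discussed above.
```python
# Illustrative sketch: for the spin-singlet state quantum mechanics predicts the
# correlation E(a, b) = -cos(a - b) between analysers at angles a and b.  With
# the standard angle choices the CHSH combination reaches 2*sqrt(2) > 2, while
# every local hidden-variable model obeys |S| <= 2.
import math


def E(a, b):
    """Singlet-state correlation for analyser angles a and b (radians)."""
    return -math.cos(a - b)


a, a2 = 0.0, math.pi / 2
b, b2 = math.pi / 4, 3 * math.pi / 4
S = E(a, b) - E(a, b2) + E(a2, b) + E(a2, b2)

print(abs(S))            # 2.828..., i.e. 2 * sqrt(2) (Tsirelson's bound)
print(2 * math.sqrt(2))  # for comparison; the local-realist bound is 2
```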
$endgroup$
answered Apr 7 at 22:18
community wiki
lcv
add a comment |
$begingroup$
The Mermin-Wagner theorem (1966) proved that for two-dimensional models with a continuous symmetry there was no finite temperature transition to a phase with long-range order via spontaneous breaking of this symmetry.
My understanding is that this was generally interpreted as proving that "two-dimensional models with continuous symmetries do not have phase transitions". I do not know for sure that this impeded progress, but I suspect that, because it appeared to be a no-go theorem, it discouraged researchers from looking for examples of phase transitions.
Remarkably, not too long after the theorem was published, Kosterlitz and Thouless (1973) showed that the two-dimensional XY model does indeed have a phase transition with diverging correlation length, but no long-range order. The mechanism was completely different (due to topological effects), and so managed to avoid the apparently hard constraints imposed by the theorem.
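To see where the loophole lies, here is the standard spin-wave estimate for the two-dimensional XY model (a back-of-the-envelope sketch in the harmonic approximation, with $k_B = 1$; the Hamiltonian and constants are the usual textbook ones, not taken from the papers cited above):
```latex
% 2D XY model, H = -J \sum_{\langle ij\rangle} \cos(\theta_i - \theta_j),
% expanded to quadratic (spin-wave) order on a lattice of spacing a:
\langle (\theta_{\mathbf r} - \theta_{\mathbf 0})^2 \rangle
    \simeq \frac{T}{\pi J}\,\ln\frac{r}{a},
\qquad
\langle \mathbf S_{\mathbf r}\cdot \mathbf S_{\mathbf 0}\rangle
    \simeq \Big(\frac{a}{r}\Big)^{\eta(T)},
\qquad \eta(T) = \frac{T}{2\pi J}.
% The logarithm destroys long-range order, consistent with Mermin-Wagner, but
% correlations decay only algebraically at low temperature, so a
% finite-temperature transition of a different kind is not ruled out.
```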
$endgroup$
answered 2 days ago
community wiki
Nathan Clisby
add a comment |
$begingroup$
I am by no means an expert on the subject but I've been told that Hilbert's work on invariant theory pretty much brought the subject to a halt for quite some time.
$endgroup$
answered Apr 5 at 21:11
community wiki
Fernando Martin
3
$begingroup$
See Dima's answer, which preceded yours by just a few hours.
$endgroup$
– Gerry Myerson
Apr 5 at 21:46
add a comment |
$begingroup$
The Abel-Ruffini Theorem could have been a "dead-end" theorem, as could the fundamental theorem of Galois theory. Similarly, Gödel's Incompleteness Theorem(s) and perhaps even his Completeness Theorem could be seen as dead-end theorems.
Many such theorems would (at first sight) appear to be the "last chapter" in a certain "book" which began with a question and thus ended when that question was answered (either positively or negatively).
However, in many cases, a closer examination led to deeper questions and answers. Galois' work, and Gödel's as well, had the additional problem of being difficult for the average mathematician of their times to understand. The "in a nutshell" summary that the latter received must have only accentuated the feeling that these topics had been closed.
$endgroup$
answered Apr 7 at 2:07
community wiki
Kapil
6
$begingroup$
I think the inclusion of Goedel's results is absurdly off-base.
$endgroup$
– Andrés E. Caicedo
Apr 7 at 4:58
$begingroup$
I agree with @AndrésE.Caicedo - especially re: the completeness theorem: what exactly would that be a dead-end for? On the contrary, it opened things up by showing how proof theory and model theory could be applied to each other. (I could see an argument for the incompleteness theorem being a dead-end theorem in an alternate universe, on the other hand, although even that I'm skeptical of: it dead-ended a program, but that's not the same thing as dead-ending a subject.)
$endgroup$
– Noah Schweber
Apr 7 at 14:45
$begingroup$
@Noah Because of the incompleteness theorem we have provability logic, degrees of interpretability, the consistency strength hierarchy,...
$endgroup$
– Andrés E. Caicedo
Apr 7 at 14:59
$begingroup$
@AndrésE.Caicedo Of course I know that - I'm saying that I could imagine an alternate universe in which the knee-jerk response to GIT was to dead-end a lot of logic - even (although this is a stretch) to say, "OK, logic doesn't really provide a satisfying way to approach mathematics." I'm not denying (clearly) that GIT opened up new directions in our history, but I could imagine a situation where the reverse happened. (I could imagine the same thing for Lowenheim-Skolem, for what it's worth.) By contrast I really can't imagine how the completeness theorem would wind up being a dead-endifier.
$endgroup$
– Noah Schweber
Apr 7 at 15:48
$begingroup$
Paul Cohen describes a type of "decision procedures" that he believed he could build up inductively to resolve all mathematical assertions; only later did he learn of Godel's work and discussed the incompleteness theorems (with Kleene) before abandoning that line of thinking for a time. But Cohen goes on to say this inchoate idea of decision procedures re-arose when he developed forcing to prove the independence of CH. So: Maybe Godel's Incompleteness Theorems impeded others' progress (Godel's, included!) on developing something like forcing...
$endgroup$
– Benjamin Dickman
Apr 7 at 22:24
|
show 1 more comment
protected by Community♦ Apr 6 at 3:49