How to catch proof errors during self study?
I completed a Bachelor's in Mathematics in May 2018 with a 3.6 major GPA. I had trouble with real analysis, scoring B-, B, B, and B+ in the four courses I took on the subject despite significant effort and paying for a PhD candidate as a tutor.
My goal is to go to graduate school in Machine Learning. I want to learn as much theory as I can on my own while working to afford graduate school. Over the next ~6 years, I want to self-study up to 12 graduate-level topics related to Probability / Point Estimation / Optimization or Control / Dynamics and Statistical Learning, or as much as I can finish in that time.
I will also schedule time over those ~6 years to program at least 6 non-trivial personal projects in Machine Learning and to replicate one peer-reviewed academic paper every 1-2 months. After that, I'll crack open the Deep Learning book and the Reinforcement Learning book over another year and study them thoroughly, using my experience and the theory I've studied, while applying to graduate schools.
The answer here (https://www.quora.com/How-can-I-self-study-functional-analysis) raises an important point:
"if you want to understand it [Functional Analysis] in depth, you have to solve problems, which usually means proving stuff (as opposed to calculating stuff), and that's pretty hard for anyone to self-critique.
You'll want someone to help you out of tight spots as you're reading the text, and look over your solutions to see if you're actually getting it. It's not too hard to delude yourself into thinking that you've proved something while in fact you did not. If you miss a subtlety or fail to understand a definition, you might be proving the wrong thing or nothing at all - and you may have no way of even realizing that."
Partial progress is still amazing. However, what proactive strategies would help me avoid falling into these pitfalls? I'm not always an A student, and I want to avoid spending more than 8 months on average per subject. Should I post every problem I attempt to prove on Stack Exchange for feedback on correctness, and occasionally contact a professor from my alma mater when I am really stuck?
soft-question self-learning
May I know why you want to learn functional analysis? I'm in the same boat as you: I'm a third-year undergrad determined to improve my problem-solving skills before I go for a PhD in data science and machine learning. But as far as I know, functional analysis doesn't really have much to do with machine learning, so why do you want to learn it? It's more useful for people who want to study mathematical physics or PDEs, I think. But I'm not an expert; I'm just asking.
– stressed out
2 hours ago
Hey stressed out, sorry for the wait; I went on a walk. The functional analysis was for this book: Foundations of Modern Probability by Olav Kallenberg. It says you should have experience with functional analysis, complex variables, and topology. The text covers measure theory, distributions, random sequences, characteristic functions and the CLT, conditioning, martingales, Markov processes, random walks, ergodic theory, Poisson processes, Gaussian / Brownian motion, embeddings, convergence theorems, and stochastic calculus. Functional analysis also has ties to control theory, which I want to study.
– Kalkirin
1 hour ago
Thanks for the answer. I know nothing about control theory and ergodic theory, but the other things you mentioned mostly require measure theory, I think. I say that because I took a course in stochastic differential equations (which I passed with a C- :P). I think you might want a course in PDEs more than functional analysis; read a PDE book like Evans. I hope one day I can read it. I suggest you ask a separate question about what prerequisites you need to study machine learning before tackling a difficult and time-consuming subject like functional analysis.
– stressed out
1 hour ago
Here is the complete list of topics I want to study (ideally): Measure Theory, Complex Analysis, Fourier Series, Topology for Analysis, Functional Analysis, Foundations of Modern Probability, Nonlinear Dynamical Systems, Point Estimation, Hypothesis Testing, Statistical Learning, Pattern Recognition, Differential Geometry, Nonlinear Dimensionality Reduction, Matrix Computations (Applied Linear Algebra), Linear/Nonlinear Programming, and Information Theory. After that it's the Deep Learning book and Reinforcement Learning. Then onto signal processing, control theory, and feature engineering/selection.
– Kalkirin
1 hour ago
Well, I hadn't read your comment when I wrote the answer. I have heard that information geometry is more or less theoretical; many top engineers have never studied it. You can find Goodfellow's article about GANs on arXiv. It doesn't use information geometry, does it? I mean, mathematicians can develop many impractical tools, but the ones that usually make it into the engineering world use little math.
– stressed out
1 hour ago
2 Answers
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion: rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
To answer your first question, about how you can catch errors during self-study: I think you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics, by well-known mathematicians, that were later shown to be insufficient or wrong. So I think you need to find a community of researchers, online or not, to exchange ideas with.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis, which is going to be about machine learning. What I'm going to write is what my professors and students at higher levels have told me, and I don't claim it's the best possible approach, so please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book on deep learning by Ian Goodfellow, Yoshua Bengio, and Aaron Courville, aptly titled Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Engineering students take this course too; if you can, take it from the engineering department, because they usually avoid measure theory, and depending on the lecturer you may also pick up some things about signals and systems along the way.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department, and you would be surprised how little most computer engineers know about the rigorous treatment of the things they work with every day. A book engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes, which is a terrible book in my opinion: it doesn't satisfy mathematicians, nor does it explain the beautiful intuitions that engineering sometimes offers.
One advantage of the rigorous path is that you also get exposure to other fields, like financial mathematics. The rigorous approach is helpful when you want to define things like conditional expectation and the Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
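For concreteness, here is a minimal sketch of the measure-theoretic definition I have in mind; the notation is standard, but this summary is my own and not a quote from any particular book. Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space, $X \in L^1(\mathbb{P})$, and $\mathcal{G} \subseteq \mathcal{F}$ a sub-$\sigma$-algebra. The conditional expectation $\mathbb{E}[X \mid \mathcal{G}]$ is the ($\mathbb{P}$-a.s. unique) $\mathcal{G}$-measurable random variable $Y$ satisfying
$$\int_A Y \, d\mathbb{P} = \int_A X \, d\mathbb{P} \quad \text{for every } A \in \mathcal{G}.$$
Existence follows from the Radon-Nikodym theorem: the (signed) measure $\nu(A) = \int_A X \, d\mathbb{P}$ on $\mathcal{G}$ is absolutely continuous with respect to $\mathbb{P}|_{\mathcal{G}}$, so one can take $\mathbb{E}[X \mid \mathcal{G}] = d\nu / d\mathbb{P}|_{\mathcal{G}}$.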
You need to spend a lot of time on programming; learn Python or R, preferably. You need to learn about Markov chain Monte Carlo (MCMC) methods, and you may also need the calculus of variations at some point. Overall, the list of things you could learn is endless. You may like to learn differential geometry to understand information geometry, which is more theoretical than practical. Some knowledge of physics, like thermodynamics, can also be helpful when you study things like the Boltzmann machine. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics: some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook, and the kind of creativity engineers have are enough to start your journey. Once you have started and have chosen your final destination, you will acquire the knowledge you need along the way.
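To make the MCMC remark concrete, here is a minimal random-walk Metropolis sketch in Python. The function name and parameters are only illustrative (not taken from any library), and in a real project you would more likely reach for an existing sampler such as PyMC or Stan.

    import numpy as np

    def random_walk_metropolis(log_target, n_samples=10000, x0=0.0, step=1.0, seed=0):
        # Illustrative sampler: draws from an unnormalized 1-D density given its log,
        # using a symmetric Gaussian proposal (so the Hastings correction cancels).
        rng = np.random.default_rng(seed)
        samples = np.empty(n_samples)
        x, logp_x = x0, log_target(x0)
        accepted = 0
        for i in range(n_samples):
            y = x + step * rng.normal()                  # propose a nearby point
            logp_y = log_target(y)
            if np.log(rng.uniform()) < logp_y - logp_x:  # accept with prob min(1, p(y)/p(x))
                x, logp_x, accepted = y, logp_y, accepted + 1
            samples[i] = x
        return samples, accepted / n_samples

    # Example: sample from a standard normal via its log-density (up to a constant).
    samples, acc = random_walk_metropolis(lambda x: -0.5 * x**2)
    print(samples.mean(), samples.std(), acc)  # roughly 0, 1, and a moderate acceptance rate

The point is only that the core algorithm fits in a few lines; the hard part in practice is choosing proposals and diagnosing convergence, which is where the probability theory pays off.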
Thank you again. If I had to create a minimal list of topics to cover, it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop's Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) the Deep Learning book, 15) the Reinforcement Learning book. Do you think this can be cut down even further? I might still be biased.
– Kalkirin
53 mins ago
Well, many of the topics you mentioned have enormous overlaps. I don't think you need topology; the amount of topology one learns while learning analysis is enough. I don't think learning about non-Hausdorff spaces will be useful to you in the future, but I could be wrong. Stochastic calculus is not easy to learn because it's too broad: for example, there is Itô calculus, with many results like the Feynman-Kac formula, and there's also Malliavin calculus. I think a fair amount of measure theory, complex analysis, linear algebra, and PDEs should give you a reasonable mathematical background.
– stressed out
46 mins ago
Thank you so much! I'll rework a budget of my available time per day around my job, and a budget of my growing savings toward when I can comfortably go to graduate school, and use that to decide whether I can add any other subjects to this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in machine vision where you try to classify objects by similar topology, using depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
– Kalkirin
35 mins ago
@Kalkirin Well, I'm afraid most of the topics you mention are so deep that if you try to study them all to a level that lets you understand research, you will never get to your PhD. If you want to study topology, a natural thing to do is to study algebraic topology, because Betti numbers and cohomology are computational, while point-set topology itself is not very computational and is therefore hard to implement on computers. If you want to learn about computer vision, you need projective geometry and the camera model more than topology. Are you a CS student?
– stressed out
27 mins ago
Also, topological data analysis using algebraic topology is still in its infancy, but have you heard of any uses of it in machine learning yet? Here's a potentially good article showing how TDA and machine learning together are greater than the sum of their parts (kdnuggets.com/2015/09/…)
– Kalkirin
27 mins ago
2 Answers
2
active
oldest
votes
2 Answers
2
active
oldest
votes
active
oldest
votes
active
oldest
votes
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion. Rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
1
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
add a comment |
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion. Rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
1
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
add a comment |
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion. Rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
You can post some (not all) proofs here with the proof-verification tag. It would be helpful if you flagged the few particular places where you were in doubt.
If an old professor is willing to spend occasional time, go for it.
One suggestion. Rather than learning the basics from the bottom up, start with something you really want to know for its own sake and work backwards through the prerequisites as necessary. You will probably discover that you need a lot more linear algebra than you thought, and a lot less functional analysis.
Finally, six years is a long time to study all alone. Good grad schools do support their students. Consider applying sooner.
answered 2 hours ago
Ethan Bolker
41.4k547108
41.4k547108
1
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
add a comment |
1
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
1
1
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
Your suggestion about working backwards through the prerequisites has helped me learn one or two things in physics. I usually try to find homework problems with their solutions online. It's very slow though. I think that it's safe to say that every great mathematician has had great teachers and professors. The Jacobis, Euler, Gauss, Lagrange, Dedekind, Hilbert, etc.
– stressed out
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@stressedout What about Galois?
– John Douma
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
@JohnDouma Yeah, I already had Galois in mind, but then again, Galois never found widespread recognition for his work before his death. Probably because he had never learned mathematics formally. And as far as my memory from math history books tells me, his work was never understood well until Emil Artin rewrote his entire work in terms of vector spaces and modern algebra. Even the French committee of mathematicians didn't understand his submitted work if I'm not mistaken. Another example is Ramanujan before he left India for the UK. And I meant the Bernoullis. Not the Jacobis, for sure. :D
– stressed out
2 hours ago
add a comment |
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
– Kalkirin
53 mins ago
Well, many of those topics you mentioned have enormous overlaps. I don't think you need topology. The amount of topology one learns when learning analysis is enough. I don't think learning about non-Hausdorff spaces would be useful in future, but I can be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Ito's calculus with many results like Feynman-Kac formula, etc. There's also Malliavin's calculus. I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should provide you with a reasonable mathematical background.
– stressed out
46 mins ago
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
– Kalkirin
35 mins ago
@Kalkirin Well, I'm afraid most of the topics you mention are so deep that if you want to study them up to a level that helps you understand research stuff, you will never be able to study for a PhD. If you want to study topology, a natural thing to do is to study algebraic topology because Betti numbers and cohomology are computational while topology itself is not very computational and therefore, it's hard to implement it in computers. If you want to learn about computer vision, you need to learn about projective geometry and the camera model more than topology. Are you a CS student?
– stressed out
27 mins ago
Also, Topological Data Analysis using Algebraic Topology is still in its infancy, but have you heard of uses of it in machine learning yet? Here's a potentially good article showing how TDA and Machine Learning Together are greater than the sum of their parts (kdnuggets.com/2015/09/…)
– Kalkirin
27 mins ago
|
show 7 more comments
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first. Then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department. You will be surprised to know that most computer engineers know little about the rigorous treatment of the stuff they work with everyday. A book that engineers use for a more or less mathematical treatment is Gallagher's Stochastic Processes which is a terrible book in my opinion. It doesn't satisfy mathematicians, neither does it explain the beautiful intuitions that sometimes engineering offers.
One advantage to the rigorous path is that you get to learn about some other fields like financial mathematics as well. The rigorous approach is helpful when you want to define things like conditional expectation and Radon-Nikodym derivative. But after all, I think it's not wise to spend too much time on 'abstractions'.
You need to spend a lot of time on programming. Learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods. You also may need to learn about calculus of variation at some point. Overall, the list of things that you can learn is endless. You may like to learn differential geometry to understand information geometry which is more theoretical than practical. Also, some knowledge from physics like thermodynamics can be helpful when you study things like the Boltzmann machine, etc. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Just some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook and some creativity that engineers have is enough to start your journey. Once you have started and you have chosen your final destination, you will acquire the knowledge you need along the way.
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
– Kalkirin
53 mins ago
Well, many of those topics you mentioned have enormous overlaps. I don't think you need topology. The amount of topology one learns when learning analysis is enough. I don't think learning about non-Hausdorff spaces would be useful in future, but I can be wrong. It's not easy to learn stochastic calculus because it's too broad. For example, there is Ito's calculus with many results like Feynman-Kac formula, etc. There's also Malliavin's calculus. I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should provide you with a reasonable mathematical background.
– stressed out
46 mins ago
Thank you so much! I'll rework a budget of my available time per day with my job and a budget of my growing savings when I can comfortably go to graduate school. I'll use that to decide if I can add any other subjects onto this minimal list. I'll follow your recommendation to let Topology for Analysis go for now. However, I think it could be useful for certain projects in Machine Vision where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
– Kalkirin
35 mins ago
@Kalkirin Well, I'm afraid most of the topics you mention are so deep that if you want to study them up to a level that helps you understand research stuff, you will never be able to study for a PhD. If you want to study topology, a natural thing to do is to study algebraic topology because Betti numbers and cohomology are computational while topology itself is not very computational and therefore, it's hard to implement it in computers. If you want to learn about computer vision, you need to learn about projective geometry and the camera model more than topology. Are you a CS student?
– stressed out
27 mins ago
Also, Topological Data Analysis using Algebraic Topology is still in its infancy, but have you heard of uses of it in machine learning yet? Here's a potentially good article showing how TDA and Machine Learning Together are greater than the sum of their parts (kdnuggets.com/2015/09/…)
– Kalkirin
27 mins ago
|
show 7 more comments
To answer your first question, about how you can catch errors during self-study, I think that you need to have others check your proofs. There have been numerous alleged proofs in the history of mathematics by well-known mathematicians that were later demonstrated to be insufficient or wrong. So, I think you need to find a community of researchers, online or not, to exchange your ideas with them.
As a matter of fact, these days I talk to many people about my future B.Sc. thesis which is going to be about machine learning. What I'm going to write is something that has been said to me by my professors and students studying at higher levels, and I don't claim that it's the best possible approach. So, please keep that in mind.
I think the starting point is to get a copy of the book Elements of Statistical Learning by Hastie and Tibshirani. As a more advanced text to supplement it, you can use Pattern Recognition and Machine Learning by C. Bishop. I think you already know this or probably have even better suggestions for this part.
After reading these two books, you can read the book that Ian Goodfellow, Yoshua Bengio and Aaron Courville have written about deep learning with the same name: Deep Learning.
Once you start reading the book, you will be surprised to see how little you need to know to read through the chapters.
You need to take a course in Stochastic Processes. Now, engineering students take this course too. If you can, take this course from the engineering department because they usually avoid measure theory and depending on the lecturers, you may learn some things about signals and systems during the course.
If you want to take the rigorous path, you will need to learn measure theory first; then you'll be able to understand stochastic calculus rigorously. Last semester, I took a course in stochastic processes from the computer engineering department, and you would be surprised how little most computer engineers know about the rigorous treatment of the things they work with every day. A book that engineers use for a more or less mathematical treatment is Gallager's Stochastic Processes, which is a terrible book in my opinion: it doesn't satisfy mathematicians, nor does it convey the beautiful intuitions that engineering sometimes offers.
One advantage of the rigorous path is that you also get exposure to other fields, like financial mathematics. The rigorous approach is helpful when you want to define things like conditional expectation and the Radon-Nikodym derivative. Still, I don't think it's wise to spend too much time on abstractions.
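For reference, here are the standard measure-theoretic definitions behind those two terms; this is textbook material stated from memory, not something taken from any particular book mentioned here. Given a probability space $(\Omega,\mathcal{F},P)$, a sub-$\sigma$-algebra $\mathcal{G}\subseteq\mathcal{F}$ and an integrable random variable $X$, the conditional expectation $E[X\mid\mathcal{G}]$ is the ($P$-a.s. unique) $\mathcal{G}$-measurable random variable satisfying
$$\int_A E[X\mid\mathcal{G}]\,dP=\int_A X\,dP\qquad\text{for all }A\in\mathcal{G}.$$
Its existence follows from the Radon-Nikodym theorem: the (signed) measure $\nu(A)=\int_A X\,dP$ on $\mathcal{G}$ is absolutely continuous with respect to $P|_{\mathcal{G}}$, so $E[X\mid\mathcal{G}]=d\nu/d(P|_{\mathcal{G}})$.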
You need to spend a lot of time on programming; learn Python or R, preferably. You need to learn about Markov chain Monte Carlo methods, and at some point you may also need the calculus of variations. Overall, the list of things you could learn is endless. You might like to learn differential geometry to understand information geometry, which is more theoretical than practical. Some knowledge from physics, such as thermodynamics, can also help when you study things like the Boltzmann machine. Again, I would like to emphasize that many of the recent advances in neural networks and deep learning do not really require advanced (abstract) mathematics. Some linear algebra, a good understanding of probability theory, some experience with matrix calculations as in The Matrix Cookbook, and some of the creativity that engineers have is enough to start your journey. Once you have started and have chosen your final destination, you will acquire the knowledge you need along the way.
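To make the MCMC remark above concrete, here is a minimal random-walk Metropolis sampler in Python. It is only an illustrative sketch (it assumes numpy, and the target density, step size and sample count are arbitrary choices), not a recipe from any of the books above.

    import numpy as np

    def metropolis(log_target, x0, n_samples, step=0.5, seed=0):
        # Random-walk Metropolis for a 1-D unnormalized log-density.
        rng = np.random.default_rng(seed)
        samples = np.empty(n_samples)
        x = x0
        log_p = log_target(x)
        for i in range(n_samples):
            x_prop = x + step * rng.normal()          # Gaussian proposal
            log_p_prop = log_target(x_prop)
            # Accept with probability min(1, target(x_prop) / target(x)).
            if np.log(rng.uniform()) < log_p_prop - log_p:
                x, log_p = x_prop, log_p_prop
            samples[i] = x
        return samples

    # Example: sample from a standard normal (log-density up to a constant).
    draws = metropolis(lambda x: -0.5 * x ** 2, x0=0.0, n_samples=10000)
    print(draws.mean(), draws.std())  # should be roughly 0 and 1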
edited 59 mins ago
answered 1 hour ago
stressed out
Thank you again. If I had to create a minimal list of topics to cover it would be: 1) Introduction to Statistical Learning, 2) Applied Predictive Models, 3) Some Statistical Inference, 4) Measure Theory, 5) Complex Analysis, 6) Topology, 7) Modern Probability Theory, 8) PDEs, 9) Stochastic Calculus, 10) Linear Algebra, 11) Bishop Pattern Recognition, 12) Bayesian Decision Analysis, 13) Elements of Statistical Learning, 14) Deep Learning Book, 15) Reinforcement Learning Book. Do you think this can be cut down even further? I might still be biased.
– Kalkirin
53 mins ago
Well, many of the topics you mentioned have enormous overlaps. I don't think you need topology; the amount of topology one learns while learning analysis is enough. I don't think learning about non-Hausdorff spaces will be useful to you later, but I could be wrong. Stochastic calculus is not easy to learn because it's very broad: there is Itô calculus, with many results like the Feynman-Kac formula, and there's also Malliavin calculus (a tiny numerical taste follows below). I think a fair amount of measure theory, complex analysis, linear algebra and PDEs should give you a reasonable mathematical background.
– stressed out
46 mins ago
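As the tiny numerical taste promised above: an Euler-Maruyama discretization of geometric Brownian motion, one of the simplest objects Itô calculus handles. This is only a toy sketch (it assumes numpy, and the drift, volatility and step count are made-up values).

    import numpy as np

    rng = np.random.default_rng(0)

    # Euler-Maruyama for dS = mu * S dt + sigma * S dW (geometric Brownian motion).
    mu, sigma, S0 = 0.05, 0.2, 1.0
    T, n_steps = 1.0, 1000
    dt = T / n_steps

    S = np.empty(n_steps + 1)
    S[0] = S0
    for i in range(n_steps):
        dW = np.sqrt(dt) * rng.normal()               # Brownian increment
        S[i + 1] = S[i] + mu * S[i] * dt + sigma * S[i] * dW

    print("terminal value:", S[-1])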
Thank you so much! I'll rework a budget of my available study time per day around my job, and a budget of my growing savings, to see when I can comfortably go to graduate school. I'll use that to decide whether I can add any other subjects to this minimal list. I'll follow your recommendation and let Topology for Analysis go for now. However, I think it could be useful for certain projects in machine vision, where you try to classify objects by similar topology through depth perception and 3D model prediction as one layer of an analysis. What do you think of that?
– Kalkirin
35 mins ago
@Kalkirin Well, I'm afraid most of the topics you mention are so deep that if you try to study them all to a level that lets you follow research, you will never actually get to the PhD. If you want to study topology, a natural thing to do is to study algebraic topology, because Betti numbers and cohomology are computable, while point-set topology itself is not very computational and is therefore hard to implement on a computer. If you want to learn about computer vision, you need projective geometry and the camera model more than topology (see the small sketch below). Are you a CS student?
– stressed out
27 mins ago
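A rough sketch of the pinhole camera model mentioned in the comment above, for concreteness. The intrinsic matrix and the points are made-up values and it assumes numpy, so treat it as an illustration rather than a reference implementation.

    import numpy as np

    # Made-up intrinsics: focal lengths fx, fy and principal point (cx, cy).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Extrinsics: identity rotation, camera translated 5 units along z (illustrative only).
    R = np.eye(3)
    t = np.array([0.0, 0.0, 5.0])

    def project(points_3d):
        # Project Nx3 world points to Nx2 pixel coordinates with a pinhole camera.
        cam = points_3d @ R.T + t        # world -> camera coordinates
        pix = cam @ K.T                  # apply intrinsics
        return pix[:, :2] / pix[:, 2:3]  # perspective divide

    pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 2.0]])
    print(project(pts))  # the world origin lands at the principal point (320, 240)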
Also, Topological Data Analysis using algebraic topology is still in its infancy, but have you heard of any uses of it in machine learning yet? Here's a potentially good article showing how TDA and machine learning together are greater than the sum of their parts (kdnuggets.com/2015/09/…)
– Kalkirin
27 mins ago
|
show 7 more comments
Kalkirin is a new contributor. Be nice, and check out our Code of Conduct.
May I know why you want to learn functional analysis? I'm also in the same boat as you. I'm a third-year undergrad determined to improve my problem-solving skills before I go for a PhD in data science and machine learning. But as far as I know, functional analysis doesn't really have much to do with machine learning, so why do you want to learn it? It's more useful for people who want to study mathematical physics or PDEs, I think. But I'm not an expert; I'm just asking.
– stressed out
2 hours ago
Hey Stressed Out, sorry for the wait; I went on a walk. The functional analysis was for this book: Foundations of Modern Probability by Olav Kallenberg. It says you should have experience with functional analysis, complex variables and topology. The text covers measure theory, distributions, random sequences, characteristic functions + the CLT, conditioning, martingales, Markov processes, random walks, ergodic theory, Poisson processes, Gaussian processes / Brownian motion, embeddings, convergence theorems, and stochastic calculus (a small simulation sketch of a couple of these follows below). Functional analysis also has ties to control theory, which I want to study.
– Kalkirin
1 hour ago
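Incidentally, a few topics on that list (random walks, the CLT, Brownian motion) are easy to experiment with numerically while reading. A small sketch, assuming numpy, with arbitrary parameters:

    import numpy as np

    rng = np.random.default_rng(42)

    # Donsker-style picture: a rescaled simple random walk approximates Brownian motion.
    n_steps, n_paths = 1000, 5000
    steps = rng.choice([-1.0, 1.0], size=(n_paths, n_steps))
    walks = steps.cumsum(axis=1) / np.sqrt(n_steps)   # rescale by sqrt(n)

    # CLT check: the endpoint, i.e. W(1), should look approximately N(0, 1).
    endpoints = walks[:, -1]
    print("mean (should be near 0):", endpoints.mean())
    print("variance (should be near 1):", endpoints.var())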
Thanks for the answer. I know nothing about control theory and ergodic theory, but the other things you mentioned mostly require measure theory, I think. I say that because I took a course in stochastic differential equations (which I passed with a C- :P). I think you might want to take a course in PDEs more than functional analysis; read a PDE book like Evans. I hope one day I can read it. I suggest you ask a separate question about what prerequisites you need to study machine learning before you commit to a difficult and time-consuming subject like functional analysis.
– stressed out
1 hour ago
Here is the complete list of topics I want to study (ideally): Measure Theory, Complex Analysis, Fourier Series, Topology for Analysis, Functional Analysis, Foundations of Modern Probability, Nonlinear Dynamical Systems, Point Estimation, Hypothesis Testing, Statistical Learning, Pattern Recognition, Differential Geometry, Nonlinear Dimensionality Reduction, Matrix Computations (Applied Linear Algebra), Linear/Nonlinear Programming, and Information Theory. After that it's the Deep Learning book and Reinforcement Learning. Then onto signal processing, control theory and feature engineering/selection.
– Kalkirin
1 hour ago
Well, I hadn't read your comment when I wrote the answer. I have heard from people that information geometry is largely theoretical; many top engineers have never studied it. You can find Goodfellow's article about GANs on arXiv. It doesn't use information geometry, does it? I mean, mathematicians can develop many impractical tools, but the ones that usually make it into the engineering world use little math.
– stressed out
1 hour ago