What are the purposes of autoencoders?
Autoencoders are neural networks that learn a compressed representation of the input in order to later reconstruct it, so they can be used for dimensionality reduction. They are composed of an encoder and a decoder (which can be separate neural networks). Dimensionality reduction can be useful for dealing with, or attenuating, the issues related to the curse of dimensionality, where data becomes sparse and it becomes harder to obtain "statistical significance". So autoencoders (and algorithms like PCA) can be used to deal with the curse of dimensionality.
Why do we care about dimensionality reduction specifically using autoencoders? Why can't we simply use PCA, if the purpose is dimensionality reduction?
Why do we need to decompress the latent representation of the input if we just want to perform dimensionality reduction, or why do we need the decoder part in an autoencoder? What are the use cases? In general, why do we need to compress the input to later decompress it? Wouldn't it be better to just use the original input (to start with)?
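To make the encoder/decoder split concrete, here is a minimal sketch of an undercomplete autoencoder (the use of Keras, the layer sizes and the data shape are illustrative assumptions, not part of the setup above):

```python
from tensorflow import keras
from tensorflow.keras import layers

input_dim, latent_dim = 784, 32  # e.g. flattened 28x28 images -> 32-D code

# Encoder: compress the input to a low-dimensional latent code.
encoder = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(latent_dim, activation="relu"),
])

# Decoder: reconstruct the input from the latent code.
decoder = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(input_dim, activation="sigmoid"),
])

# Both parts are trained end-to-end to minimise the reconstruction error.
autoencoder = keras.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")

# With data x_train scaled to [0, 1], training would be:
# autoencoder.fit(x_train, x_train, epochs=20, batch_size=256)
# After training, encoder.predict(x) gives the reduced representation.
```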
Tags: machine-learning, autoencoders, dimensionality-reduction, curse-of-dimensionality
asked Mar 23 at 15:53 by nbro
Comment – nbro (Mar 23 at 16:02): See also the following question stats.stackexchange.com/q/82416/82135 on Cross Validated SE.
3 Answers
It is important to think about what sort of patterns in the data are being represented.

Suppose that you have a dataset of greyscale images, such that every image has a uniform intensity. As a human, you would realise that every element in this dataset can be described in terms of a single numeric parameter: that intensity value. This is something PCA would work fine for, because the dimensions (we can think of each pixel as a separate dimension) are perfectly linearly correlated.

Suppose instead that you have a dataset of black-and-white 128×128 px bitmap images of centred circles. As a human, you would quickly realise that every element in this dataset can be fully described by a single numeric parameter: the radius of the circle. That is a very impressive level of reduction from 16384 binary dimensions, and perhaps more importantly it is a semantically meaningful property of the data. However, PCA probably won't be able to find that pattern, because the relationship between the radius and the pixel values is not linear.

Your question was "Why can't we simply use PCA, if the purpose is dimensionality reduction?" The simple answer is that PCA is the simplest tool for dimensionality reduction, but it can miss many relationships that more powerful techniques such as autoencoders can find.
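As a rough numerical illustration of this contrast (a sketch using scikit-learn and synthetic data; the image size is reduced to 32×32 to keep it fast):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
size = 32  # 32x32 instead of 128x128, just to keep the example fast

# Dataset 1: uniform-intensity images -> the pixel values are all perfectly
# linearly correlated, so the data lie on a 1-D line in pixel space.
intensities = rng.uniform(0, 1, size=200)
uniform_imgs = np.outer(intensities, np.ones(size * size))

# Dataset 2: binary images of centred filled circles with random radii -> one
# underlying parameter (the radius), but the pixels depend on it non-linearly.
yy, xx = np.mgrid[0:size, 0:size]
dist = np.sqrt((xx - size / 2) ** 2 + (yy - size / 2) ** 2)
radii = rng.uniform(2, size / 2, size=200)
circle_imgs = np.array([(dist <= r).astype(float).ravel() for r in radii])

for name, data in [("uniform", uniform_imgs), ("circles", circle_imgs)]:
    pca = PCA(n_components=10).fit(data)
    print(name, np.round(pca.explained_variance_ratio_[:3], 3))

# The first component explains essentially all of the variance of the uniform
# dataset, but not of the circle dataset, even though a single parameter
# (the radius) generates every circle image.
```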
answered Mar 24 at 0:26 by Josiah (new contributor)
PCA is a linear method that learns a transformation of the data into a new coordinate system (a change of axes).

Since PCA looks for the directions of maximum variance, the resulting features are often quite discriminative, but there is no guarantee that the direction of greatest variance is also the most discriminative direction.

LDA (linear discriminant analysis) is a linear method that finds the direction most relevant for deciding whether a vector belongs to class A or class B.

Both PCA and LDA have non-linear kernel versions that may overcome their linear limitations.

Autoencoders can perform dimensionality reduction with other kinds of loss functions, can be non-linear, and may perform better than PCA and LDA in many cases.

There is probably no single best machine learning algorithm for any task; deep learning and neural networks are sometimes overkill for simple problems, so PCA and LDA are worth trying before other, more complex, dimensionality-reduction methods.
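As a quick illustration (a sketch using scikit-learn and its built-in digits dataset), both linear methods share the same fit/transform interface and are cheap to try before reaching for an autoencoder:

```python
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X, y = load_digits(return_X_y=True)  # 1797 samples, 64 pixel features, 10 classes

# PCA: unsupervised, keeps the directions of maximum variance.
X_pca = PCA(n_components=2).fit_transform(X)

# LDA: supervised, keeps the directions that best separate the classes
# (at most n_classes - 1 components, i.e. 9 for the digits).
X_lda = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)

print(X_pca.shape, X_lda.shape)  # (1797, 2) (1797, 2)
```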
answered Mar 23 at 21:29 by Pedro Henrique Monforte (new contributor)
Comment – nbro (Mar 23 at 21:45): What does LDA have to do with the question?
Comment – Pedro Henrique Monforte (Mar 23 at 21:46): LDA can be used for dimensionality reduction. The original algorithm derives only one projection, but you can use it to get lower-ranking discriminative directions for more accurate modelling.
A use case of autoencoders (in particular, of the decoder, or generative model, of the autoencoder) is to denoise the input. This type of autoencoder, called a denoising autoencoder, takes a partially corrupted input and attempts to reconstruct the corresponding uncorrupted input. There are several applications of this model. For example, if you had a corrupted image, you could potentially recover the uncorrupted one using a denoising autoencoder.

Autoencoders and PCA are related: an autoencoder with a single fully-connected hidden layer, a linear activation function and a squared error cost function trains weights that span the same subspace as the one spanned by the principal component loading vectors, but they are not identical to the loading vectors.

For more info, have a look at the paper From Principal Subspaces to Principal Components with Linear Autoencoders (2018), by Elad Plaut. See also this answer, which also explains the relation between PCA and autoencoders.
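For concreteness, here is a minimal denoising-autoencoder sketch (assuming Keras and flattened images scaled to [0, 1]; the architecture and noise level are illustrative, not prescribed by the text above):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

input_dim = 784  # e.g. flattened 28x28 images scaled to [0, 1]

denoiser = keras.Sequential([
    keras.Input(shape=(input_dim,)),
    layers.Dense(128, activation="relu"),   # encoder
    layers.Dense(32, activation="relu"),    # latent code
    layers.Dense(128, activation="relu"),   # decoder
    layers.Dense(input_dim, activation="sigmoid"),
])
denoiser.compile(optimizer="adam", loss="mse")

# The key difference from a plain autoencoder is the training pair:
# the input is a corrupted version of x_train, the target is the clean x_train.
# x_noisy = np.clip(x_train + 0.3 * np.random.normal(size=x_train.shape), 0.0, 1.0)
# denoiser.fit(x_noisy, x_train, epochs=20, batch_size=256)
# At test time, denoiser.predict(corrupted_images) returns denoised reconstructions.
```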
edited Mar 23 at 15:59; answered Mar 23 at 15:53 by nbro