What is the “posterior collapse” phenomenon?
I was going through the paper Towards Text Generation with Adversarially Learned Neural Outlines, and it explains that VAEs are hard to train for text generation because of this problem. The paper states:
"the model ends up relying solely on the auto-regressive properties of the decoder while ignoring the latent variables, which become uninformative."
Please simplify and explain the problem in a lucid way.
python deep-learning autoencoder vae
asked yesterday by thanatoz
1 Answer
Drawing on the explanations provided in Z-Forcing: Training Stochastic Recurrent Networks:
When the posterior has not collapsed, $z_d$ (the $d$-th dimension of the latent variable $z$) is sampled from $q_\phi(z_d|x)=\mathcal{N}(\mu_d, \sigma^2_d)$, where $\mu_d$ and $\sigma_d$ are stable functions of the input $x$. In other words, the encoder distills useful information from $x$ into $\mu_d$ and $\sigma_d$.
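To make this concrete, here is a minimal sketch (assumed PyTorch, with illustrative layer sizes that come from neither paper) of a Gaussian encoder that maps $x$ to $\mu$ and $\log\sigma^2$ and draws $z$ with the reparameterization trick:

```python
import torch
import torch.nn as nn

class GaussianEncoder(nn.Module):
    """Maps an input x to the parameters (mu, log_var) of q_phi(z|x)."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.hidden = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.mu = nn.Linear(hidden_dim, latent_dim)       # mu_d for every dimension d
        self.log_var = nn.Linear(hidden_dim, latent_dim)  # log sigma_d^2 for every dimension d

    def forward(self, x):
        h = self.hidden(x)
        return self.mu(h), self.log_var(h)

def sample_z(mu, log_var):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * log_var) * eps
```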
We say the posterior is collapsing when the signal from the input $x$ to the posterior parameters is either too weak or too noisy; as a result, the decoder starts ignoring the $z$ samples drawn from the posterior $q_\phi(z|x)$.
A too-noisy signal means that $\mu_d$ and $\sigma_d$ are unstable, so the sampled $z$'s are unstable too, which forces the decoder to ignore them. By "ignore" I mean: the output of the decoder $\hat{x}$ becomes almost independent of $z$, which in practice translates to producing generic outputs $\hat{x}$ that are crude representatives of all the seen $x$'s.
A too-weak signal translates to
$$q_\phi(z|x) \simeq q_\phi(z) = \mathcal{N}(a,b),$$
which means the $\mu$ and $\sigma$ of the posterior become almost disconnected from the input $x$. In other words, $\mu$ and $\sigma$ collapse to constant values $a$ and $b$, channeling a weak (constant) signal from different inputs to the decoder. As a result, the decoder tries to reconstruct $x$ while ignoring the useless $z$'s sampled from $\mathcal{N}(a,b)$.
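A practical way to see this collapse during training is to monitor the per-dimension KL between $q_\phi(z_d|x)$ and the prior: dimensions whose KL stays near zero carry essentially no information about $x$. The sketch below is my own diagnostic, assuming a standard $\mathcal{N}(0,1)$ prior (not something taken from either paper), and uses the closed-form Gaussian KL:

```python
import torch

def kl_per_dimension(mu, log_var):
    """Closed-form KL( N(mu, sigma^2) || N(0, 1) ) for each latent dimension,
    averaged over the batch. mu and log_var have shape (batch, latent_dim)."""
    kl = 0.5 * (mu.pow(2) + log_var.exp() - log_var - 1.0)
    return kl.mean(dim=0)  # one value per latent dimension

# Usage with the encoder outputs:
# kl_d = kl_per_dimension(mu, log_var)
# collapsed_dims = (kl_d < 1e-2)  # dimensions the decoder is effectively ignoring
```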
Here are some explanations from Z-Forcing: Training Stochastic Recurrent Networks:
"In these cases, the posterior approximation tends to provide a too weak or noisy signal, due to the variance induced by the stochastic gradient approximation. As a result, the decoder may learn to ignore z and instead to rely solely on the autoregressive properties of x, causing x and z to be independent, i.e. the KL term in Eq. 2 vanishes."
and
"In various domains, such as text and images, it has been empirically observed that it is difficult to make use of latent variables when coupled with a strong autoregressive decoder."
Here, for the sake of clarity, the simplest form of the KL term (with a standard normal prior) is
$$D_{KL}(q_\phi(z|x) \parallel p(z)) = D_{KL}(q_\phi(z|x) \parallel \mathcal{N}(0,1)).$$
The paper itself uses a more complicated Gaussian prior instead of $\mathcal{N}(0,1)$.
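For completeness, here is a hedged sketch of the resulting training objective with an $\mathcal{N}(0, I)$ prior; the Bernoulli reconstruction term is only a placeholder (a text VAE would use a per-token cross-entropy from its autoregressive decoder). When the posterior collapses, the `kl` term below shrinks toward zero and the loss is carried entirely by the reconstruction term:

```python
import torch
import torch.nn.functional as F

def negative_elbo(x, x_recon_logits, mu, log_var):
    """Negative ELBO = reconstruction loss + KL(q_phi(z|x) || N(0, I)).
    Under posterior collapse, the `kl` term is ~0 and training is driven
    purely by the (autoregressive) reconstruction term."""
    recon = F.binary_cross_entropy_with_logits(
        x_recon_logits, x, reduction="sum"
    ) / x.size(0)
    kl = 0.5 * torch.sum(mu.pow(2) + log_var.exp() - log_var - 1.0) / x.size(0)
    return recon + kl
```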
answered yesterday by Esmailian (edited yesterday)