What does the embedding mean in FaceNet?
I am reading the FaceNet paper, but I can't figure out what the embedding means in this paper. Is it a hidden layer of the deep CNN?
P.S. English isn't my native language.
cnn embeddings
asked Nov 13 '18 at 22:50 by Шах
2 Answers
Assume your features lie in an $\mathbb{R}^n$ space; e.g., if your input is a picture with $28 \times 28$ pixels, then $n$ would be $28 \times 28 = 784$.
Now you can "embed" your features into another space $\mathbb{R}^d$, where often $d < n$. This way you learn a rich representation of your input. When you compress your $784$ input pixels to, let's say, $64$ values, you have compressed your input by more than a factor of $10$ and can eliminate redundant or useless features.
In a reconstruction-based setup, this embedding is learned in such a way that you could (approximately) restore your original $784$ pixels from your $64$ compressed values.
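To make this concrete, here is a minimal sketch of learning such a $784 \to 64$ embedding with an autoencoder, the reconstruction-based setup this answer describes. It uses PyTorch, and the layer sizes, learning rate, and random stand-in batch are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# encoder maps 784 pixels to a 64-d embedding; decoder maps it back
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 64))
decoder = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 784))
opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

x = torch.rand(32, 784)      # stand-in batch of flattened 28x28 images
for _ in range(100):         # toy training loop
    z = encoder(x)           # the 64-d embedding
    x_hat = decoder(z)       # reconstruction from the embedding
    loss = F.mse_loss(x_hat, x)  # penalize reconstruction error
    opt.zero_grad()
    loss.backward()
    opt.step()

embedding = encoder(x)       # shape: (32, 64)
```

Note that FaceNet itself trains the embedding with a triplet loss rather than by reconstruction; the autoencoder here only illustrates the general idea of compressing $\mathbb{R}^n$ into $\mathbb{R}^d$.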
answered Nov 14 '18 at 8:13 by Andreas Look (edited Nov 15 '18 at 9:32)
Thank you for the simple and clear explanation; I think I get it now!
– Шах, Nov 14 '18 at 20:52
An embedding is a mapping from discrete objects, such as words, to vectors of real numbers.
– TensorFlow/Embeddings
With reference to the FaceNet paper, the embedding here simply means the vector obtained by performing a forward pass of the network on an image (a 128-dimensional vector in the paper). The embeddings of two images are then compared to measure how close the images are. The loss function, $L=\sum_{i}^{N} \left[ \|f(x_i^a)-f(x_i^p)\|_2^2 - \|f(x_i^a)-f(x_i^n)\|_2^2 + \alpha \right]_+$, is the triplet loss: it compares the squared Euclidean distances between the embeddings of the anchor, positive, and negative images obtained by forward propagation, and penalizes triplets in which the anchor is not closer to the positive than to the negative by at least the margin $\alpha$. This kind of architecture is also called a Siamese network.
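As an illustration, here is a minimal sketch of this triplet loss in PyTorch. Only the 128-dimensional, L2-normalized embedding comes from the paper; the batch size, margin value, and random stand-in embeddings are assumptions for the example:

```python
import torch
import torch.nn.functional as F

def triplet_loss(f_a, f_p, f_n, alpha=0.2):
    # squared Euclidean distances anchor-positive and anchor-negative
    d_ap = (f_a - f_p).pow(2).sum(dim=1)
    d_an = (f_a - f_n).pow(2).sum(dim=1)
    # hinge [.]_+ : only triplets violating the margin contribute
    return F.relu(d_ap - d_an + alpha).mean()

# stand-in embeddings for a batch of 8 triplets; in FaceNet these would be
# the network outputs f(x) for anchor, positive, and negative images
f_a = F.normalize(torch.randn(8, 128), dim=1)   # anchors
f_p = F.normalize(torch.randn(8, 128), dim=1)   # positives (same identity)
f_n = F.normalize(torch.randn(8, 128), dim=1)   # negatives (other identity)
print(triplet_loss(f_a, f_p, f_n))
```

At inference time only the embedding function is needed: two faces are declared the same identity when the distance between their embeddings falls below a threshold.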
I suggest you read the full original FaceNet paper for a clearer understanding of the FaceNet architecture and how it works.
answered Nov 14 '18 at 20:23 by thanatoz (edited yesterday)