Odd Loss Curves for Object Detection Task
I'm re-training a Single Shot Detector (specifically ssdlite_mobilenet_v2_coco from the TensorFlow model zoo) on a new set of images. I have about 15k images in the training set and about 4k in the eval set. The mini-batch size is 24; otherwise the settings are the model zoo defaults.
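(For scale, with these numbers one epoch is roughly 625 training steps, so TensorBoard's step axis maps to epochs as sketched below. The 200k-step run length is a hypothetical figure for illustration, not something from my config.)

```python
# Back-of-the-envelope steps <-> epochs arithmetic for the setup above.
# All values are from the question except num_steps, which is a
# hypothetical run length chosen for illustration.
train_images = 15_000  # approximate training-set size
batch_size = 24        # mini-batch size

steps_per_epoch = train_images / batch_size  # ~625 steps per epoch
num_steps = 200_000                          # hypothetical run length
print(f"{steps_per_epoch:.0f} steps per epoch")
print(f"{num_steps / steps_per_epoch:.0f} epochs in a {num_steps}-step run")
```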
Note that, after training, the model gives excellent performance on the test set (another ~3k images). My issue isn't the quality of the model but understanding the loss curves I'm seeing.
As expected, during the first few epochs the bounding box predictions are all over the place and wildly inaccurate. Very quickly the net learns that it's better to predict nothing. At this point I'd expect both the training and evaluation losses to drop enormously, but only the training loss drops; the loss on the evaluation set is virtually unchanged.
As training progresses, the bounding box predictions and the classifications for those boxes get more and more accurate. I'd expect both the training and evaluation losses to drop, with the training loss perhaps dropping faster. What I actually see is the training loss remaining nearly constant while the evaluation loss continues to drop. Somehow we're not improving on the training set, yet our generalization performance is improving, which seems quite odd to me.
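One caveat when comparing the two curves point-for-point: the training loss is typically logged per mini-batch on augmented images, while the eval loss is averaged over the whole eval set, and TensorBoard additionally smooths the displayed training curve. A minimal sketch of that smoothing, assuming the standard TensorBoard-style exponential moving average (the weight plays the role of the smoothing slider and is chosen arbitrarily here):

```python
def ema_smooth(values, weight=0.9):
    """TensorBoard-style exponential moving average of a scalar series.

    `weight` stands in for the TensorBoard smoothing slider; 0.9 is an
    arbitrary illustrative choice.
    """
    smoothed, last = [], values[0]
    for v in values:
        last = weight * last + (1 - weight) * v
        smoothed.append(last)
    return smoothed

# e.g. compare noisy per-batch training losses to their smoothed version
raw = [2.1, 1.4, 2.5, 1.2, 2.0, 1.3]  # made-up per-step losses
print(ema_smooth(raw))
```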
The model is regularized, so it's possible that the net is in fact learning more generalizable solutions that yield similar training-set performance. However, the regularization loss continues to grow too, which would seem to indicate that the model isn't doing that.
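To make that concrete, here is a minimal sketch of the usual SSD loss decomposition, total = localization + classification + L2 regularization. The lambda, weight shapes, and data-loss values below are made-up illustrations, not values from the actual run:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in conv kernels; shapes and scale are arbitrary.
weights = [rng.normal(scale=0.1, size=(3, 3, 32, 32)) for _ in range(4)]
l2_lambda = 4e-5  # hypothetical weight-decay coefficient

localization_loss = 0.8    # e.g. smooth-L1 over matched anchor boxes
classification_loss = 1.5  # e.g. softmax loss over anchor classes

regularization_loss = l2_lambda * sum(np.sum(w ** 2) for w in weights)
total_loss = localization_loss + classification_loss + regularization_loss
print(f"reg={regularization_loss:.4f}  total={total_loss:.4f}")
```

The point of the decomposition: if the data terms plateau while the weight norms keep growing, the total training loss can look flat even though the underlying fit is still changing, which is one possible reading of the curves described above.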
Here are some images from TensorBoard showing what I'm seeing: [TensorBoard screenshots of the loss curves]
Any insights?
neural-network tensorflow object-detection
asked 10 hours ago by Oliver