Difference between “reducing batch_size” and “increasing epochs” to decrease loss amount?
In my experience, both reducing batch_size and increasing epochs can decrease the loss. But is there any difference between the two approaches? Does a lower loss mean the same thing regardless of how it was reached, or does the way you reach it affect the results?

For example, I got the same loss of 2.5e-4 in both of the following cases:

1. batch_size = 1, epochs = 100
2. batch_size = 60, epochs = 1000

Are these the same result?
Tags: lstm, loss-function, epochs
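One concrete difference between the two configurations is the number of weight updates performed. With plain mini-batch gradient descent there is one update per batch per epoch, so (assuming a hypothetical dataset of 600 samples, a number invented purely for illustration) the counts work out very differently:

```python
import math

def num_updates(n_samples: int, batch_size: int, epochs: int) -> int:
    # one weight update per mini-batch, once per epoch
    return math.ceil(n_samples / batch_size) * epochs

# hypothetical dataset of 600 training samples
print(num_updates(600, batch_size=1, epochs=100))    # 60000 updates
print(num_updates(600, batch_size=60, epochs=1000))  # 10000 updates
```

So even when two runs end at the same loss value, they took very different optimization paths, and the resulting weights need not be the same.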
asked 22 hours ago by user145959, edited 22 hours ago (Data Science Stack Exchange)
1 Answer
The concept of overfitting does not generalize to any specific combination of batch size and epochs; it depends on your data and on the architecture of your model.

A friend of mine ran into these scenarios with a CPU-based image classifier:

1. Using more epochs may take a lot of time to reach a desirable outcome.
2. Preferring small batch sizes over more epochs may take less time to compute, but the model may not reach the desirable outcome within that epoch limit.

I used a GPU and my results were different. With few epochs and a better convolutional architecture, I reached better accuracy with not-so-small batch sizes. When I increased the epochs, accuracy improved until I felt the model was overfitting. When I increased the batch size, accuracy stopped improving at a decent rate. I had to find a balance at which my model was acceptable.

So yes, it is a balance, and I'm afraid it is one the designer has to take care of. Batch size and epochs are not always inversely proportional; the dataset and the architecture make all the difference in this debate.
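The trade-off can be seen even on a toy problem. Below is a minimal NumPy sketch (all numbers, including the dataset size, learning rate, and epoch counts, are invented for illustration) of plain mini-batch SGD on a one-parameter linear model: tiny batches with few epochs and large batches with many more epochs can both drive the training loss down to roughly the noise floor, but through very different update schedules:

```python
import numpy as np

# synthetic data: y = 3x plus a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 1))
y = 3.0 * X[:, 0] + rng.normal(scale=0.1, size=120)

def sgd_mse(batch_size, epochs, lr=0.05, seed=1):
    # plain mini-batch SGD on a one-parameter model y_hat = w * x
    order_rng = np.random.default_rng(seed)
    w = 0.0
    n = len(X)
    for _ in range(epochs):
        order = order_rng.permutation(n)  # reshuffle each epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            # gradient of mean squared error w.r.t. w on this batch
            grad = 2.0 * np.mean((w * X[idx, 0] - y[idx]) * X[idx, 0])
            w -= lr * grad
    return float(np.mean((w * X[:, 0] - y) ** 2))

print(sgd_mse(batch_size=1, epochs=5))    # many noisy updates, few epochs
print(sgd_mse(batch_size=60, epochs=50))  # few stable updates per epoch, more epochs
```

Both runs end near the noise variance (about 0.01), but the small-batch run gets there through many noisy per-sample updates while the large-batch run takes fewer, smoother steps; with a harder model and dataset, that difference is exactly what forces the balancing act described above.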
answered 19 hours ago by Savinay_ (new contributor)