Where is my error in understanding gradient descent calculated two different ways?
The gradient descent algorithm is, most simply, $w_i' = w_i - r\,\partial C/\partial w_i$, where the $w_i$ are the old weights, the $w_i'$ are the new weights, $C$ is the cost, and $r$ is the learning rate. I'm aware of the graphical justification for this.
For a single weight, this is $w' = w - r\,dC/dw$, as in the sketch below.
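For concreteness, here is a minimal sketch of that update applied repeatedly, using an assumed quadratic cost $C(w) = (w-3)^2$ (the cost function, starting point, and learning rate are purely illustrative):

```python
# Minimal sketch of the standard gradient-descent update w' = w - r * dC/dw
# for a single weight, on an assumed cost C(w) = (w - 3)**2.
def dC_dw(w):
    return 2.0 * (w - 3.0)   # derivative of the assumed cost

w = 0.0          # illustrative initial weight
r = 0.1          # learning rate
for _ in range(50):
    w = w - r * dC_dw(w)     # gradient-descent update
print(w)          # approaches the minimizer w = 3
```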
Second, we also have the relation $\Delta C \approx \sum_i (\partial C/\partial w_i)\,\Delta w_i$, which is just the first-order (linear) approximation of $C$ near the point where its derivatives are evaluated. For a single weight this reduces to $\Delta C/\Delta w \approx dC/dw$, i.e., the definition of the derivative; a quick numerical check is sketched below.
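A small numerical check of that first-order approximation, again on the assumed cost $C(w) = (w-3)^2$:

```python
# Sketch checking deltaC ≈ (dC/dw) * deltaw on the assumed cost C(w) = (w - 3)**2.
def C(w):
    return (w - 3.0) ** 2

def dC_dw(w):
    return 2.0 * (w - 3.0)

w, dw = 1.0, 1e-3
exact = C(w + dw) - C(w)     # actual change in cost
approx = dC_dw(w) * dw       # linear approximation
print(exact, approx)         # the two agree to first order in dw
```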
Now let there be only one weight, and let $s = -\Delta C$. Then $-s = (dC/dw)\,(w' - w)$, where $\Delta w = w' - w$ splits the change into the original and perturbed values. Solving gives $w' - w = -s\,/\,(dC/dw)$, so $w' = w - s\,(1/(dC/dw))$. (Since we want to reduce the cost, we want $\Delta C \le 0$, so $s \ge 0$, and it looks like an ordinary positive learning rate.)
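To make the discrepancy concrete, here is a sketch (same assumed cost as above) applying one step of each rule near the minimum, where the slope is small; the two formulas clearly give different updates:

```python
# Sketch contrasting the two updates on the assumed cost C(w) = (w - 3)**2:
#   gradient descent:  w' = w - r * dC/dw
#   fixed-decrease:    w' = w - s / (dC/dw)
def dC_dw(w):
    return 2.0 * (w - 3.0)

w = 2.9          # near the minimum, where the slope dC/dw = -0.2 is small
r, s = 0.1, 0.1
w_gd = w - r * dC_dw(w)      # small step: 2.9 + 0.02 = 2.92
w_fx = w - s / dC_dw(w)      # large step: 2.9 + 0.5  = 3.4
print(w_gd, w_fx)
```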
What I haven't been able to understand is why I get two different update rules for what appears to be the same operation, updating the weight to lower the cost. In one case the step is proportional to $dC/dw$, and in the other it is proportional to $1/(dC/dw)$. In both cases $r$ and $s$ are small positive numbers.
What am I missing?
machine-learning gradient-descent
asked 5 mins ago by Walt Donovan (new contributor)