Neural Networks - Backpropagation
The following code is my implementation of a neural network (one hidden layer) that tries to predict a number from input data.
- Number of input nodes: 11
- Number of nodes in the hidden layer: 11
- Number of nodes in the output layer: 1
- m: number of training examples, here 4527
- X: [11, m] matrix of inputs
- y: [1, m] matrix of targets
- w1: weight matrix from the input layer to the hidden layer
- b1: bias vector for the hidden layer
- w2: weight matrix from the hidden layer to the output layer
- b2: bias vector for the output layer
- alpha: learning rate
- ite: number of iterations, here 10000
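The snippet below does not show how the parameters are initialized; one setup consistent with the shapes above (the small random scale is an assumption, not something stated in the post) would be:

    import numpy as np

    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((11, 11)) * 0.01  # input -> hidden weights, [11, 11]
    b1 = np.zeros((11, 1))                     # hidden-layer bias, [11, 1]
    w2 = rng.standard_normal((1, 11)) * 0.01   # hidden -> output weights, [1, 11]
    b2 = np.zeros((1, 1))                      # output-layer bias, [1, 1]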
Since I'm trying to predict a continuous-valued output, I use the sigmoid function in the hidden layer and the identity function in the output layer:
    import numpy as np

    def propagate(X, y, w1, b1, w2, b2, alpha, ite):
        assert X.shape[0] == 11
        assert y.shape[0] == 1
        assert X.shape[1] == y.shape[1]
        m = X.shape[1]
        J = np.zeros(shape=(ite, 1))
        iteNo = np.zeros(shape=(ite, 1))
        for i in range(1, ite + 1):
            # Forward pass: sigmoid hidden layer, identity output
            z1 = np.dot(w1, X) + b1   # [11, m]
            a1 = sigmoid(z1)
            z2 = np.dot(w2, a1) + b2  # [1, m]
            # Backward pass: gradients of the mean squared error cost
            dz2 = (z2 - y) / m
            dw2 = np.dot(dz2, a1.T)
            db2 = np.sum(dz2, axis=1, keepdims=True)
            dz1 = np.dot(w2.T, dz2) * derivative_of_sigmoid(z1)
            dw1 = np.dot(dz1, X.T)
            db1 = np.sum(dz1, axis=1, keepdims=True)
            # Gradient-descent parameter updates
            w2 = w2 - (alpha * dw2)
            b2 = b2 - (alpha * db2)
            w1 = w1 - (alpha * dw1)
            b1 = b1 - (alpha * db1)
            # Record iteration number and cost for plotting
            iteNo[i - 1] = i
            J[i - 1] = np.dot((z2 - y), (z2 - y).T) / (2 * m)
            print(z2)  # debug print of current predictions
        return w1, b1, w2, b2, iteNo, J
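sigmoid and derivative_of_sigmoid are not shown above; they are assumed to be the usual logistic function and its derivative:

    import numpy as np

    def sigmoid(z):
        # Element-wise logistic function 1 / (1 + e^(-z))
        return 1.0 / (1.0 + np.exp(-z))

    def derivative_of_sigmoid(z):
        # sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z))
        s = sigmoid(z)
        return s * (1.0 - s)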
I have tried it both ways (with feature normalization and scaling, and without), but my cost function J varies as follows with the number of iterations (plot of J below).

[Plot: $x$-axis: number of iterations; $y$-axis: error $\times 10^{12}$.]
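For reference, the normalization step could be per-feature standardization over the [11, m] matrix (a minimal sketch of one common scheme; the exact scheme used is not shown, and X_train, y_train, and alpha = 0.01 below are hypothetical):

    import numpy as np

    def standardize(X):
        # Row-wise (per-feature) zero mean and unit variance; X is [11, m]
        mu = X.mean(axis=1, keepdims=True)
        sigma = X.std(axis=1, keepdims=True)
        return (X - mu) / (sigma + 1e-8)  # epsilon guards against constant features

    # Hypothetical training call:
    # w1, b1, w2, b2, iteNo, J = propagate(standardize(X_train), y_train,
    #                                      w1, b1, w2, b2, alpha=0.01, ite=10000)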
Please help!
machine-learning python neural-network