SVM with TensorFlow
I have a NumPy array with the following data, for example:
['13.398249765480822' '19.324784598731966' '80.98629514090669'
 '-3.703122956721927e-06' '80.98629884402965' '24.008452881790028'
 '679.6408224307851' '2498.8247399799975' 'fear']
And another NumPy array of the same length, with different numbers and the label 'neutral'.
I'm using the setosa SVM code from GitHub, along with other articles, to build a binary classifier (fear or neutral), but I get the error below: I don't know how to take all the numbers in the array into account, whereas the setosa code only uses two of them when building the mesh.
## SVM with TensorFlow
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

sess = tf.Session()
x_vals = np.array([[x[0], x[1], x[2], x[3], x[4], x[5], x[6], x[7]] for x in matrix])
y_vals = np.array([1 if y[8] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = 150
# Initialize placeholders
x_data = tf.placeholder(shape=[None, 8], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, 8], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec = []
batch_accuracy = []
for i in range(300):
    rand_index = np.random.choice(len(x_vals), size=batch_size)
    rand_x = x_vals[rand_index]
    rand_y = np.transpose([y_vals[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    loss_vec.append(temp_loss)
    acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
                                             y_target: rand_y,
                                             prediction_grid: rand_x})
    batch_accuracy.append(acc_temp)
    if (i + 1) % 75 == 0:
        print('Step #' + str(i + 1))
        print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
                                                     y_target: np.transpose([y_vals]),
                                                     prediction_grid: grid_points})
grid_predictions = grid_predictions.reshape(xx.shape)
# Plot points and grid
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='Non setosa')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Petal Length')
plt.ylabel('Sepal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
The error obtained is:
File "test.py", line 154, in <module>
prediction_grid: grid_points})
ValueError: Cannot feed value of shape (30119320, 2) for Tensor u'Placeholder_2:0', which has shape '(?, 8)'
I know the shapes don't match, but I don't know how to change this or what to do, because I need a classifier that uses all 8 features and the two classes, 'neutral' and 'fear'.
Original code is here.
Tags: classification, tensorflow, svm
Please provide a link to the code for later reference. – Esmailian
1 Answer
This code is written for 2D inputs only; it cannot be used as-is for 8D inputs.
Here is an example on Stack Overflow for TensorFlow's SVM, tf.contrib.learn.SVM.
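A rough sketch of what using that estimator might look like (it is a linear SVM; this is written from the TF 1.x contrib API as I recall it, so treat the exact signature as an assumption and check the linked example):

# Sketch only: tf.contrib.learn.SVM is a *linear* SVM estimator (TF 1.x).
# It needs an 'example_id' string column and 0/1 labels; the signature is
# recalled from the contrib API, so verify it against the linked example.
import numpy as np
import tensorflow as tf

def input_fn():
    n = 300
    features = {
        'example_id': tf.constant([str(i) for i in range(n)]),
        'x': tf.constant(np.random.random((n, 8)), dtype=tf.float32),
    }
    labels = tf.constant(np.random.randint(0, 2, n))
    return features, labels

svm = tf.contrib.learn.SVM(
    example_id_column='example_id',
    feature_columns=[tf.contrib.layers.real_valued_column('x', dimension=8)],
    l2_regularization=0.1)
svm.fit(input_fn=input_fn, steps=30)
print(svm.evaluate(input_fn=input_fn, steps=1))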
Also, here is an easy-to-use SVM example in Python (without TensorFlow).
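For instance, a minimal scikit-learn sketch for the 8-feature case (an illustration, not part of the original code; the random x_vals and y_vals stand in for the question's features and fear/neutral labels):

# Minimal sketch: an RBF-kernel SVM on 8 features with scikit-learn.
# x_vals/y_vals are random stand-ins for the question's data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

x_vals = np.random.random((300, 8))       # (N, 8) feature matrix
y_vals = np.random.choice([1, -1], 300)   # +1 = 'fear', -1 = 'neutral'

x_train, x_test, y_train, y_test = train_test_split(
    x_vals, y_vals, test_size=0.2)

clf = SVC(kernel='rbf', gamma=10.0)       # works for any input dimension
clf.fit(x_train, y_train)
print('test accuracy:', clf.score(x_test, y_test))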
About the code
The 2D assumption is deeply integrated into the code through the prediction_grid variable and the plots.
An important section is when a grid needs to be created:
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
                     np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
which creates a $150^2 \times 2$ grid_points matrix. This grid is later used for the 2D plots. Since the size of grid_points is $150^d \times d$, it raises a MemoryError for 8D (even for 4D).
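A quick back-of-the-envelope check makes the blow-up concrete (assuming roughly 150 steps per axis, as in the 2D snippet above):

# Grid points needed at ~150 steps per axis, as a function of dimension d.
for d in (2, 4, 8):
    print('d = %d -> %.3g points' % (d, 150.0 ** d))
# d = 2 -> 2.25e+04
# d = 4 -> 5.06e+08
# d = 8 -> 2.56e+17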
Here is an altered version of the code that I used to experiment with higher dimensions. It avoids the MemoryError by changing the grid step from 0.02 to 1, thus decreasing $150^d$ to $3^d$ (increase grid_step for wider input ranges).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
dimension = 8
N = 300
grid_step = 1 # default value was 0.02
x_dummy = np.random.random((N, dimension))
y_dummy = np.random.choice(['fear', 'abc'], (N, 1))
matrix = np.hstack((x_dummy, y_dummy))
## SVM with TensorFlow
sess = tf.Session()
x_vals = np.array([x[0:dimension] for x in matrix])
y_vals = np.array([1 if y[dimension] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = N
# Initialize placeholders
x_data = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec = []
batch_accuracy = []
for i in range(300):
    rand_index = np.random.choice(len(x_vals), size=batch_size)
    rand_x = x_vals[rand_index]
    rand_y = np.transpose([y_vals[rand_index]])
    sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
    temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
    loss_vec.append(temp_loss)
    acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
                                             y_target: rand_y,
                                             prediction_grid: rand_x})
    batch_accuracy.append(acc_temp)
    if (i + 1) % 75 == 0:
        print('Step #' + str(i + 1))
        print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
# this code is used as a generalization to work with all dimensions
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(x_range[0], x_range[1], grid_step) for x_range in x_ranges]
print('grid size:', np.power(len(aranges[0]), dimension))
meshes = np.meshgrid(*aranges)
grid_points = np.vstack(tuple([mesh.ravel() for mesh in meshes])).T
print('grid size:', grid_points.shape)
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
                                                     y_target: np.transpose([y_vals]),
                                                     prediction_grid: grid_points})
# Plot points and grid
# this is the old mesh generation code that is kept since it is used in the plots
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx_arange = np.arange(x_min, x_max, grid_step)
yy_arange = np.arange(y_min, y_max, grid_step)
xx, yy = np.meshgrid(xx_arange, yy_arange)
size = np.power(len(xx), 2)
grid_predictions = grid_predictions[0:size].reshape(xx.shape)
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='Non setosa')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Petal Length')
plt.ylabel('Sepal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Output:
Step #75
Loss = -251.9497
Step #150
Loss = -476.96854
Step #225
Loss = -701.92444
Step #300
Loss = -927.2843
grid size: 6561
grid size: (6561, 8)
Thanks for the answer. So, if I have understood correctly, there is no way to perform SVM with TensorFlow on 8D data. Is there another way to do SVM with 8D data, as you say, without TensorFlow? I have to do it in Python for my teacher (he does it in MATLAB with 22D). – Manu

@Manu I'm happy to help. – Esmailian

@Manu you can use SVM for way higher dimensions, just not THIS code. I've added another non-TensorFlow resource, see if it helps. – Esmailian

With your code I obtained this error: ValueError: broadcast dimensions too large, in meshes = np.meshgrid(*aranges). – Manu

Still the same... – Manu
Your Answer
StackExchange.ifUsing("editor", function () {
return StackExchange.using("mathjaxEditing", function () {
StackExchange.MarkdownEditor.creationCallbacks.add(function (editor, postfix) {
StackExchange.mathjaxEditing.prepareWmdForMathJax(editor, postfix, [["$", "$"], ["\\(","\\)"]]);
});
});
}, "mathjax-editing");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "557"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: false,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: null,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Manu is a new contributor. Be nice, and check out our Code of Conduct.
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fdatascience.stackexchange.com%2fquestions%2f48624%2fsvm-with-tensorflow%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
1 Answer
1
active
oldest
votes
1 Answer
1
active
oldest
votes
active
oldest
votes
active
oldest
votes
$begingroup$
This code is written only for 2D inputs, it cannot be used for 8D inputs.
Here is an example on stackoverflow for tensorflow's SVM tf.contrib.learn.SVM
.
Also, here is an easy to use SVM example in python (without tensorflow).
About the code
The 2D assumption is deeply integrated into the code for prediction_grid
variable and the plots.
An important section is when a grid needs to be created:
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
which creates a $150^2 times 2$ grid_points
. This grid is later used for 2D plots. Since grid_points
size is $150^d times d$, it raises MemoryError
for 8D (even for 4D).
Here is an altered version of the code that I used to experiment with higher dimensions. It avoids Memory Error
by changing the grid step from 0.02 to 1, thus decreasing $150^d$ to $3^d$ (increase the grid_step
for wider ranges of inputs).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
dimension = 8
N = 300
grid_step = 1 # default value was 0.02
x_dummy = np.random.random((N, dimension))
y_dummy = np.random.choice(['fear', 'abc'], (N, 1))
matrix = np.hstack((x_dummy, y_dummy))
## SVM con Tensorflow
sess = tf.Session()
x_vals = np.array([x[0:dimension] for x in matrix])
y_vals = np.array([1 if y[dimension] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = N
# Initialize placeholders
x_data = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec =
batch_accuracy =
for i in range(300):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = x_vals[rand_index]
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss)
acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
y_target: rand_y,
prediction_grid: rand_x})
batch_accuracy.append(acc_temp)
if (i + 1) % 75 == 0:
print('Step #' + str(i + 1))
print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
# this code is used as a generalization to work with all dimensions
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(x_range[0], x_range[1], grid_step) for x_range in x_ranges]
print('grid size:', np.power(len(aranges[0]), dimension))
meshes = np.meshgrid(*aranges)
grid_points = np.vstack(tuple([mesh.ravel() for mesh in meshes])).T
print('grid size:', grid_points.shape)
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
y_target: np.transpose([y_vals]),
prediction_grid: grid_points})
# Plot points and grid
# this is the old mesh generation code that is kept since it is used in the plots
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx_arange = np.arange(x_min, x_max, grid_step)
yy_arange = np.arange(y_min, y_max, grid_step)
xx, yy = np.meshgrid(xx_arange,yy_arange)
size = np.power(len(xx), 2)
grid_predictions = grid_predictions[0:size].reshape(xx.shape)
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='Non setosa')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Petal Length')
plt.ylabel('Sepal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Output:
Step #75
Loss = -251.9497
Step #150
Loss = -476.96854
Step #225
Loss = -701.92444
Step #300
Loss = -927.2843
grid size: 6561
grid size: (6561, 8)
$endgroup$
$begingroup$
Thanks for the answer. Therefore, if I have understood correctly there is no way to perform SVM with Tensorflow with an 8D dimension. Is there another way to perform SVM with 8D as you say without being Tensorflow? I have to do it in python for my teacher (he does it in Matlab with 22D)
$endgroup$
– Manu
5 hours ago
$begingroup$
@Manu I’m happy to help.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
@Manu you can use SVM for way higher dimensions, just not THIS code. I've added another non-tensorflow resource, see if it helps.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
with you code I obtained this error: ValueError: broadcast dimensions too large. In meshes = np.meshgrid(*aranges)
$endgroup$
– Manu
5 hours ago
$begingroup$
Still the same...
$endgroup$
– Manu
5 hours ago
|
show 3 more comments
$begingroup$
This code is written only for 2D inputs, it cannot be used for 8D inputs.
Here is an example on stackoverflow for tensorflow's SVM tf.contrib.learn.SVM
.
Also, here is an easy to use SVM example in python (without tensorflow).
About the code
The 2D assumption is deeply integrated into the code for prediction_grid
variable and the plots.
An important section is when a grid needs to be created:
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
which creates a $150^2 times 2$ grid_points
. This grid is later used for 2D plots. Since grid_points
size is $150^d times d$, it raises MemoryError
for 8D (even for 4D).
Here is an altered version of the code that I used to experiment with higher dimensions. It avoids Memory Error
by changing the grid step from 0.02 to 1, thus decreasing $150^d$ to $3^d$ (increase the grid_step
for wider ranges of inputs).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
dimension = 8
N = 300
grid_step = 1 # default value was 0.02
x_dummy = np.random.random((N, dimension))
y_dummy = np.random.choice(['fear', 'abc'], (N, 1))
matrix = np.hstack((x_dummy, y_dummy))
## SVM con Tensorflow
sess = tf.Session()
x_vals = np.array([x[0:dimension] for x in matrix])
y_vals = np.array([1 if y[dimension] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = N
# Initialize placeholders
x_data = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec =
batch_accuracy =
for i in range(300):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = x_vals[rand_index]
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss)
acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
y_target: rand_y,
prediction_grid: rand_x})
batch_accuracy.append(acc_temp)
if (i + 1) % 75 == 0:
print('Step #' + str(i + 1))
print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
# this code is used as a generalization to work with all dimensions
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(x_range[0], x_range[1], grid_step) for x_range in x_ranges]
print('grid size:', np.power(len(aranges[0]), dimension))
meshes = np.meshgrid(*aranges)
grid_points = np.vstack(tuple([mesh.ravel() for mesh in meshes])).T
print('grid size:', grid_points.shape)
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
y_target: np.transpose([y_vals]),
prediction_grid: grid_points})
# Plot points and grid
# this is the old mesh generation code that is kept since it is used in the plots
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx_arange = np.arange(x_min, x_max, grid_step)
yy_arange = np.arange(y_min, y_max, grid_step)
xx, yy = np.meshgrid(xx_arange,yy_arange)
size = np.power(len(xx), 2)
grid_predictions = grid_predictions[0:size].reshape(xx.shape)
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='Non setosa')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Petal Length')
plt.ylabel('Sepal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Output:
Step #75
Loss = -251.9497
Step #150
Loss = -476.96854
Step #225
Loss = -701.92444
Step #300
Loss = -927.2843
grid size: 6561
grid size: (6561, 8)
$endgroup$
$begingroup$
Thanks for the answer. Therefore, if I have understood correctly there is no way to perform SVM with Tensorflow with an 8D dimension. Is there another way to perform SVM with 8D as you say without being Tensorflow? I have to do it in python for my teacher (he does it in Matlab with 22D)
$endgroup$
– Manu
5 hours ago
$begingroup$
@Manu I’m happy to help.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
@Manu you can use SVM for way higher dimensions, just not THIS code. I've added another non-tensorflow resource, see if it helps.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
with you code I obtained this error: ValueError: broadcast dimensions too large. In meshes = np.meshgrid(*aranges)
$endgroup$
– Manu
5 hours ago
$begingroup$
Still the same...
$endgroup$
– Manu
5 hours ago
|
show 3 more comments
$begingroup$
This code is written only for 2D inputs, it cannot be used for 8D inputs.
Here is an example on stackoverflow for tensorflow's SVM tf.contrib.learn.SVM
.
Also, here is an easy to use SVM example in python (without tensorflow).
About the code
The 2D assumption is deeply integrated into the code for prediction_grid
variable and the plots.
An important section is when a grid needs to be created:
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
which creates a $150^2 times 2$ grid_points
. This grid is later used for 2D plots. Since grid_points
size is $150^d times d$, it raises MemoryError
for 8D (even for 4D).
Here is an altered version of the code that I used to experiment with higher dimensions. It avoids Memory Error
by changing the grid step from 0.02 to 1, thus decreasing $150^d$ to $3^d$ (increase the grid_step
for wider ranges of inputs).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
dimension = 8
N = 300
grid_step = 1 # default value was 0.02
x_dummy = np.random.random((N, dimension))
y_dummy = np.random.choice(['fear', 'abc'], (N, 1))
matrix = np.hstack((x_dummy, y_dummy))
## SVM con Tensorflow
sess = tf.Session()
x_vals = np.array([x[0:dimension] for x in matrix])
y_vals = np.array([1 if y[dimension] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = N
# Initialize placeholders
x_data = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec =
batch_accuracy =
for i in range(300):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = x_vals[rand_index]
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss)
acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
y_target: rand_y,
prediction_grid: rand_x})
batch_accuracy.append(acc_temp)
if (i + 1) % 75 == 0:
print('Step #' + str(i + 1))
print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
# this code is used as a generalization to work with all dimensions
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(x_range[0], x_range[1], grid_step) for x_range in x_ranges]
print('grid size:', np.power(len(aranges[0]), dimension))
meshes = np.meshgrid(*aranges)
grid_points = np.vstack(tuple([mesh.ravel() for mesh in meshes])).T
print('grid size:', grid_points.shape)
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
y_target: np.transpose([y_vals]),
prediction_grid: grid_points})
# Plot points and grid
# this is the old mesh generation code that is kept since it is used in the plots
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx_arange = np.arange(x_min, x_max, grid_step)
yy_arange = np.arange(y_min, y_max, grid_step)
xx, yy = np.meshgrid(xx_arange,yy_arange)
size = np.power(len(xx), 2)
grid_predictions = grid_predictions[0:size].reshape(xx.shape)
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='I. setosa')
plt.plot(class2_x, class2_y, 'kx', label='Non setosa')
plt.title('Gaussian SVM Results on Iris Data')
plt.xlabel('Petal Length')
plt.ylabel('Sepal Width')
plt.legend(loc='lower right')
plt.ylim([-0.5, 3.0])
plt.xlim([3.5, 8.5])
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Output:
Step #75
Loss = -251.9497
Step #150
Loss = -476.96854
Step #225
Loss = -701.92444
Step #300
Loss = -927.2843
grid size: 6561
grid size: (6561, 8)
$endgroup$
This code is written only for 2D inputs, it cannot be used for 8D inputs.
Here is an example on stackoverflow for tensorflow's SVM tf.contrib.learn.SVM
.
Also, here is an easy to use SVM example in python (without tensorflow).
About the code
The 2D assumption is deeply integrated into the code for prediction_grid
variable and the plots.
An important section is when a grid needs to be created:
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),
np.arange(y_min, y_max, 0.02))
grid_points = np.c_[xx.ravel(), yy.ravel()]
which creates a $150^2 times 2$ grid_points
. This grid is later used for 2D plots. Since grid_points
size is $150^d times d$, it raises MemoryError
for 8D (even for 4D).
Here is an altered version of the code that I used to experiment with higher dimensions. It avoids Memory Error
by changing the grid step from 0.02 to 1, thus decreasing $150^d$ to $3^d$ (increase the grid_step
for wider ranges of inputs).
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
dimension = 8
N = 300
grid_step = 1 # default value was 0.02
x_dummy = np.random.random((N, dimension))
y_dummy = np.random.choice(['fear', 'abc'], (N, 1))
matrix = np.hstack((x_dummy, y_dummy))
## SVM con Tensorflow
sess = tf.Session()
x_vals = np.array([x[0:dimension] for x in matrix])
y_vals = np.array([1 if y[dimension] == 'fear' else -1 for y in matrix])
# Split the train data and testing data
train_indices = np.random.choice(len(x_vals), int(round(len(x_vals)*0.8)), replace=False)
test_indices = np.array(list(set(range(len(x_vals))) - set(train_indices)))
x_vals_train = x_vals[train_indices]
x_vals_test = x_vals[test_indices]
y_vals_train = y_vals[train_indices]
y_vals_test = y_vals[test_indices]
class1_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class1_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == 1]
class2_x = [x[0] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
class2_y = [x[1] for i, x in enumerate(x_vals_train) if y_vals_train[i] == -1]
# Declare batch size
batch_size = N
# Initialize placeholders
x_data = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
y_target = tf.placeholder(shape=[None, 1], dtype=tf.float32)
prediction_grid = tf.placeholder(shape=[None, dimension], dtype=tf.float32)
# Create variables for svm
b = tf.Variable(tf.random_normal(shape=[1, batch_size]))
# Gaussian (RBF) kernel
gamma = tf.constant(-10.0)
sq_dists = tf.multiply(2., tf.matmul(x_data, tf.transpose(x_data)))
my_kernel = tf.exp(tf.multiply(gamma, tf.abs(sq_dists)))
# Compute SVM Model
first_term = tf.reduce_sum(b)
b_vec_cross = tf.matmul(tf.transpose(b), b)
y_target_cross = tf.matmul(y_target, tf.transpose(y_target))
second_term = tf.reduce_sum(tf.multiply(my_kernel, tf.multiply(b_vec_cross, y_target_cross)))
loss = tf.negative(tf.subtract(first_term, second_term))
# Gaussian (RBF) prediction kernel
rA = tf.reshape(tf.reduce_sum(tf.square(x_data), 1), [-1, 1])
rB = tf.reshape(tf.reduce_sum(tf.square(prediction_grid), 1), [-1, 1])
pred_sq_dist = tf.add(tf.subtract(rA, tf.multiply(2., tf.matmul(x_data, tf.transpose(prediction_grid)))), tf.transpose(rB))
pred_kernel = tf.exp(tf.multiply(gamma, tf.abs(pred_sq_dist)))
prediction_output = tf.matmul(tf.multiply(tf.transpose(y_target), b), pred_kernel)
prediction = tf.sign(prediction_output - tf.reduce_mean(prediction_output))
accuracy = tf.reduce_mean(tf.cast(tf.equal(tf.squeeze(prediction), tf.squeeze(y_target)), tf.float32))
# Declare optimizer
my_opt = tf.train.GradientDescentOptimizer(0.01)
train_step = my_opt.minimize(loss)
# Initialize variables
init = tf.global_variables_initializer()
sess.run(init)
# Training loop
loss_vec =
batch_accuracy =
for i in range(300):
rand_index = np.random.choice(len(x_vals), size=batch_size)
rand_x = x_vals[rand_index]
rand_y = np.transpose([y_vals[rand_index]])
sess.run(train_step, feed_dict={x_data: rand_x, y_target: rand_y})
temp_loss = sess.run(loss, feed_dict={x_data: rand_x, y_target: rand_y})
loss_vec.append(temp_loss)
acc_temp = sess.run(accuracy, feed_dict={x_data: rand_x,
y_target: rand_y,
prediction_grid: rand_x})
batch_accuracy.append(acc_temp)
if (i + 1) % 75 == 0:
print('Step #' + str(i + 1))
print('Loss = ' + str(temp_loss))
# Create a mesh to plot points in
x_vals = x_vals.astype(np.float)
# this code is used as a generalization to work with all dimensions
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(x_range[0], x_range[1], grid_step) for x_range in x_ranges]
print('grid size:', np.power(len(aranges[0]), dimension))
meshes = np.meshgrid(*aranges)
grid_points = np.vstack(tuple([mesh.ravel() for mesh in meshes])).T
print('grid size:', grid_points.shape)
[grid_predictions] = sess.run(prediction, feed_dict={x_data: x_vals,
                                                     y_target: np.transpose([y_vals]),
                                                     prediction_grid: grid_points})
# Plot points and grid
# this is the old mesh generation code that is kept since it is used in the plots
x_min, x_max = x_vals[:, 0].min() - 1, x_vals[:, 0].max() + 1
y_min, y_max = x_vals[:, 1].min() - 1, x_vals[:, 1].max() + 1
xx_arange = np.arange(x_min, x_max, grid_step)
yy_arange = np.arange(y_min, y_max, grid_step)
xx, yy = np.meshgrid(xx_arange,yy_arange)
size = np.power(len(xx), 2)
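# keep only the first len(xx)**2 predictions, purely for the 2-D contour over the first two features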
grid_predictions = grid_predictions[0:size].reshape(xx.shape)
plt.contourf(xx, yy, grid_predictions, cmap=plt.cm.Paired, alpha=0.8)
plt.plot(class1_x, class1_y, 'ro', label='fear')
plt.plot(class2_x, class2_y, 'kx', label='not fear')
plt.title('Gaussian SVM Results')
plt.xlabel('Feature 1')
plt.ylabel('Feature 2')
plt.legend(loc='lower right')
# the hard-coded Iris axis limits (xlim/ylim) are dropped; they would clip this data
plt.show()
# Plot batch accuracy
plt.plot(batch_accuracy, 'k-', label='Accuracy')
plt.title('Batch Accuracy')
plt.xlabel('Generation')
plt.ylabel('Accuracy')
plt.legend(loc='lower right')
plt.show()
# Plot loss over time
plt.plot(loss_vec, 'k-')
plt.title('Loss per Generation')
plt.xlabel('Generation')
plt.ylabel('Loss')
plt.show()
Output:
Step #75
Loss = -251.9497
Step #150
Loss = -476.96854
Step #225
Loss = -701.92444
Step #300
Loss = -927.2843
grid size: 6561
grid size: (6561, 8)
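As discussed in the comments below, SVMs themselves are not limited to low-dimensional inputs; only the plotting mesh is. For reference, a minimal non-TensorFlow sketch using scikit-learn's SVC, which accepts 8 (or 22) input dimensions directly; the RBF kernel choice and the random stand-in data are assumptions, not tuned values:
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
dimension = 8
X = np.random.random((300, dimension))  # stand-in features
y = np.random.choice([1, -1], 300)      # stand-in labels (fear vs. neutral)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
clf = SVC(kernel='rbf', gamma='scale')  # handles any input dimension
clf.fit(X_train, y_train)
print('test accuracy:', clf.score(X_test, y_test))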
answered 6 hours ago, edited 3 hours ago – Esmailian
$begingroup$
Thanks for the answer. So, if I have understood correctly, there is no way to perform SVM with TensorFlow on 8-dimensional data? Is there another way to perform SVM in 8 dimensions, as you say, without TensorFlow? I have to do it in Python for my teacher (he does it in Matlab with 22 dimensions).
$endgroup$
– Manu
5 hours ago
$begingroup$
@Manu I’m happy to help.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
@Manu you can use SVM for way higher dimensions, just not THIS code. I've added another non-tensorflow resource, see if it helps.
$endgroup$
– Esmailian
5 hours ago
$begingroup$
With your code I obtained this error: ValueError: broadcast dimensions too large, in meshes = np.meshgrid(*aranges).
$endgroup$
– Manu
5 hours ago
$begingroup$
Still the same...
$endgroup$
– Manu
5 hours ago
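Regarding the ValueError above: np.meshgrid allocates one d-dimensional array per input axis, so the memory cost grows as (points per axis)^d and can blow up for wider ranges. A minimal sketch of a size pre-check before building the mesh (the 10-million cap and the random stand-in data are assumptions for illustration):
import numpy as np
dimension = 8
grid_step = 1
x_vals = np.random.random((300, dimension))  # stand-in for the real features
x_ranges = np.vstack((x_vals.min(axis=0) - 1, x_vals.max(axis=0) + 1)).T
aranges = [np.arange(lo, hi, grid_step) for lo, hi in x_ranges]
# each meshgrid output holds prod(len(a) for a in aranges) entries
n_points = np.prod([len(a) for a in aranges], dtype=np.int64)
print('projected grid size:', n_points)
if n_points > 10_000_000:  # arbitrary cap, assumed only for illustration
    raise MemoryError('mesh of %d points is too large; increase grid_step' % n_points)
meshes = np.meshgrid(*aranges)
Increasing grid_step (or narrowing the feature ranges) reduces the points per axis and shrinks the mesh exponentially.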
$begingroup$
Please provide a link to the code for later reference.
$endgroup$
– Esmailian
6 hours ago