Splitting and training multiple datasets at the same time
I've got 15 different datasets of about 10 GB each. Each dataset comes with a binary 2D ground truth of shape (~10,486,147, 1) that I pull from it. I'm trying to figure out how to load each dataset, split each one with scikit-learn's train_test_split, and then iterate over all 15 datasets per epoch. Under normal circumstances the datasets would be shuffled as well, but I can't figure out how to do even that, since the data is too large to load all at once (so shuffling is on the back burner for now).
Here's what my code looks like for one dataset.
import numpy as np
import pandas as pd
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.preprocessing import sequence
from sklearn.preprocessing import LabelEncoder
from sklearn.model_selection import train_test_split
arr = np.load('source/dir/dataset1.npy', allow_pickle=True, fix_imports=True)
arr[arr == -np.inf] = -9999
reshaped = arr.reshape(arr.shape[0] * arr.shape[1], arr.shape[2])
drop = reshaped[~np.all(reshaped == -9999, axis=1)]
#additional work done with -9999 here
truth = drop[:, 46]
data = drop[:, 0:45]
#callbacks deleted in code sample
encoder = LabelEncoder()
encoder.fit(truth)
Y = encoder.transform(truth)
Y = Y.reshape(-1, 1)
X = data.reshape(-1, 45, 3)
seed = 7
X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)
model = Sequential()
model.add(LSTM(units=32, activation='relu', input_shape=(45, 3), return_sequences=True))
model.add(LSTM(units=32, activation='relu', return_sequences=True))
model.add(LSTM(units=32, activation='relu'))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=500, batch_size=1000)  # callbacks omitted from this sample
That works for one dataset, but as I said, I have 15 datasets to iterate through, and I don't think simply retraining from scratch on each new dataset is the right step.
Is there a way to iterate through dataset1.npy to dataset15.npy while properly splitting the ground truth for each as well?
Any suggestions?
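For reference, here is a rough, untested sketch of the kind of loop I have in mind: a keras.utils.Sequence that keeps only one file in memory at a time, does the per-file train/test split, and lets a single epoch run over all 15 files. The 135-column slice is a placeholder (a (45, 3) sample needs 45*3 = 135 feature columns, so my real column indices may differ), and I haven't verified any of this:
import numpy as np
from keras.utils import Sequence
from sklearn.model_selection import train_test_split

class NpyFileSequence(Sequence):
    # Serves batches from many .npy files, holding one file in memory at a time.
    # part='train' or part='test' selects which side of the per-file split to serve.
    def __init__(self, paths, part='train', batch_size=1000, seed=7):
        self.paths, self.part = paths, part
        self.batch_size, self.seed = batch_size, seed
        X, Y = self._load(paths[0])
        # assumes every file yields the same number of usable rows after cleaning
        self.batches_per_file = int(np.ceil(len(X) / float(batch_size)))
        self._cache = (paths[0], X, Y)

    def _load(self, path):
        arr = np.load(path, allow_pickle=True, fix_imports=True)
        arr[arr == -np.inf] = -9999
        flat = arr.reshape(arr.shape[0] * arr.shape[1], arr.shape[2])
        keep = flat[~np.all(flat == -9999, axis=1)]
        X = keep[:, 0:135].reshape(-1, 45, 3)  # placeholder column layout
        Y = keep[:, 135].reshape(-1, 1)
        X_tr, X_te, y_tr, y_te = train_test_split(X, Y, test_size=0.33, random_state=self.seed)
        return (X_tr, y_tr) if self.part == 'train' else (X_te, y_te)

    def __len__(self):
        return len(self.paths) * self.batches_per_file

    def __getitem__(self, idx):
        path = self.paths[idx // self.batches_per_file]
        if self._cache[0] != path:  # reload only when crossing a file boundary
            X, Y = self._load(path)
            self._cache = (path, X, Y)
        _, X, Y = self._cache
        lo = (idx % self.batches_per_file) * self.batch_size
        return X[lo:lo + self.batch_size], Y[lo:lo + self.batch_size]

paths = ['source/dir/dataset%d.npy' % i for i in range(1, 16)]
model.fit_generator(NpyFileSequence(paths, 'train'),
                    validation_data=NpyFileSequence(paths, 'test'),
                    epochs=500, shuffle=False)  # shuffle=False keeps batches in file order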
python keras scikit-learn tensorflow stacked-lstm
This is an interesting problem. One possibility is to train models in parts and then combine them afterwards; some models allow this kind of separation.
– Juan Esteban de la Calle
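One reading of this suggestion, as a hedged sketch: train one model per dataset file and average the models' sigmoid outputs at prediction time. Here, load_file and build_model are hypothetical helpers standing in for the per-file preprocessing and the LSTM stack from the question:
import numpy as np

paths = ['source/dir/dataset%d.npy' % i for i in range(1, 16)]
models = []
for path in paths:
    X_train, y_train = load_file(path)  # hypothetical per-file loader/splitter
    m = build_model()                   # hypothetical factory for the LSTM stack above
    m.fit(X_train, y_train, epochs=50, batch_size=1000)
    models.append(m)

def ensemble_predict(X):
    # "summarize" the parts by averaging the 15 per-file probability outputs
    return np.mean([m.predict(X) for m in models], axis=0)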
I didn't understand: will you use them all at the same time, or does each dataset have a different purpose? If your datasets have the same shapes, you can gather them into one dataset.
– Kikio
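If the files really do share a shape, one way to gather them into a single dataset without exceeding RAM is a memory-mapped combined .npy; the row count, the 47-column layout, and the dtype below are assumptions standing in for the real files:
import numpy as np

rows_per_file, n_files, n_cols = 10486147, 15, 47  # assumed layout
out = np.lib.format.open_memmap('source/dir/combined.npy', mode='w+',
                                dtype=np.float32,
                                shape=(rows_per_file * n_files, n_cols))
for i in range(n_files):
    # load one 10 GB file at a time and copy its rows into the big on-disk array
    chunk = np.load('source/dir/dataset%d.npy' % (i + 1)).reshape(-1, n_cols)
    out[i * rows_per_file:(i + 1) * rows_per_file] = chunk[:rows_per_file]
out.flush()

# a global shuffle then becomes a shuffle of row indices into the memmap
idx = np.random.permutation(rows_per_file * n_files)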