Training an LSTM on a time series containing multiple inputs per timestep
I am trying to train an LSTM for forecasting: the problem is basically a multivariate, multi-step time series problem.
It is simply an experiment to see how statistical models (ARIMA, Holt-Winters, ...) and neural networks compare on a given problem.
While my dataset is a good fit for a statistical model, I am having trouble formatting it to train the LSTM, because I have multiple entries per timestep (corresponding to different entities) and I don't really know how to deal with that, since the sequence is no longer indexed solely by the time of observation. Say my dataset looks like the following example:
time | ent | obs
-----|-----|----
  1  |  1  |  5
  2  |  1  |  6
  2  |  5  |  1
  3  |  2  |  7
  3  |  5  |  4
As you can see, not every entity has an entry at every timestep, and a single timestep can have multiple entries.
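For illustration, here is a minimal sketch of the same example pivoted into a time × entity matrix (I use pandas here only to make the shape of the data concrete; my actual pipeline may differ). The NaNs show exactly where the gaps are:

```python
import pandas as pd

# The example data in long format: one row per (time, entity) observation.
df = pd.DataFrame({
    "time": [1, 2, 2, 3, 3],
    "ent":  [1, 1, 5, 2, 5],
    "obs":  [5, 6, 1, 7, 4],
})

# Pivot to a wide (time x entity) matrix; (time, entity) combinations that
# were never observed become NaN, which makes the irregular coverage explicit.
wide = df.pivot(index="time", columns="ent", values="obs")
print(wide)
# ent     1    2    5
# time
# 1     5.0  NaN  NaN
# 2     6.0  NaN  1.0
# 3     NaN  7.0  4.0
```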
I thought of training one LSTM per entity, but I would have too little data for most of them. Some threads gave me the idea of separating the entities into batches, but the number of observations per entity is not constant, so that wouldn't work for me.
How do you think I should tackle this problem?
time-series lstm preprocessing forecasting
asked yesterday by naifmeh (11, new contributor)
1 Answer
The answer to this question depends largely on which relationship between the variables you are interested in.
If you are interested in the relationship between time and observation value, treating the entities as different batches could make sense, under the assumption that the role of the individual entities doesn't really matter to you. In that case you would, for example, fill the missing timesteps of each entity with that entity's mean (or the overall mean) so that every entity has the same number of observations. But you could also simply average all values at each timestep and include additional features such as the min and max; this would most probably deliver better results.
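As a minimal sketch of that second variant (assuming pandas and the column names from your example):

```python
import pandas as pd

# The question's example in long format.
df = pd.DataFrame({
    "time": [1, 2, 2, 3, 3],
    "ent":  [1, 1, 5, 2, 5],
    "obs":  [5, 6, 1, 7, 4],
})

# Collapse all entities per timestep into aggregate features; the result is
# one regular row per timestep with several input features.
agg = df.groupby("time")["obs"].agg(["mean", "min", "max", "count"])
print(agg)
#       mean  min  max  count
# time
# 1      5.0    5    5      1
# 2      3.5    1    6      2
# 3      5.5    4    7      2
```

The resulting frame has one row per timestep with several features, which can then be windowed into the (samples, timesteps, features) shape an LSTM expects.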
If you are interested in the relationship between entities and observation value, this is a matter of missing data in time series. There are a lot of techniques that can help with that, from simply imputing the mean to more sophisticated methods such as a Kalman filter. In the end, however, you will have to ask yourself why these observations are missing and choose the appropriate method. But since you are comparing against time-dependent models in your experiment, I assume this is not of interest to you.
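A minimal sketch of the simple end of that spectrum, again assuming pandas (a Kalman filter would require a proper state-space model, which I leave out here):

```python
import pandas as pd

# The question's example in long format.
df = pd.DataFrame({
    "time": [1, 2, 2, 3, 3],
    "ent":  [1, 1, 5, 2, 5],
    "obs":  [5, 6, 1, 7, 4],
})

# One column per entity; unobserved (time, entity) pairs become NaN.
wide = df.pivot(index="time", columns="ent", values="obs")

# Two simple imputation baselines. Which one is appropriate depends on
# why the values are missing in the first place.
mean_imputed = wide.fillna(wide.mean())  # fill each entity with its own mean
interpolated = wide.interpolate()        # linear interpolation along time;
                                         # leading gaps (entities that start
                                         # late) remain NaN
```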
If you are interested in the interrelationship of all three variables, you are dealing with panel data. In that case I don't see a reasonable way to model this with an LSTM. Maybe another RNN architecture could work; the only paper I found is Tensorial Recurrent Neural Networks for Longitudinal Data Analysis by Mingyuan et al. But in the end it would not matter, since an ARIMA model isn't appropriate for panel data either; usually a difference-in-differences approach is used for that kind of data. In this case I would suggest changing the dataset for your experiment.
answered yesterday (edited yesterday) by georg_un (836)
Thanks for your answer! The observations in my data are actually sales, and I also have some exogenous data alongside, so I'm not sure I could simply impute the missing data for a batch, since some of the entities (stores) may have opened at a later date and have little to no history available. So my interest lies in the interrelationship of all the variables I have. The paper you linked to seems interesting: its implementation doesn't look easy, but it's worth a try :)
– naifmeh, 19 hours ago