Generating a set of different scenarios based on some initial observations












I have in my hands 3 different time series that model 3 different scenarios (base, downside, upside). Each of these time series depends on a set of 11 different attributes, which take values over different time intervals. Most of the input features are highly correlated. There is also a cumulative distribution function (CDF) that defines how probable each scenario (each quintile) is at every point in time.

I want to create more input data based on the current observations, i.e. generate additional time series / simulations. Take the base scenario as an example. My first idea was to compute the covariance matrix and the mean of the data points over time and then simply draw samples from a multivariate normal distribution. But this is obviously not correct: even if the correlations between the different attributes are preserved, some properties of the time series are not. If I just draw random points and assign them to the different time intervals, some of the attributes become very wiggly. For example, if an attribute behaves roughly like GDP, it does not make sense for it to fluctuate over short periods of time.
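For concreteness, a rough sketch of that first idea (the data below are random placeholders standing in for my actual attributes):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative base-scenario series: rows = time points, columns = the 11 attributes.
base_scenario = rng.normal(size=(40, 11))

# Estimate mean and covariance across time, treating every time point
# as an independent observation of the 11 attributes.
mu = base_scenario.mean(axis=0)
cov = np.cov(base_scenario, rowvar=False)

# Draw a new "simulated" series of the same length, sampling each time
# point independently from the fitted multivariate normal.
simulated = rng.multivariate_normal(mu, cov, size=base_scenario.shape[0])

# The cross-sectional correlations are roughly preserved, but the draws are
# independent across time, so smooth, GDP-like attributes come out wiggly.
```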



At this point I have thought of a couple of ways to deal with this problem, but I have not come up with a complete analytical solution. Someone else had a look at it before me. He basically defined a normal distribution for each point in time, using the values of the attributes in the 3 scenarios and the CDF. For example, if the value of an attribute is [1.25, 1.5, 2] across the three scenarios at some values of the CDF, these points can be used to fit a distribution and then sample new points from it. However, with this approach all the new time series have the same shape as the base one (the same fluctuations) and they are concentrated close to that series. A sketch of that approach follows below.
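A rough sketch of that per-time-point approach (the scenario values are the ones from my example; the CDF levels, and the way the normal is fitted to them, are just illustrative assumptions):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# One attribute at one time point: its value under the downside, base and
# upside scenarios, and the (assumed) CDF levels those scenarios sit at.
scenario_values = np.array([1.25, 1.5, 2.0])
cdf_levels = np.array([0.1, 0.5, 0.9])   # illustrative, not my real numbers

# Fit a normal whose quantiles at those CDF levels match the scenario values
# (least squares over mu and sigma against the standard-normal quantiles).
z = norm.ppf(cdf_levels)
design = np.column_stack([np.ones_like(z), z])
(mu, sigma), *_ = np.linalg.lstsq(design, scenario_values, rcond=None)

# Sample new values of this attribute at this time point.
samples = mu + sigma * rng.standard_normal(1000)

# Repeating this independently at every time point reproduces the shape of
# the base series, which is why the simulated paths cluster around it.
```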



At this point I am mostly interested in a simple approach to this problem that gives reasonable results. It would be great if you could give some advice or point me to any kind of reference.










Tags: python time-series data-science-model sampling distribution






asked Oct 12 '18 at 13:54 by Dimits

          1 Answer
You could do block bootstrapping. For example, suppose GDP was 100, 150, 300, 100, 130. In percentage changes that is +50%, +100%, -67%, +30%. Choose blocks of two: you then have two blocks, (+50%, +100%) and (-67%, +30%). To build a scenario you draw random blocks and chain them together. Possible variants:




          1. +50%, +100%, -67%, +30%

          2. +50%, +100%, +50%, +100%

          3. -67%, +30%, -67%, +30%

          4. -67%, +30%, +50%, +100%


The blocks preserve the autocorrelation in the data, and if each block contains all variables (i.e. you resample whole time slices across all attributes at once), the cross-correlations between the attributes are preserved as well.
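A minimal sketch of this in Python (the data, block length and array shapes are illustrative; blocks are drawn over whole rows so all 11 attributes move together):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative multivariate series: rows = time points, columns = attributes
# (positive levels standing in for the 11 real inputs).
levels = np.exp(np.cumsum(rng.normal(scale=0.05, size=(61, 11)), axis=0))

# Period-over-period percentage changes, as in the GDP example above.
returns = levels[1:] / levels[:-1] - 1.0

block_len = 4                                   # assumed block length
n_blocks = len(returns) // block_len
blocks = returns[: n_blocks * block_len].reshape(n_blocks, block_len, -1)

def bootstrap_path(start, n_out_blocks):
    """Chain randomly chosen blocks of returns and rebuild a level path."""
    idx = rng.integers(0, len(blocks), size=n_out_blocks)
    sampled = blocks[idx].reshape(-1, blocks.shape[-1])   # (time, attributes)
    return start * np.cumprod(1.0 + sampled, axis=0)

# One simulated scenario, starting from the last observed levels.
simulated = bootstrap_path(levels[-1], n_blocks)
```

Because each sampled block is a contiguous slice of all attributes, both the short-run dynamics and the cross-attribute correlations of the original series carry over into the simulated paths.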



There is an explanation of how to do block bootstrapping with time series on the Cross Validated Stack Exchange site:
https://stats.stackexchange.com/questions/25706/how-do-you-do-bootstrapping-with-time-series-data






answered Oct 12 '18 at 20:29 by keiv.fly

– Dimits, Oct 13 '18 at 15:31:
Thanks a lot for taking the time to answer my question. It looks promising and I will definitely have a look. My main worry is that there are many input attributes (11) and I am not sure how to generalise the approach across all of them, but I will definitely look into it.











