Variance of average of $n$ correlated random variables














Reading about deep learning, I came across the following formula.



$$ \operatorname{var}\left( \frac{1}{n} \sum_{i=1}^{n} X_i \right) = \rho \sigma^2 + \frac{1-\rho}{n} \sigma^2 $$



where $X_1, \dots, X_n$ are identically distributed random variables with
pairwise correlation $\rho > 0$ and variance $\operatorname{var}(X_i) = \sigma^2$.




  1. How is this formula derived?

  2. According to this formula, how does bootstrap aggregating alleviate the effect of overfitting? What is the relationship?










machine-learning deep-learning bootstrap regularization bagging

asked 11 hours ago by OmegaD, edited 8 hours ago by Rodrigo de Azevedo



















          1 Answer

By the definition of variance and the bilinearity of covariance, we have



$$\operatorname{var}\left(\sum_{i=1}^n X_i\right)=\operatorname{cov}\left(\sum_{i=1}^n X_i,\sum_{i=1}^n X_i\right)=\sum_{i=1}^n \operatorname{var}(X_i)+\sum_{i\neq j}\operatorname{cov}(X_i,X_j)$$



which is $n \operatorname{var}(X_i)+n(n-1)\operatorname{cov}(X_i,X_j)=n\sigma^2+n(n-1)\rho\sigma^2$, where $i \neq j$. Substituting this into the original equation yields the following:



$$\operatorname{var}\left(\frac{1}{n}\sum_{i=1}^n X_i\right)=\frac{1}{n^2}\left(n\sigma^2+n(n-1)\rho\sigma^2\right)=\rho\sigma^2+\frac{1-\rho}{n}\sigma^2$$
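
As a quick numerical sanity check (my addition, not part of the original answer), the sketch below draws equicorrelated Gaussian vectors for assumed values of $n$, $\rho$, and $\sigma$, and compares the empirical variance of the average against the closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, sigma = 5, 0.3, 2.0

# Equicorrelated covariance matrix: sigma^2 on the diagonal,
# rho * sigma^2 everywhere off the diagonal.
cov = sigma**2 * ((1 - rho) * np.eye(n) + rho * np.ones((n, n)))

# Draw many (X_1, ..., X_n) vectors and average each one.
X = rng.multivariate_normal(np.zeros(n), cov, size=200_000)
empirical = X.mean(axis=1).var()

theoretical = rho * sigma**2 + (1 - rho) / n * sigma**2
print(empirical, theoretical)  # both approximately 1.76
```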



Each $X_i$ can be thought of as the output of a single decision mechanism, call it a DM (e.g. a regressor). The variance of a single decision is $\sigma^2$. By training DMs on bootstrap samples and aggregating their outputs, you end up with the decision variance above, which is strictly smaller than $\sigma^2$ whenever $\rho \neq 1$ and $n \neq 1$. The DMs will of course be correlated to some degree, since they are trained on bootstrap samples drawn from the same base dataset, but their correlation will almost certainly be less than $1$. Overfitted mechanisms generally have large variance, so by decreasing the variance of the aggregated DM you implicitly address the problem of overfitting.
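
To make the bagging connection concrete, here is a minimal illustration (my addition, using scikit-learn; the fully grown decision tree stands in as a hypothetical high-variance DM). Refitting both models on many independent training sets shows that the bagged ensemble's predictions vary far less across refits than a single tree's, which is the variance reduction the formula describes:

```python
import numpy as np
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)

def make_data(n_samples=200):
    """Noisy sine curve; fully grown trees overfit the noise."""
    x = rng.uniform(-3, 3, size=(n_samples, 1))
    y = np.sin(x).ravel() + rng.normal(scale=0.5, size=n_samples)
    return x, y

x_test = np.linspace(-3, 3, 50).reshape(-1, 1)
single_preds, bagged_preds = [], []

# Refit both models on many independent training sets and record
# their predictions, so we can measure variance across refits.
for _ in range(100):
    x, y = make_data()
    tree = DecisionTreeRegressor().fit(x, y)
    bag = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50).fit(x, y)
    single_preds.append(tree.predict(x_test))
    bagged_preds.append(bag.predict(x_test))

# Variance of the predictions across refits, averaged over test points:
print(np.var(single_preds, axis=0).mean())  # single tree: large
print(np.var(bagged_preds, axis=0).mean())  # bagged: much smaller
```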






answered 10 hours ago by gunes, edited 7 hours ago by StubbornAtom













          • Fantastic! Thank you so much for your answer. Quick question: where do the $n$ and $n-1$ in the term $n \operatorname{var}(X_i) + n(n-1) \operatorname{cov}(X_i,X_j)$ come from? Sorry if it is too obvious a question.
            – OmegaD, 10 hours ago










          • @OmegaD There are $n^2$ pairs $(i,j)$; $n$ of them have $i=j$, and $n^2-n=n(n-1)$ of them have $i \neq j$.
            – gunes, 10 hours ago












          • Fantastic! Thank you so much!
            – OmegaD, 10 hours ago










