Alternative to chi2 test in model comparison
I have three curves: (1) an observation yobs, (2) theory-1 yth1, and (3) theory-2 yth2. All of them are functions of a single variable, say x. Computationally, each curve can be thought of as an array of discrete values. Alongside these curves, I also have 100 simulations of the observation, which I use to get error bars around yobs.
Below is a schematic diagram of yobs, yth1, and yth2. The orange shaded region around yobs shows the error bars obtained from the 100 simulations.
I want a quantitative comparison of the two theory curves (yth1 and yth2) with the observed curve yobs within the fitting region [x1, x2]. The main aim is to get a quantitative idea of which theory (yth1 or yth2) matches yobs better.
One way of doing that is through $\chi^2$ analysis, as in the two formulae below, where $C$ denotes the covariance matrix obtained from the 100 simulations and the sums run over the bins $x_i, x_j$ inside the fitting region $[x_1, x_2]$:
$$\chi^2_{1} = \sum_{i,j} \left[y_{\text{obs}}(x_i) - y_{\text{th1}}(x_i)\right] \left(C^{-1}\right)_{ij} \left[y_{\text{obs}}(x_j) - y_{\text{th1}}(x_j)\right],$$
$$\chi^2_{2} = \sum_{i,j} \left[y_{\text{obs}}(x_i) - y_{\text{th2}}(x_i)\right] \left(C^{-1}\right)_{ij} \left[y_{\text{obs}}(x_j) - y_{\text{th2}}(x_j)\right].$$
However, for a variety of reasons, the $\chi^2$ values I get for both comparisons are very big (~100). Because of this, I want to find methods other than $\chi^2$ analysis to determine which theory (yth1 or yth2) matches the observation (yobs) better.
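In code, what I am computing looks roughly like the following NumPy sketch, assuming the curves are 1-D arrays on a common x grid and the simulations are stacked in a 2-D array `sims` of shape `(100, N)`; all array names here are illustrative:

```python
import numpy as np

def chi2(y_obs, y_th, sims, mask):
    """Chi-squared of y_th against y_obs inside the fitting region,
    using a covariance matrix estimated from the simulations.

    y_obs, y_th : 1-D arrays of length N (curves sampled on the x grid)
    sims        : 2-D array of shape (n_sims, N), one simulated y_obs per row
    mask        : boolean array of length N, True inside [x1, x2]
    """
    d = (y_obs - y_th)[mask]                    # residual vector in the fitting region
    cov = np.cov(sims[:, mask], rowvar=False)   # covariance estimated from the simulations
    # d^T C^{-1} d, using solve() instead of an explicit matrix inverse
    return float(d @ np.linalg.solve(cov, d))

# Illustrative usage (x, x1, x2, y_obs, y_th1, y_th2, sims assumed defined):
# mask = (x >= x1) & (x <= x2)
# print(chi2(y_obs, y_th1, sims, mask), chi2(y_obs, y_th2, sims, mask))
```

Note that the estimated covariance is only invertible (and the $\chi^2$ only meaningful) when the number of simulations exceeds the number of bins in the fitting region, and with only 100 simulations the estimate of $C$ is itself noisy.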
One alternative would be to use fractional errors, in a manner such as the two equations below, where $N$ is the number of bins in the fitting region:
$$\epsilon_{1} = \frac{1}{N} \sum_{i} \frac{\left|y_{\text{obs}}(x_i) - y_{\text{th1}}(x_i)\right|}{\left|y_{\text{obs}}(x_i)\right|}, \qquad \epsilon_{2} = \frac{1}{N} \sum_{i} \frac{\left|y_{\text{obs}}(x_i) - y_{\text{th2}}(x_i)\right|}{\left|y_{\text{obs}}(x_i)\right|}.$$
But this method does not use the errors from the simulations, so I am not sure how much I can trust fractional errors to determine which theory matches the observation better.
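Likewise, the fractional-error comparison in code, under the same array assumptions as above:

```python
def fractional_error(y_obs, y_th, mask):
    """Mean absolute fractional deviation of y_th from y_obs inside the
    fitting region. Note: does not use the simulations at all."""
    return float(np.mean(np.abs(y_obs[mask] - y_th[mask]) / np.abs(y_obs[mask])))

# Illustrative usage:
# print(fractional_error(y_obs, y_th1, mask), fractional_error(y_obs, y_th2, mask))
```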
Given the nature of my problem, what is the best statistical method to determine which theory (yth1 or yth2) matches the observation yobs better?
model-selection
asked 4 mins ago by Siddharth Satpathy
0 Answers