Adding a custom constraint to a weighted least squares regression model
I am trying to run a weighted least squares model that looks something like this (but could be different):
$y = \beta_0 + \beta_1 x + \beta_2 \log(x) + \epsilon$
with weights $w_1, w_2, \ldots$
However, I know from external knowledge that, whatever the model, the outcome must asymptotically converge to a constant for large values of $x$. How can I get an OLS estimate with this constraint?
As an example, if I knew the asymptote $c$, I could add two fake data points to my model, with very high values of $x$, very high weights $w$, and $y = c$, run the normal WLS model, and it would give me what I need - except I don't know the value of $c$. Is there a way to impose this constraint - maybe through adding a custom error term to the model?
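To make the workaround above concrete, here is a minimal sketch in Python, assuming for a moment that the asymptote $c$ were known; it uses statsmodels' WLS, and the arrays x, y, w and the value of c are hypothetical placeholders rather than real data:
import numpy as np
import statsmodels.api as sm

# Hypothetical observations and weights (placeholders only).
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = np.array([0.5, 1.4, 2.1, 2.6, 2.8, 2.9])
w = np.ones_like(y)

c = 3.0                              # the asymptote, assumed known here (it is the unknown in the question)
x_aug = np.append(x, [1e6, 1e7])     # two fake points at very large x
y_aug = np.append(y, [c, c])         # pinned to the asymptote
w_aug = np.append(w, [1e6, 1e6])     # given very high weights

# Design matrix for y ~ b0 + b1*x + b2*log(x), fitted by weighted least squares.
X_aug = sm.add_constant(np.column_stack([x_aug, np.log(x_aug)]))
fit = sm.WLS(y_aug, X_aug, weights=w_aug).fit()
print(fit.params)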
python regression linear-regression
Perhaps solve the equation $y=\max(\beta_0+\beta_1 x+\beta_2\log(x),\,c)+\epsilon$ instead of the original one? You will have to add artificial points $(x_{\text{large}}, c)$ to the data.
– Juan Esteban de la Calle
5 hours ago
I don't know the value of $c$, so somehow I imagine the loss function would need to take care of this. If I knew $c$, I have described in the question how I would go about it.
– ste_kwr
5 hours ago
Maybe you can try to fit something like a modified logit model. I have never tried something like this and I don't know anything about a possible implementation, but a logit regression has a natural limit of $1$, and you could work with an unknown limit instead. The equation would be something like this: $Y=\frac{c}{1+e^{-(\beta_0+\beta_1 x+\beta_2\log(x))}}$
– Juan Esteban de la Calle
4 hours ago
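Since the question is tagged python, here is a hedged sketch of this suggestion: scipy.optimize.curve_fit can fit the modified-logit form with the asymptote $c$ as a free parameter, and the observation weights can enter through the sigma argument (sigma proportional to $1/\sqrt{w}$). The arrays x, y, w and the starting values are hypothetical placeholders:
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical data and weights (placeholders only).
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
y = np.array([0.5, 1.4, 2.1, 2.6, 2.8, 2.9])
w = np.ones_like(y)

def model(x, c, b0, b1, b2):
    # Saturates at c for large x when the fitted slope terms are positive.
    return c / (1.0 + np.exp(-(b0 + b1 * x + b2 * np.log(x))))

# curve_fit minimizes sum(((y - f(x)) / sigma)**2), so sigma = 1/sqrt(w) reproduces WLS weighting.
p0 = [y.max(), 0.0, 0.1, 0.1]        # rough starting values
params, cov = curve_fit(model, x, y, p0=p0, sigma=1.0 / np.sqrt(w))
print(dict(zip(["c", "b0", "b1", "b2"], params)))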
1 Answer
The model you are looking for is this:
$Y=\frac{A}{1+e^{-(\beta_0+\beta_1 x+\beta_2\log(x))}}$. That exact form could not be fitted, but a very similar one could.
This code in R might work:
# Data in which X increases and Y levels off at a still-unknown limit.
R <- data.frame(X = c(1, 2, 3, 4, 5, 6, 7, 8, 9), Y = c(1, 2, 3, 3, 3, 3, 3, 3, 3))
# Fit a sigmoid with an unknown upper asymptote A; starting values help nls converge.
model <- nls(Y ~ A / (1 + exp(-(b0 + b1 * X))), data = R, start = list(A = max(R$Y), b0 = 0, b1 = 1))
summary(model)
In the output you can see how the estimate of $A$ recovers the previously unknown limit of 3.
There is a limitation to take into account, explained in the links below: not every possible inner model can be fitted; the "most inside" part should be linear. The model $\beta_0+\beta_1 x+\beta_2\log(x)$ could not be used, but the model $\beta_0+\beta_1 x$ could; take this into account.
First steps with Non-Linear Regression in R
Singular Gradient Error in nls with correct starting values
– Juan Esteban de la Calle
answered 3 hours ago
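For readers working in Python (per the question's tags), a rough translation of the R example above, assuming scipy is available, might look like this; it fits the same toy data and the same sigmoid with an unknown upper asymptote A:
import numpy as np
from scipy.optimize import curve_fit

# Same toy data as the R example: X increases, Y levels off at a still-unknown limit.
X = np.arange(1, 10, dtype=float)
Y = np.array([1, 2, 3, 3, 3, 3, 3, 3, 3], dtype=float)

def sigmoid(x, A, b0, b1):
    # The upper asymptote A plays the role of the unknown limit.
    return A / (1.0 + np.exp(-(b0 + b1 * x)))

params, _ = curve_fit(sigmoid, X, Y, p0=[Y.max(), 0.0, 1.0])
print(params)   # the first entry should come out close to the limit of 3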