Single machine learning algorithm for multiple classes of data: one-hot encoder
I have data of the following kind:
   x1  x2   y
0   0   1   1
1   0   2   2
2   0   3   3
3   0   4   4
4   1   1   4
5   1   2   8
6   1   3  12
7   1   4  16
Is it possible to construct a single machine learning model in Python/scikit-learn, by encoding column x1 appropriately, such that a simple linear regression gives predict(x1=0, x2=5) = 5 and predict(x1=1, x2=5) = 20? My actual problem has many distinct values of x1.
To illustrate the problem better: I have the following code using a one-hot encoder, and it doesn't seem to match the accuracy of training on each subset separately.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# DataFrame with x1 = 0; linear regression gives a slope of 1 as expected
df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4}],
                  columns=['x1', 'x2', 'y'])
X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 5 as expected

# DataFrame with x1 = 1; linear regression gives a slope of 4 as expected
df = pd.DataFrame(data=[{'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])
X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 5]])))  # Output is 20 as expected

# Combine the two data frames (x1 = 0 and x1 = 1)
df = pd.DataFrame(data=[{'x1': 0, 'x2': 1, 'y': 1},
                        {'x1': 0, 'x2': 2, 'y': 2},
                        {'x1': 0, 'x2': 3, 'y': 3},
                        {'x1': 0, 'x2': 4, 'y': 4},
                        {'x1': 1, 'x2': 1, 'y': 4},
                        {'x1': 1, 'x2': 2, 'y': 8},
                        {'x1': 1, 'x2': 3, 'y': 12},
                        {'x1': 1, 'x2': 4, 'y': 16}],
                  columns=['x1', 'x2', 'y'])
X = df[['x1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[0, 5]])))  # Output is 8.75
print(reg.predict(np.array([[1, 5]])))  # Output is 16.25

# Use a one-hot encoder
df = pd.get_dummies(df, columns=['x1'], prefix=['x1'])
X = df[['x1_0', 'x1_1', 'x2']]
y = df['y']
reg = LinearRegression().fit(X, y)
print(reg.predict(np.array([[1, 0, 5]])))  # Output is 8.75
print(reg.predict(np.array([[0, 1, 5]])))  # Output is 16.25
How can I use pandas and sklearn for the combined data to get the same accuracy using one machine learning model?
machine-learning python scikit-learn
Welcome to datascience. This is one good link that may help you: scikit-learn.org/stable/tutorial/basic/tutorial.html
– rnso Nov 23 '18 at 15:04

@rnso Thank you for the link. My issue is not about setting up a simple regression problem using scikit-learn. It is more about how to handle a variable like x1 that qualitatively changes the trend of the data. In the example I gave, the model must give slope = 1 when x1=0 and slope = 4 when x1=1. Is that possible with a single model, or is splitting the data into two training sets the only alternative?
– user3631804 Nov 23 '18 at 15:39

Probably you need mixed models, as on: statsmodels.org/devel/mixed_linear.html
– rnso Nov 23 '18 at 16:15

You should post some follow-up here. How did you solve your problem?
– rnso Nov 24 '18 at 8:07

If x1 will have only two options, then you can keep only one column (x1) for the joint dataframe. Then try to predict for (0, 5) and (1, 5). Post the results here.
– rnso Nov 24 '18 at 10:45
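A hedged sketch following up on the single-column suggestion: with only an additive x1 column, one linear model cannot produce two different slopes in x2, because the coefficient on x2 is shared across groups. Adding an interaction column x1*x2 (an editorial addition, not stated in the comments; the column name is illustrative) lets the slope in x2 depend on x1:

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({'x1': [0, 0, 0, 0, 1, 1, 1, 1],
                   'x2': [1, 2, 3, 4, 1, 2, 3, 4],
                   'y':  [1, 2, 3, 4, 4, 8, 12, 16]})

# Interaction feature: lets the fitted slope in x2 depend on x1.
df['x1x2'] = df['x1'] * df['x2']

X = df[['x1', 'x2', 'x1x2']]
reg = LinearRegression().fit(X, df['y'])

# Feature order at predict time: [x1, x2, x1 * x2]
print(reg.predict(np.array([[0, 5, 0 * 5]])))  # ~5
print(reg.predict(np.array([[1, 5, 1 * 5]])))  # ~20
```

On this toy data the fit is exact, so both group-specific predictions are recovered by a single model.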
edited Nov 24 '18 at 11:37
asked Nov 23 '18 at 14:31 by user3631804
1 Answer
You can treat x1 as a categorical variable, convert it to dummy variables (one-hot encoding), and then run linear regression (or any other algorithm).
Thank you. I used the one-hot encoder and it doesn't seem to give the answer. I improved the question by providing code. Can you please let me know if I did something wrong with the encoder?
– user3631804 Nov 24 '18 at 10:20
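A minimal sketch of one way the dummy-variable idea can recover the per-group slopes (the interaction step is an editorial assumption, not stated in the answer): interact each dummy with x2, so every level of x1 gets its own intercept and its own slope. This also generalizes to many values of x1.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.DataFrame({'x1': [0, 0, 0, 0, 1, 1, 1, 1],
                   'x2': [1, 2, 3, 4, 1, 2, 3, 4],
                   'y':  [1, 2, 3, 4, 4, 8, 12, 16]})

dummies = pd.get_dummies(df['x1'], prefix='x1')           # x1_0, x1_1, ...
slopes = dummies.mul(df['x2'], axis=0).add_suffix('_x2')  # x1_0_x2, x1_1_x2, ...
X = pd.concat([dummies, slopes], axis=1)

reg = LinearRegression().fit(X, df['y'])

# Predict for x2 = 5 in each group; feature order: [x1_0, x1_1, x1_0*x2, x1_1*x2]
print(reg.predict(np.array([[1, 0, 5, 0]])))  # ~5
print(reg.predict(np.array([[0, 1, 0, 5]])))  # ~20
```

Because each group has its own intercept/slope pair, the combined model matches the two separately trained regressions exactly on this data.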
answered Nov 23 '18 at 16:30 by rnso