How to select random data for two different recommender systems?
The business problem: We have two different vendors that offer personalized recommender engines and want to A/B test them. Each recommendation gives the user a personalized offer via a push message on the phone. During the testing period, we should give each provider a dataset with details about the customers (purchase history, in-app events, etc.). Each vendor will receive a dataset with identical kinds of information but for different clients.
What is the best method to choose the two datasets so that they would be similar in terms of client behaviour?
I assume that simply handing them random data from our database wouldn't be a rigorous method, so one idea I have in mind is to apply DBSCAN clustering to our database and then randomly pick clients from each cluster - I don't know whether this is the best approach. The full database has 200k clients, and each dataset should contain 5k clients.
Example: after DBSCAN clustering there are k=10 clusters, so I randomly pick elements from each cluster and split them into Dataset01 and Dataset02.
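For what it's worth, the cluster-stratified split described in the example could be sketched as follows (a minimal sketch assuming cluster labels have already been computed; the function name and demo labels are purely illustrative):

```python
import math

import numpy as np


def stratified_two_way_split(cluster_labels, n_per_dataset=5000, seed=42):
    """Sample 2*n_per_dataset clients proportionally to cluster size,
    then split the sample at random into two equal datasets."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(cluster_labels)
    total = 2 * n_per_dataset
    picked = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        # slightly over-allocate with ceil so the trim below lands exactly on `total`
        k = min(len(idx), math.ceil(total * len(idx) / len(labels)))
        picked.append(rng.choice(idx, size=k, replace=False))
    picked = rng.permutation(np.concatenate(picked))[:total]
    return picked[:n_per_dataset], picked[n_per_dataset:]


# toy demo: pretend clustering produced k=10 clusters over 200k clients
rng = np.random.default_rng(0)
labels = rng.integers(0, 10, size=200_000)
ds1, ds2 = stratified_two_way_split(labels, n_per_dataset=5000)
```

Because both halves are drawn from the same stratified sample, each cluster is represented in roughly equal proportion in Dataset01 and Dataset02.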
Any suggestions?
Tags: dataset, statistics, recommender-system, ab-test
asked 19 hours ago by Remus Raphael
edited 4 hours ago by Shaido
1 Answer
Welcome @Remus Raphael :) - Your approach is a sound option.
More specifically, since a density-based algorithm is already working for you, I'd recommend the HDBSCAN clustering algorithm, which should perform better and has a unique built-in cluster validation (based on the DBCV algorithm).
Your general pipeline could then be:
- optional pre-processing
- optional NLP / TF-IDF for meaningful text features
- optional dimensionality reduction (I found t-SNE and TruncatedSVD to work nicely with HDBSCAN on textual data)
- HDBSCAN tuning over different parameters and distance metrics
- finally, when you're satisfied with the clustering quality, you can simply start with the 2 largest clusters for your A/B testing
I'd love to hear your feedback on your actual data :)
Thank you for your answer - a very detailed explanation! Do you think it would be best to 1) choose sample data from the 2 largest clusters, or 2) include data from each cluster for diversity and split it equally into the 2 testing groups?
– Remus Raphael
18 hours ago
Assuming your measure of success is clients' actual purchases or PPC/PPV: I assume the whole purpose of the clustering is to make sure each vendor gets a different client segment (so you can measure their success?). If so, then stick with the original plan. If not, and you have another way of tracing a success back to a vendor, then why not send both of them the top N clusters and let them compete (like kaggle.com does)? BTW, why sample at all - is there a limitation on dataset size from the vendors' side?
– mork
18 hours ago
We have a limitation on the dataset size on our side. We will send personalized push notifications. The goal is to have 2 client groups that are similar in terms of profile and past transactions but that also cover client diversity (low income, high income, high-frequency, low-frequency, etc.) -> maybe weighted clustering would be a solution.
– Remus Raphael
17 hours ago
I see; in that case, sample all N clusters and randomly split the sample between the 2 vendors. Then track the measure of success on your side (which provider's ad was pushed and whether it succeeded). You can set the min_cluster_size threshold to the size of the diverse groups you're interested in, and then sampling all clusters should do the job for you (auto-weighted for you).
– mork
17 hours ago
Thank you for your suggestions!
– Remus Raphael
16 hours ago
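As a sanity check on the "similar in terms of profile" goal discussed in these comments, one could compare feature distributions between the two 5k-client groups, e.g. with a two-sample Kolmogorov-Smirnov test (a sketch; the feature names and distributions are invented stand-ins for real client data):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(1)
# invented stand-ins for the real per-client features of each group
group_a = {"income": rng.normal(50_000, 15_000, 5_000),
           "purchases": rng.poisson(12, 5_000)}
group_b = {"income": rng.normal(50_000, 15_000, 5_000),
           "purchases": rng.poisson(12, 5_000)}

for feature in group_a:
    stat, p = ks_2samp(group_a[feature], group_b[feature])
    # a large p-value gives no evidence the two groups differ on this feature
    print(f"{feature}: KS statistic {stat:.3f}, p-value {p:.3f}")
```

If any feature shows a large KS statistic, the split can be redone (or re-stratified on that feature) before the datasets are sent to the vendors.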
answered 18 hours ago by mork
Thanks for contributing an answer to Data Science Stack Exchange!