Group data without losing information
Context
Imagine that I have a dataset about sending messages. Each row has a user_id, a batch_id, an is_open field (boolean) and an is_clicked field (boolean).
So one row means that one message was sent. It might have been opened (is_open is true) or not (is_open is false). Same for clicked.
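For concreteness, the schema can be sketched with a few made-up rows (column names are from the question; the values are invented):

```python
# Illustrative rows only; user_id / batch_id values are invented.
rows = [
    {"user_id": 1, "batch_id": "2024-01", "is_open": True,  "is_clicked": True},
    {"user_id": 1, "batch_id": "2024-02", "is_open": False, "is_clicked": False},
    {"user_id": 2, "batch_id": "2024-01", "is_open": True,  "is_clicked": False},
]

# The overall open rate is just the fraction of rows with is_open set.
open_rate = sum(r["is_open"] for r in rows) / len(rows)
print(open_rate)  # → 0.666… (2 of 3 messages opened)
```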
For this question, corner cases (e.g. a message clicked without being opened) are not relevant.
I want to graph open rate vs. click rate.
Question
How can I group these rows in a valid way, without discarding most of them?
Long version
The crux of my problem is that every single message has an open (and click) rate of exactly 0 or 100%.
I could first group messages per user, but then I would have to discard users who received fewer than 5 or 10 messages, to avoid peaks at 0/20/40/60/80/100 %. That is a lot of perfectly valid data to drop (and furthermore, I would like to compute things like median time to open, which does not lend itself well to multi-step calculation). It would also take a while to accumulate enough historical data.
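The per-user approach above could be sketched like this (plain Python; the minimum-message threshold and the boolean field names come from the question, the data is made up):

```python
from collections import defaultdict

def per_user_rates(rows, min_messages=5):
    """Group messages by user; return (open_rate, click_rate) per user,
    keeping only users with at least min_messages messages."""
    sent = defaultdict(int)
    opened = defaultdict(int)
    clicked = defaultdict(int)
    for r in rows:
        uid = r["user_id"]
        sent[uid] += 1
        opened[uid] += r["is_open"]      # True counts as 1
        clicked[uid] += r["is_clicked"]
    return {
        uid: (opened[uid] / n, clicked[uid] / n)
        for uid, n in sent.items()
        if n >= min_messages  # users below the threshold are discarded
    }

# Toy data: user 1 has 5 messages (4 opened), user 2 only 2 (dropped).
rows = [{"user_id": 1, "is_open": True, "is_clicked": False}] * 4 \
     + [{"user_id": 1, "is_open": False, "is_clicked": False}] \
     + [{"user_id": 2, "is_open": True, "is_clicked": True}] * 2

print(per_user_rates(rows))  # → {1: (0.8, 0.0)}
```

The `if n >= min_messages` filter is exactly where the data loss happens: every user it drops takes their messages out of the graph entirely.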
I could group by batch. But I could have, for instance, one batch of 500k users per month. After a year I would only have 12 points on my graph, even though I had already sent 6M messages.
My naive idea would be to just take rows in bunches of e.g. 1000 and compute the open and click rate for each random bunch. That does not seem intellectually correct to me.
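The bunching idea can be sketched as follows (shuffle, then chunk; the bunch size of 1000 is from the question, scaled down here to 100 on toy data). One consequence worth noting: with a random shuffle, every bunch estimates the same overall rate, so the scatter between points is pure sampling noise rather than structure in the data.

```python
import random

def bunch_rates(rows, bunch_size, seed=0):
    """Shuffle rows, split them into fixed-size bunches, and return a
    list of (open_rate, click_rate) points, one per full bunch."""
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # seeded for reproducibility
    points = []
    for i in range(0, len(rows) - bunch_size + 1, bunch_size):
        bunch = rows[i:i + bunch_size]
        points.append((
            sum(r["is_open"] for r in bunch) / bunch_size,
            sum(r["is_clicked"] for r in bunch) / bunch_size,
        ))
    return points

# Toy data: 40% open rate, 10% click rate overall.
rows = [{"is_open": i % 5 < 2, "is_clicked": i % 10 == 0}
        for i in range(1000)]
points = bunch_rates(rows, bunch_size=100)
print(len(points))  # → 10 bunches, i.e. 10 graph points
```

Since the bunches partition all (shuffled) rows into equal sizes, the mean of the per-bunch rates equals the overall rate; each individual point just fluctuates around it.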
The actual language/implementation does not matter. I want to understand how to do this; actually doing it will come later.
Tags: bigdata, preprocessing
Asked 16 hours ago by Guillaume (rep 1063); edited 7 mins ago.