Are there any good general techniques for binning/histogramming arbitrary data?














Let's say we want to histogram a finite set of measurements of some quantity. It is straightforward to calculate the usual statistical quantities for our sample, such as the mean and the variance. Let's assume we can clean up our data by identifying outliers and moving them into underflow and overflow bins, thus defining more or less optimal min and max values for the plotting range. But how would one decide on the number and the size of the bins? I would like to know if there are methods to find the optimal binning for the cases of fixed and variable bin sizes.
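For reference, several widely used fixed-width rules of thumb (Sturges, Scott, Freedman–Diaconis) are implemented directly in NumPy, and quantile-based edges give a common variable-width alternative. A minimal sketch, using an illustrative normally distributed sample rather than real data:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=14.5, scale=0.15, size=10_000)

# Fixed-width: NumPy ships several automatic bin-selection rules.
for rule in ["sturges", "scott", "fd"]:
    edges = np.histogram_bin_edges(data, bins=rule)
    print(f"{rule:>8}: {len(edges) - 1} bins, width ~ {edges[1] - edges[0]:.4f}")

# Variable-width: equal-count (quantile) bins, so each bin holds
# roughly the same number of points.
quantile_edges = np.quantile(data, np.linspace(0, 1, 11))  # 10 bins
counts, _ = np.histogram(data, bins=quantile_edges)
print("quantile bin counts:", counts)
```

The quantile scheme trades constant bin width for roughly constant bin occupancy, which can be preferable for heavy-tailed data.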






















  • I'm voting to close this question as off-topic because it's a stats question answered here: stats.stackexchange.com/questions/798/…
    – Spacedman, Jun 18 '16 at 17:04

















data-mining visualization







asked Jun 17 '16 at 7:12









plexoos


1 Answer



















I don't know if this is what you want, but here is one way to choose the binning.




  1. Count the number of data points in your dataset.

  2. Take the square root of the number of data points and round up to get the initial number of bins: $\mathit{InitialNumberOfBins} = \lceil \sqrt{\mathit{NumberOfDataPoints}} \rceil$.

  3. Divide the data range ($\mathit{Max} - \mathit{Min}$) by the initial number of bins to get the bin width: $\mathit{BinWidth} = (\mathit{Max} - \mathit{Min}) / \mathit{InitialNumberOfBins}$.
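The steps above can be sketched as follows; `sqrt_rule_bins` is a hypothetical helper name, and the final division is read as the resulting bin width:

```python
import math
import numpy as np

def sqrt_rule_bins(data):
    """Square-root choice: ceil(sqrt(n)) equal-width bins over [min, max]."""
    n = len(data)                                    # step 1: count the points
    num_bins = math.ceil(math.sqrt(n))               # step 2: initial number of bins
    bin_width = (max(data) - min(data)) / num_bins   # step 3: resulting bin width
    return num_bins, bin_width

# Nine sample measurements (hypothetical values).
data = [14.03, 14.20, 14.35, 14.40, 14.50, 14.60, 14.75, 14.80, 14.93]
num_bins, width = sqrt_rule_bins(data)               # 3 bins of width 0.30
counts, edges = np.histogram(data, bins=num_bins)
```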






answered Jun 17 '16 at 7:37 – Darrin Thomas


  • (1 - 0) / sqrt(10000) = 0.01 bins?? How does that work?
    – K3---rnc, Jun 18 '16 at 13:22












  • You take the square root of the number of data points, e.g. √144 = 12 initial bins.
    – Darrin Thomas, Jun 18 '16 at 13:36










  • Yes, 10k data points, normalized to [0, 1], as above. Gives 0.01 bins.
    – K3---rnc, Jun 18 '16 at 15:57










  • I don't remember saying to normalize the data.
    – Darrin Thomas, Jun 18 '16 at 21:10










  • Sorry, it was already normalized. But I just used it as an example. The real data has values between 14.03 and 14.93, roughly normally distributed. So?
    – K3---rnc, Jun 19 '16 at 17:05











