Detect sensitive data from unstructured text documents
I know this question is broad, but I need advice on whether what I want to do is achievable.
I have around 2,500 documents in which the sensitive data has been replaced by four dots. I do not have the original documents, so I wonder whether I can use these redacted documents to build a model that detects sensitive data in new documents (where the sensitive data has not been removed). I want to apply machine learning or deep learning approaches, but as far as I know, training would require the original dataset with annotated sensitive data, which I cannot obtain.
I am new to this field, so any advice would be very much appreciated.
machine-learning deep-learning nlp information-retrieval automatic-summarization
asked yesterday by user971961
1 Answer
Welcome to the site! Assuming I understand your problem correctly, I think you can achieve a working model.
If I were in your position, I would:
- Obtain the cleanest data possible from the documents. For example, you don't say whether the docs are already plain text or whether you need something like OCR first. Having the cleanest possible dataset will be key.
- Make sure you have a consistent marker for the sensitive data. You mention four dots - is that the case for ALL instances? If not, clean that data now.
- You'll need to do standard NLP cleansing such as removing punctuation, though you may or may not want to keep stop words (this will be part of your model testing). Also - and this is key - be 100% certain that the four dots are treated as a single token in your tokenization process; verify this before committing to your tokenization file.
- Take all the documents and create three-word n-grams (trigrams). Then separate the n-grams that contain the sensitive-data marker from those that don't. That, essentially, becomes your labeled dataset, and you should label it accordingly.
- A base model would use all entries with the sensitive marker in the second position of the trigram (the middle of the three words). Train a neural network on that and see what kind of results you get. NOTE that the four dots themselves will not be an input; only the preceding and following words will be. You can treat this as a binary classification problem - the middle word is either sensitive or it isn't.
- Future iterations of the model could use a multi-class approach, with classes like (1) no sensitive data, (2) sensitive data in the first position, (3) sensitive data in the second position, (4) sensitive data in the third position, and so on.
- From there, you can experiment with the size of the n-gram, since the immediately adjacent words may or may not carry enough signal for the predictions. There's no limit to how far you can take this - you won't know until you start modeling.
Finally, the project becomes even more interesting at the prediction phase with new data. You'll break each new document into n-grams in the same way, produce a prediction for each one, and assemble the output. In other words, you'll break the document down only to turn around and build it back up - that should be a fun script to write! Good luck with this, and let us know how it turns out.
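As a minimal sketch of the tokenize-and-label steps above (assuming the documents are plain text and the redaction marker literally appears as four dots `....`; the `<redacted>` placeholder token and the sample sentence are my own, purely illustrative):

```python
import re

REDACTION = "...."  # assumed marker used in the redacted documents


def tokenize(text):
    # Lowercase, protect the redaction marker as a single token,
    # then strip remaining punctuation.
    text = text.lower().replace(REDACTION, " <redacted> ")
    text = re.sub(r"[^\w<>\s]", " ", text)
    return text.split()


def labeled_trigrams(text):
    """Yield ((left, right), label) pairs from word trigrams:
    label is True when the middle token is the redaction marker."""
    tokens = tokenize(text)
    for left, mid, right in zip(tokens, tokens[1:], tokens[2:]):
        # Skip windows where the marker sits at the edge; the base
        # model only uses the middle position.
        if left != "<redacted>" and right != "<redacted>":
            yield (left, right), mid == "<redacted>"


doc = "The claimant, ...., resides at .... and filed the complaint in March."
for context, is_sensitive in labeled_trigrams(doc):
    print(context, is_sensitive)
```

The `(left, right)` pairs become the model inputs and the boolean becomes the binary target, exactly as in the base model described above.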
Thank you very much for your informative answer. Forgive me if I'm asking basic questions: are you suggesting I approach the problem as a classification task where, given the first and third words, the model predicts the probability that the middle word is sensitive? Do you have any suggestions on which neural network architecture to use? Also, do you think models like LSTMs would be possible?
– user971961, yesterday
@user971961 In effect, yes. Train the model on two-word combos that do and don't have a sensitive word in between. That would be your binary classifier, and you can use it as a baseline model (it may very well be all you need for your problem). From there, I would turn the training set into a multi-class problem where, for a string of X grams, the model tells you whether a word there precedes a sensitive word. That should be more than enough to get you started and get a working model up and running.
– I_Play_With_Data, yesterday
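The multi-class extension described in the comment above can be sketched as follows (a toy illustration; the `<redacted>` placeholder token and window size of 3 are my own assumptions):

```python
def window_label(window, marker="<redacted>"):
    """Class 0: window contains no sensitive token.
    Class k (1-based): the sensitive token is at position k."""
    for k, tok in enumerate(window, start=1):
        if tok == marker:
            return k
    return 0


tokens = ["the", "claimant", "<redacted>", "resides", "at"]
# Slide a window of three tokens across the document.
windows = [tokens[i:i + 3] for i in range(len(tokens) - 2)]
for w in windows:
    print(w, window_label(w))
```

Each window then gets one of `n + 1` classes for a window of `n` words, which is the multi-class training set the comment suggests.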
@user971961 Yes, an LSTM would absolutely be a possible model to use. LSTMs usually work well for NLP problems, and I can see a path forward that has you using an LSTM network to solve this. But that shouldn't be your concern right now - focus on getting the datasets you're going to need, and try a variety of models, not just LSTMs.
– I_Play_With_Data, yesterday
answered yesterday by I_Play_With_Data