Resource and useful tips on Transfer Learning in NLP
I have a small amount of labeled data for training and testing a DNN. The main goal of my work is to train a model that can do binary classification of text. For this purpose I have around 3,000 labeled examples and 60,000 unlabeled examples available. The data consists of instructions, e.g. "open the door" [label 1], "give me a cup of water" [label 1], "give me money" [label 0]. I have heard that transferring knowledge from other models could help a lot in this setting. Can anyone point me to useful resources on transfer learning in the NLP domain?
I have already run a few experiments. I used GloVe as pretrained embeddings and tested on my labeled data, but got only around 70% accuracy. I also tried embeddings built from my own data (63k examples) and then trained the model, which reached 75% accuracy on the test data. My model architecture is given below.
Q1: A quick question: is it still called transfer learning if I just use GloVe embeddings in my model?
Any kind of help is welcome, including ideas for building a model without transfer learning.
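For reference, the kind of GloVe-plus-classifier pipeline described above can be set up as in the sketch below (an illustration only, not the exact architecture referenced in the question; the GloVe file path, dimensions, toy data, and hyperparameters are all assumptions):

```python
# Minimal sketch: load pretrained GloVe vectors into a Keras Embedding layer
# for binary text classification. Paths, sizes, and the toy data are
# illustrative assumptions, not the asker's actual setup.
import numpy as np
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Embedding, Conv1D, GlobalMaxPooling1D, Dense

texts = ["open the door", "give me a cup of water", "give me money"]  # toy stand-in
labels = np.array([1, 1, 0])

max_words, max_len, embed_dim = 20000, 30, 100

tokenizer = Tokenizer(num_words=max_words)
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=max_len)

# Build a word-index -> vector matrix from a GloVe text file (path assumed).
glove = {}
with open("glove.6B.100d.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.split()
        glove[parts[0]] = np.asarray(parts[1:], dtype="float32")

matrix = np.zeros((max_words, embed_dim))
for word, i in tokenizer.word_index.items():
    if i < max_words and word in glove:
        matrix[i] = glove[word]  # row 0 stays zero for padding

model = Sequential([
    Embedding(max_words, embed_dim, weights=[matrix],
              input_length=max_len, trainable=False),  # frozen pretrained vectors
    Conv1D(128, 5, activation="relu"),
    GlobalMaxPooling1D(),
    Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, labels, epochs=2)
```

With only ~3k labeled examples, keeping the pretrained vectors frozen (trainable=False) limits overfitting; unfreezing them for a few final epochs is a common refinement.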
deep-learning nlp convnet word-embeddings transfer-learning
asked Aug 20 '18 at 5:12 by faysal
1 Answer
If you use a pre-trained model on data distinct from the data it was originally trained on, that's transfer learning. Your two-class sentence corpus is distinct from the data that the GloVe embeddings were generated on, so this could be considered a form of transfer learning. This might be a helpful explainer for general ideas around pre-training (and why it's a worthy pursuit).
Recent work in the NLP transfer-learning space that I'm aware of includes ULMFiT by Howard and Ruder of fast.ai (here's the paper if you prefer that). OpenAI also has recent work extending the Transformer model with an unsupervised pre-training, task-specific fine-tuning approach.
As for your task, I think it might be more helpful to explore research around sentence classification than to dig deeply into transfer learning. For your purposes, embeddings seem to be a means of getting a reasonable representation of your data rather than a way to prove that Common Crawl (or some other dataset) generalizes to your corpus.
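For example, the widely cited CNN baseline from Kim (2014), "Convolutional Neural Networks for Sentence Classification", comes down to a few lines in Keras (a minimal sketch; the parallel 3/4/5 filter widths follow the paper, the other hyperparameters are assumptions):

```python
# Minimal sketch of a Kim (2014)-style CNN for binary sentence classification.
from keras import layers, models

max_words, max_len, embed_dim = 20000, 30, 100

inp = layers.Input(shape=(max_len,))
emb = layers.Embedding(max_words, embed_dim)(inp)  # optionally seeded with GloVe

# Parallel convolutions over 3-, 4-, and 5-gram windows, max-pooled and merged.
pooled = [layers.GlobalMaxPooling1D()(layers.Conv1D(100, k, activation="relu")(emb))
          for k in (3, 4, 5)]
merged = layers.concatenate(pooled)
out = layers.Dense(1, activation="sigmoid")(layers.Dropout(0.5)(merged))

model = models.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```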
Hope that helps, good luck!
answered Aug 20 '18 at 15:14 by tm1212
Your comments are really helpful. Yes, I am aware of fast.ai. I was also thinking of doing sentence classification without transfer learning, and I am getting around 87-90% accuracy that way. Do you think that is reasonable accuracy when training 673,606 parameters with around 3k labeled examples? – faysal, Aug 26 '18 at 20:33
I don't know enough about the data, difficulty of the task, or the final application of what you're working on to have a well-founded answer for you. – tm1212, Aug 27 '18 at 15:10