Loading Data into a DW: Direct Insert from Preprocessing vs. Preprocessing and Then Loading from CSV Files
I have a preprocessing script (a Google Cloud Function) that generates files (stored in Google Drive). I want to load those files into my DW (BigQuery).
What are the pros and cons of:
1) Running the preprocessing script, generating the files, and then loading those files,
vs.
2) Loading the data directly from the preprocessing script (skipping file generation and doing a direct insert into the DW from the preprocessing script)?
I am interested in framing the question not only in terms of technical details and cost, but also in terms of data-processing methodology. I think the question leads to the dilemma of loading online versus loading in a batch process.
I have added some conclusions of my own as an answer. Still, it would be great to have more comments on the technical perspective: when to use a direct transfer and when to use a file for staging preprocessing results.
Thanks!
bigdata preprocessing
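For concreteness, the staging step in option 1 amounts to serializing the preprocessing output to a file before a bulk load. A minimal sketch (the row layout and field names are made-up examples; the actual load into BigQuery would then happen from the staged file):

```python
import csv
import io


def rows_to_csv(rows, fieldnames):
    """Serialize preprocessing output (a list of dicts) to CSV text.

    In option 1 this text would be written to a staged file and later
    bulk-loaded into the DW; in option 2 the rows would instead be
    inserted directly, with no intermediate file.
    """
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()


# Hypothetical preprocessing output:
rows = [{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]
csv_text = rows_to_csv(rows, ["id", "value"])
```

The staged file is what makes the batch path replayable: the same CSV can be re-loaded into a new table without re-running the preprocessing.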
asked Jan 11 at 22:16 by Gabriel (edited)
1 Answer
Loading the data directly from the preprocessing script into the DW means not persisting the results of your processing anywhere else.
Not persisting those results may imply:
- Needing to redo the processing if, for some reason, the data is needed again, e.g. for a new DW or for new research.
- Not keeping the data as a data sink, and so not contributing to a data-lake architecture, where data sinks are kept for future reuse in unanticipated situations.
From a technical perspective, some trade-offs are mentioned in the Google BigQuery documentation on streaming data, among which the following stand out:
- Consistency management, e.g. handling errors or duplicates.
- A delay before streamed data becomes available for copy and export operations.
Still, it would be great to have more comments on the technical perspective: when to use a direct transfer and when to use a file for staging preprocessing results.
Thanks
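The two loading paths can be sketched with the BigQuery Python client. This is a sketch under assumptions: it presumes `google-cloud-bigquery` is installed and authenticated, and every bucket, project, dataset, and table identifier below is a hypothetical placeholder.

```python
def load_staged_file(client, gcs_uri, table_id, job_config=None):
    """Option 1: bulk-load a staged file (e.g. a CSV in Cloud Storage).

    Batch load jobs carry no per-row insert cost, and the staged file
    remains in the bucket as a reusable data sink.
    """
    job = client.load_table_from_uri(gcs_uri, table_id, job_config=job_config)
    return job.result()  # block until the load job completes


def stream_rows(client, table_id, rows):
    """Option 2: direct (streaming) insert, with no intermediate file.

    Streamed rows are billed per insert and sit in a streaming buffer,
    where they are temporarily unavailable to copy/export operations.
    """
    errors = client.insert_rows_json(table_id, rows)  # [] on success
    if errors:
        raise RuntimeError(f"streaming insert failed: {errors}")
    return len(rows)
```

With a real client this would be driven by something like `client = bigquery.Client()`, a URI such as `"gs://my-bucket/staged/results.csv"`, a table id such as `"my_project.my_dataset.my_table"`, and a `bigquery.LoadJobConfig(source_format=bigquery.SourceFormat.CSV, skip_leading_rows=1, autodetect=True)` for the CSV load; all of those names are illustrative, not from the question.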
answered by Gabriel