DBCC CLEANTABLE batch size explanation
I have a very large table with 500 million rows and a Text column that I will be dropping.
In my Dev environment, I have dropped the column and begun the reclaim process, but I'm not sure what the batch size in the DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 100000) statement actually does.
I tried setting it to 5, expecting it to check the first 5 rows and stop: DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 5). It took 28 hours.
So I restored the database, set it to 100,000, and it took 4 hours.
Actual Question:
Does the batch size tell DBCC CLEANTABLE how many rows to process at a time, with it continuing to run 100K-row batches until it has gone through all 500 million rows?
Or, once the 100,000-row run finishes, do I have to run it again until all 500 million rows have been processed?
In my second test (running the 100K batch size once) I was able to reclaim 30 GB. Then I ran an index reorg on ALL indexes and reclaimed an additional 60 GB.
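For reference, the sequence as run, tidied up (TextCol is a placeholder for the real column name; the database and table names are from the statements above):

    -- Drop the Text column (TextCol stands in for the actual column name).
    ALTER TABLE dbo.LargeTbl DROP COLUMN TextCol;

    -- Reclaim the space; these are the two batch sizes I tested.
    DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 5);        -- took 28 hours
    DBCC CLEANTABLE (MyDb, 'dbo.LargeTbl', 100000);   -- took 4 hours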
sql-server sql-server-2016 dbcc
edited 15 hours ago by Paul White♦
asked 15 hours ago by Tomasz
2 Answers
In addition to the great answer by armitage, you probably do not need to use DBCC CLEANTABLE in your scenario.
You state:
Then I ran an index reorg on ALL indexes and reclaimed an additional 60 GB.
The best-practice guidance in the Microsoft documentation says:
DBCC CLEANTABLE should not be executed as a routine maintenance task. Instead, use DBCC CLEANTABLE after you make significant changes to variable-length columns in a table or indexed view and you need to immediately reclaim the unused space. Alternatively, you can rebuild the indexes on the table or view; however, doing so is a more resource-intensive operation.
It seems like time and space are your biggest goals. Generally, rebuilding an index is quicker (but more resource-intensive) than a reorg.
As you are working on a development server, just rebuild your indexes; you will get the benefits of the index reorg and DBCC CLEANTABLE at the same time, and probably much quicker.
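A minimal sketch of that approach, assuming the table from the question (dbo.LargeTbl); the ONLINE option requires Enterprise edition, so treat it as optional:

    -- Rebuild every index on the table in one statement.
    ALTER INDEX ALL ON dbo.LargeTbl REBUILD;

    -- Or, online (Enterprise edition only), to keep the table available during the rebuild:
    -- ALTER INDEX ALL ON dbo.LargeTbl REBUILD WITH (ONLINE = ON);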
Note: Rebuild and Reorganize are not the same thing:
- Reorganize and Rebuild Indexes (Microsoft)
- Rebuild or Reorganize: SQL Server Index Maintenance (Brent Ozar)
- SQLskills SQL101: REBUILD vs. REORGANIZE (Paul Randal)
I thought the same thing and ran the test in reverse: 1) dropped the column, 2) defragged all indexes (only reclaimed 30 GB), 3) ran CLEANTABLE and got 60 GB... It looks like I need both; this is a one-time thing.
– Tomasz
14 hours ago
@Tomasz I edited my answer. I'm not sure what you mean by 'defrag all indexes', but Reorg (what you said in your question) and Rebuild (what I said in this answer) are not the same thing.
– James Jenkins
14 hours ago
Ah, sorry. I reorganized them each time. I will run one more test where I drop the column and rebuild the indexes, and share the results. Thank you.
– Tomasz
13 hours ago
According to the Microsoft documentation, the batch size tells DBCC CLEANTABLE the number of rows to process per transaction. This is the number of rows DBCC CLEANTABLE works through internally in each transaction as the process runs.
By taking the example in the documentation, modifying it to add a million rows, and then running the sample script multiple times with varying values for batch size (see below), it appears that specifying a small batch size increases the execution time, as DBCC CLEANTABLE only operates on the specified number of rows per transaction.
- No batch size specified
- A batch size of 5
- A batch size of 100,000
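A sketch of the kind of test harness described above; the table and column names are illustrative rather than the exact documentation script, and MyDb stands for the test database:

    -- Build a test table with a wide variable-length column (names are illustrative).
    CREATE TABLE dbo.CleanTableTest (
        id   INT IDENTITY(1, 1),
        col1 CHAR(10),
        col2 VARCHAR(3000)
    );

    -- Load roughly a million rows.
    INSERT INTO dbo.CleanTableTest (col1, col2)
    SELECT TOP (1000000) 'x', REPLICATE('y', 2000)
    FROM sys.all_columns AS a
    CROSS JOIN sys.all_columns AS b;

    -- Drop the wide column; the space is not released until CLEANTABLE (or a rebuild) runs.
    ALTER TABLE dbo.CleanTableTest DROP COLUMN col2;

    -- Re-run the whole script once per test, timing one of these each time:
    DBCC CLEANTABLE (MyDb, 'dbo.CleanTableTest');             -- no batch size specified
    -- DBCC CLEANTABLE (MyDb, 'dbo.CleanTableTest', 5);       -- batch size of 5
    -- DBCC CLEANTABLE (MyDb, 'dbo.CleanTableTest', 100000);  -- batch size of 100,000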
So just to confirm: the process will go through the entire 500 million rows, just "exclusively locking" 100K at a time, and also allowing log backups to occur in between.
– Tomasz
14 hours ago