Linux Disk Write-Speed varies with dd command

Why is there so much difference in disk write speed when testing with the dd command without bs and with bs?



dd if=/dev/zero of=/tmp/test.log count=100000000
100000000+0 records in
100000000+0 records out
51200000000 bytes (51 GB) copied, 289.564 s, 177 MB/s

dd if=/dev/zero of=/tmp/test1.log bs=1G count=50 oflag=dsync
50+0 records in
50+0 records out
53687091200 bytes (54 GB) copied, 150.427 s, 357 MB/s

dd if=/dev/zero of=/tmp/test2.log count=100000000
100000000+0 records in
100000000+0 records out
51200000000 bytes (51 GB) copied, 288.614 s, 177 MB/s

dd if=/dev/zero of=/tmp/test3.log bs=1G count=50 oflag=direct
50+0 records in
50+0 records out
53687091200 bytes (54 GB) copied, 109.774 s, 489 MB/s


I googled around but did not find a concrete explanation; however, there is a good article here which has a few good caveats.
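
For a more controlled comparison, the only thing that changes between runs should be bs: same total size, same flush behaviour, and the page cache dropped before each run. A minimal sketch, assuming GNU dd and root access (/tmp/ddtest.bin and the 1 GiB total are arbitrary choices for illustration, not one of the files above):

# Drop the page cache, then write the same 1 GiB with two different block sizes,
# forcing the data to disk at the end of each run so the numbers are comparable.
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=/tmp/ddtest.bin bs=512 count=2097152 conv=fsync   # 1 GiB in 512-byte blocks
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1024 conv=fsync       # the same 1 GiB in 1 MiB blocks
rm -f /tmp/ddtest.bin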
linux hard-drive dd

asked Jan 21 at 8:18
pygo


  • What's the filesystem on /tmp? Is it nearly full/empty? Spinning hard drive, USB flash, SSD? Did you clear the disk cache before each test? Absolutely no other programs could be reading or writing to the same drive, or any drive?
    – Xen2050, Jan 21 at 8:25

  • @Xen2050, sorry I was away. It's a disk-based file system with nearly 1 TB of space, of which 700 GB is free.
    – pygo, Jan 21 at 9:25
1 Answer
Without the bs parameter, dd falls back to its default block size, which is 512 bytes. This means that:




  • For every 512 bytes of payload you incur the overhead of a separate write request (the strace sketch below illustrates this).

  • If 512 bytes is not the optimal block size for your device (e.g. drives with 4K sectors and 512-byte emulation, or SSDs), you drive the device far from its optimal working point.
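
A rough way to see the first point is to count the write() system calls issued for the same amount of data at two block sizes. This is only an illustrative sketch, assuming strace and GNU dd are available; /tmp/ddtest.bin is a throwaway file:

# Count write() syscalls for the same 1 MiB written with 512-byte vs 1 MiB blocks.
strace -c -e trace=write dd if=/dev/zero of=/tmp/ddtest.bin bs=512 count=2048 2>&1 | tail -n 3
strace -c -e trace=write dd if=/dev/zero of=/tmp/ddtest.bin bs=1M count=1 2>&1 | tail -n 3
rm -f /tmp/ddtest.bin

The first run issues roughly 2048 write calls, the second only a handful, for the same payload.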


Depending on your hardware, it might be possible to get even better numbers with a smaller bs, as the writes will then fit into the device's cache. E.g. for a RAID controller with 1 GB of cache, you might want to try a 10 MB block size.
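
To find the sweet spot for a particular device empirically, a small sweep over block sizes can help. A minimal sketch, assuming bash and GNU dd; the file name, the 1 GiB total, and the oflag=direct choice (to bypass the page cache) are arbitrary illustration, not part of the answer:

# Write the same 1 GiB total with several block sizes and compare the reported throughput.
for bs_count in "64K 16384" "1M 1024" "8M 128" "64M 16" "1G 1"; do
    set -- $bs_count
    printf 'bs=%s: ' "$1"
    dd if=/dev/zero of=/tmp/ddtest.bin bs="$1" count="$2" oflag=direct 2>&1 | tail -n 1
    rm -f /tmp/ddtest.bin
done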






edited Jan 21 at 8:56
answered Jan 21 at 8:28
Eugen Rieck
  • This is a really nice answer with substantial background on the actual problem.
    – pygo, Jan 21 at 9:26