Why do SSD sectors have limited write endurance?























I often see people mention that SSD sectors have a limited number of writes before they go bad, especially when compared to classic (rotating disc) hard drives where most of those fail due to mechanical failure, not sectors going bad. I am curious as to why that is.



I am looking for a technical yet consumer-oriented explanation, i.e. the exact component that fails and why frequent writes affect the quality of that component, but explained in such a way that it does not require an extreme amount of knowledge about SSDs.






























  • I believe this would be an interesting read for you: techreport.com/review/24841/… – MustSeeMelons, Aug 1 '16 at 12:07






  • See also electronics.stackexchange.com/questions/48395/… – pjc50, Aug 1 '16 at 12:47






  • This rests on the precept that there are things you can use forever and never wear down – random, Aug 2 '16 at 13:00






  • superuser.com/questions/215463/… superuser.com/questions/31324/… superuser.com/questions/410166/… – random, Aug 2 '16 at 13:35






  • Don't forget the current economy. While physical degradation is a fact, it is very often a fact defined at the blueprint stage, with major factors such as cost and planned obsolescence. – helena4, Aug 3 '16 at 9:02















7 Answers



























Copied from "Why Flash Wears Out and How to Make it Last Longer":




NAND flash stores the information by controlling the amount of
electrons in a region called a “floating gate”. These electrons change
the conductive properties of the memory cell (the gate voltage needed
to turn the cell on and off), which in turn is used to store one or
more bits of data in the cell. This is why the ability of the floating
gate to hold a charge is critical to the cell’s ability to reliably
store data.



Write and Erase Processes Cause Wear



When written to and erased during the normal course of use, the oxide
layer separating the floating gate from the substrate degrades,
reducing its ability to hold a charge for an extended period of time.
Each solid-state storage device can sustain a finite amount of
degradation before it becomes unreliable, meaning it may still
function but not consistently. The number of writes and erasures (P/E
cycles) a NAND device can sustain while still maintaining a
consistent, predictable output, defines its endurance.
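
To make the P/E-cycle idea concrete, here is a minimal toy model of a single cell: each program/erase cycle damages the tunnel oxide a little, so the stored charge leaks away faster, until the cell can no longer hold a readable level for a reasonable retention period. All of the constants are illustrative assumptions, not real device parameters.

    # Toy model of one flash cell: each program/erase (P/E) cycle damages the
    # tunnel oxide slightly, which makes the stored charge leak away faster.
    # All numbers are illustrative assumptions, not real device parameters.

    OXIDE_DAMAGE_PER_CYCLE = 0.0005   # assumed fraction of retention lost per P/E cycle
    RETENTION_THRESHOLD = 0.5         # below this charge fraction, the bit reads back wrong

    def charge_after_retention(pe_cycles, retention_years=1.0):
        """Fraction of the programmed charge left after `retention_years` of storage."""
        leak_rate = OXIDE_DAMAGE_PER_CYCLE * pe_cycles   # more wear -> faster leakage
        return max(0.0, 1.0 - leak_rate * retention_years)

    def endurance():
        """P/E cycles until the cell can no longer hold data for a year."""
        cycles = 0
        while charge_after_retention(cycles) >= RETENTION_THRESHOLD:
            cycles += 1
        return cycles

    print("toy cell endurance:", endurance(), "P/E cycles")

In this toy model the cell gives up after roughly a thousand cycles, which is the same order of magnitude as the ratings quoted for dense consumer NAND further down the page.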


























  • The limitation of flash write cycles is not specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance – JDługosz, Aug 1 '16 at 16:16






  • @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology. – Ben Voigt, Aug 1 '16 at 22:27






  • The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper. – JDługosz, Aug 2 '16 at 1:15






  • Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electrically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored. – Jules, Aug 2 '16 at 7:44






  • At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it. – nigel222, Aug 2 '16 at 16:54































Imagine a piece of regular paper and pencil. Now feel free to write and erase as many times as you please in one spot on the paper. How long does it take before you make it through the paper?



SSDs and USB flash drives have this basic concept but at the electron level.























  • I like the analogy, but this answer could use some facts to explain what is actually happening. – GolezTrol, Aug 1 '16 at 21:07






  • It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles. – Ben Voigt, Aug 1 '16 at 22:31






  • @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...) – Bob, Aug 2 '16 at 4:38






  • OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times. – David Richerby, Aug 3 '16 at 0:30








  • @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works. – David Richerby, Aug 3 '16 at 10:25































The problem is that the NAND flash substrate used suffers degradation on each erase. The erase process involves hitting the flash cell with a relatively large charge of electrical energy, which causes the semiconductor layer on the chip itself to degrade slightly.

In the long run, this damage increases bit-error rates. At first these errors can be corrected with software, but eventually the error-correction code (ECC) routines in the flash controller can't keep up and the flash cell becomes unreliable.
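
As a sketch of that trade-off: the controller keeps correcting raw bit errors until a block's error count exceeds what the ECC can handle, and then retires the block. The ECC strength and the error-growth curve below are made-up numbers, not any particular controller's behaviour.

    # Hedged sketch: retire a block once its raw bit errors exceed the ECC's
    # correction capability. The ECC strength and the error-growth model are
    # invented for illustration only.

    ECC_CORRECTABLE_BITS = 40                 # assumed correctable bits per page

    def raw_bit_errors(pe_cycles):
        """Assumed raw bit errors per page as wear accumulates (toy curve)."""
        return int(0.001 * pe_cycles ** 1.5)

    def block_is_usable(pe_cycles):
        return raw_bit_errors(pe_cycles) <= ECC_CORRECTABLE_BITS

    for cycles in (100, 1000, 3000, 5000):
        status = "OK" if block_is_usable(cycles) else "retire block"
        print(f"{cycles:>5} P/E cycles -> ~{raw_bit_errors(cycles)} raw bit errors: {status}")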

























  • The limitation of flash write cycles is not specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance – JDługosz, Aug 1 '16 at 16:16










  • @JDługosz - while this is true, the fact that NOR flash can be erased & rewritten on a per-word rather than per-block basis means that the degradation will be slower in many cases, so is qualitatively different, even if the mechanism is the same. – Jules, Aug 2 '16 at 7:46










  • It's an important point that it's erase cycles that cause wear, and not write cycles. It's possible to take advantage of this to write several times to a region before erasing if you know your changes are cumulative (e.g. a bitmap of 'in-use' sectors can accumulate many writes before it needs to be reset). – Toby Speight, Aug 2 '16 at 10:07










  • Example: the Empeg (later Rio) car MP3 player stores settings in a fixed-length slot; many of these fit in an erase block. When reading, it just picks up the latest one that has a valid checksum. The block only needs to be erased when every slot within the erase-block has been used, rather than every time the settings are written. – Toby Speight, Aug 2 '16 at 10:09
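
A generic sketch of that slot trick follows; the actual Empeg/Rio layout isn't documented here, so the slot size, block size and checksum framing are assumptions. New settings are appended to the next blank slot, reads return the newest slot with a valid checksum, and the block is erased only once every slot has been consumed.

    # Sketch of "many writes per erase": append settings into fixed-size slots
    # inside one erase block; erase only when all slots are used. Slot/block
    # sizes and the CRC framing are assumptions for illustration.
    import zlib

    SLOT_SIZE = 64
    SLOTS_PER_BLOCK = 16
    ERASED = b"\xff" * SLOT_SIZE              # erased flash reads back as all ones

    class SettingsBlock:
        def __init__(self):
            self.slots = [ERASED] * SLOTS_PER_BLOCK
            self.erase_count = 0

        def save(self, payload):
            record = payload.ljust(SLOT_SIZE - 4, b"\x00")
            record += zlib.crc32(record).to_bytes(4, "little")   # checksum marks a valid slot
            for i, slot in enumerate(self.slots):
                if slot == ERASED:                               # use the next blank slot
                    self.slots[i] = record
                    return
            self.erase_count += 1                                # block full: one erase, then reuse
            self.slots = [ERASED] * SLOTS_PER_BLOCK
            self.slots[0] = record

        def load(self):
            for slot in reversed(self.slots):                    # newest valid slot wins
                if slot != ERASED and zlib.crc32(slot[:-4]).to_bytes(4, "little") == slot[-4:]:
                    return slot[:SLOT_SIZE - 4].rstrip(b"\x00")
            return None

    blk = SettingsBlock()
    for volume in range(40):                                     # 40 saves cost only 2 erases
        blk.save(b"volume=%d" % volume)
    print(blk.load(), "erase count:", blk.erase_count)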

































My answer is taken from people with more knowledge than me!



SSDs use what is called flash memory. A physical process occurs when data is written to a cell (electrons move in and out). When this happens, it erodes the physical structure. This process is pretty much like water erosion; eventually it's too much and the wall gives way. When this happens the cell is rendered useless.

Another failure mode is that these electrons can get "stuck", making it harder for the cell to be read correctly. The analogy for this is a lot of people talking at the same time: it's hard to hear anyone. You may pick out one voice, but it may be the wrong one!

SSDs try to spread the load evenly between their in-use cells so that they wear down evenly. Eventually a cell will die and be marked as unavailable. SSDs have an area of "overprovisioned" cells, i.e. spare cells (think substitutes in sport). When a cell dies, one of these is used instead. Eventually all these extra cells are used up as well and the SSD will slowly become unreadable.



Hopefully that was a consumer friendly answer!



Edit: Source Here
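
To illustrate the load-spreading and spare-cell ideas above, here is a minimal toy controller; real SSD firmware is far more sophisticated, and the block counts and cycle limit are assumptions. Every write goes to the least-worn live block, and a block that reaches its cycle limit is retired while the overprovisioned spares keep the drive usable.

    # Toy wear leveling with an overprovisioned spare pool. The block counts
    # and the 3,000-cycle limit are assumptions for illustration.

    CYCLE_LIMIT = 3000

    class ToySSD:
        def __init__(self, visible_blocks=8, spare_blocks=2):
            self.wear = [0] * (visible_blocks + spare_blocks)   # erase count per physical block
            self.dead = set()

        def write(self):
            live = [i for i in range(len(self.wear)) if i not in self.dead]
            if not live:
                raise RuntimeError("drive worn out: no usable blocks left")
            target = min(live, key=lambda i: self.wear[i])      # pick the least-worn block
            self.wear[target] += 1
            if self.wear[target] >= CYCLE_LIMIT:
                self.dead.add(target)                           # retire it; a spare takes over
            return target

    ssd = ToySSD()
    for _ in range(30000):      # 10 blocks x 3,000 cycles; only 24,000 writes without the 2 spares
        ssd.write()
    print("erase counts:", ssd.wear, "retired blocks:", sorted(ssd.dead))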

















































Nearly all consumer SSDs use a memory technology called NAND flash memory. The write endurance limit is due to the way flash memory works.

Put simply, flash memory operates by storing electrons inside an insulating barrier. Reading a flash memory cell involves checking its charge level, so to retain stored data, the electron charge must remain stable over time. To increase storage density and reduce cost, most SSDs use flash memory that distinguishes between not just two possible charge levels (one bit per cell, SLC), but four (two bits per cell, MLC), eight (three bits per cell, TLC), or even 16 (four bits per cell, QLC).

Writing to flash memory requires driving an elevated voltage to move electrons through the insulator, a process which gradually wears it down. As the insulation wears down, the cell is less able to keep its electron charge stable, eventually causing the cell to fail to retain data. With TLC and particularly QLC NAND, the cells are particularly sensitive to this charge drifting due to the need to distinguish among more levels to store multiple bits of data.

To further increase storage density and reduce cost, the process used to manufacture flash memory has been scaled down dramatically, to as small as 15nm today—and smaller cells wear down faster. For planar NAND flash (not 3D NAND), this means that while SLC NAND can last tens or even hundreds of thousands of write cycles, MLC NAND is typically good for only about 3,000 cycles and TLC a mere 750 to 1,500 cycles.

3D NAND, which stacks NAND cells one on top of another, can achieve higher storage density without having to shrink the cells as small, which enables higher write endurance. While Samsung has gone back to a 40nm process for its 3D NAND, other flash memory manufacturers such as Micron have decided to use small processes anyway (though not quite as small as planar NAND) to deliver maximum storage density and minimum cost. Typical endurance ratings for 3D TLC NAND are about 2,000 to 3,000 cycles, but can be higher in enterprise-class devices. 3D QLC NAND is typically rated for about 1,000 cycles.

An emerging memory technology called 3D XPoint, developed by Intel and Micron, uses a completely different approach to storing data which is not subject to the endurance limitations of flash memory. 3D XPoint is also vastly faster than flash memory, fast enough to potentially replace DRAM as system memory. Intel will sell devices using 3D XPoint technology under the Optane brand, while Micron will market 3D XPoint devices under the QuantX brand. Consumer SSDs with this technology may hit the market as soon as 2017, although it is my belief that for cost reasons, 3D NAND (primarily of the TLC variety) will be the dominant form of mass storage for the next several years.
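
The cycle ratings above translate into a rough "total bytes written" budget: capacity × rated P/E cycles ÷ write amplification. The write-amplification factor of 3 and the example drives below are assumptions, purely to show the arithmetic.

    # Back-of-envelope endurance: host TB writable ~= capacity * P/E cycles / write amplification.
    # The write-amplification factor and the example drives are assumptions.

    def lifetime_host_writes_tb(capacity_gb, pe_cycles, write_amplification=3.0):
        return capacity_gb * pe_cycles / write_amplification / 1000

    examples = [
        ("250 GB planar TLC, ~1,000 cycles", 250, 1000),
        ("250 GB 3D TLC, ~3,000 cycles", 250, 3000),
        ("1 TB 3D TLC, ~3,000 cycles", 1000, 3000),
    ]
    for name, gb, cycles in examples:
        print(f"{name}: ~{lifetime_host_writes_tb(gb, cycles):,.0f} TB of host writes")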

















































A flash cell stores static electricity. It's exactly the same kind of charge that you can store on an inflated balloon: you place a few extra electrons on it.

What's special about static electricity is that it stays in place. Normally in electronics, everything is connected to everything else in some way with conductors, and if there's so much as a large resistor between a balloon and ground, the charge will vanish pretty quickly. The reason that a balloon stays charged is that air is actually an insulator: it has infinite resistivity.

Normally, that is. Since all matter consists of electrons and atomic cores, you can make anything a conductor: just apply enough energy, and some of the electrons will shake loose and be (for a short while) free to move closer to the balloon, or further from it. This actually happens in air with static electricity: we know this process as lightning!

I don't have to emphasise that lightning is a rather violent process. These electrons are a crucial part of the chemical structure of matter. In the case of air, lightning leaves a bit of the oxygen and nitrogen transformed to ozone and nitrogen dioxide. Only because the air keeps moving and mingling, and those substances eventually react back to oxygen and nitrogen, is no "persistent harm" done, and the air remains an insulator.

Not so in the case of a flash cell: here, the insulator must be way more compact. This is only feasible with solid-state oxide layers. Sturdy stuff, but even they aren't impervious to the effects of forcing charge through them. And that's what eventually wrecks a flash cell, if you change its state too often.

By contrast, a DRAM cell doesn't have proper insulators in it. That's why it needs to be periodically refreshed, many times a second, to not lose information; however, because it's all just ordinary conductive charge transport, nothing much bad usually happens if you change the state of a RAM cell. Therefore, RAM endures many more read/write cycles than flash does.


Or, for a positive charge, you remove some electrons from the molecule bonds. You need to take so few that this doesn't affect the chemical structure in a detectable way.

These static charges are actually tiny. Even the smallest watch battery that lasts for years supplies enough charge every second to charge hundreds of balloons! It just doesn't have nearly enough voltage to punch through any noteworthy potential barrier.

At least, all matter on earth... let's not complicate things by going to neutron stars.















































Less technical, and an answer to what I believe the OP means by "I often see people mention that SSDs have a limited amount of writes in their sectors before they go bad, especially compared to classic rotating disk hard drives, where most drives fail due to mechanical failure, not sectors going bad."

I'll interpret the OP's question as, "Since SSDs fail far more often than spinning rust, how can using one give reasonable reliability?"

There are two types of reliability and failure. One is that the device fails completely due to age, quality, abuse, etc. The other is a sector error due to lots of read/write cycles.

Sector errors happen on all media. The drive controller (SSD or spinning) will re-map a failing sector's data to a new sector. If it has failed completely, then it may still remap, but the data is lost. In an SSD the sector is large and often fails completely.

SSDs can have one or both types of reliability problems. Read/write cycle issues can be helped with:

  • Having a larger drive. If you have a small drive and use it for an OS like Windows, then it will get a lot of read/write cycles. The same OS on a much, much larger capacity drive will have fewer cycles. So even a drive with "only" a few thousand cycles might not be a problem if each sector isn't erased frequently (a rough back-of-envelope sketch follows this list).

  • Balancing data - SSDs will move data from frequently used sectors to less frequently used ones. Think about the OS again, and updates, vs. a photo you took and just want to keep. At some point the SSD might swap the physical locations of the photo and an OS file to balance out the cycles.

  • Compression - compressing data takes less space, thus less writing.
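
A rough back-of-envelope for the "larger drive" point, assuming an ordinary desktop workload of about 30 GB of writes per day and ideal wear leveling (both assumptions):

    # Same daily write volume spread over more capacity means fewer P/E cycles
    # per cell per year. 30 GB/day and perfect wear leveling are assumptions.

    DAILY_HOST_WRITES_GB = 30

    def cycles_per_year(capacity_gb):
        return DAILY_HOST_WRITES_GB * 365 / capacity_gb

    for capacity_gb in (120, 500, 2000):
        print(f"{capacity_gb:>4} GB drive: ~{cycles_per_year(capacity_gb):.0f} P/E cycles per year")

Even at only a couple of thousand rated cycles, the larger drives in this example would take decades to wear out at that rate.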



Then there is quality of components. Getting the cheapest SSD or USB you can find might work for a while, but a quality one made for enterprise use will last a lot longer, not just in erase cycles but in total use.

As drives get larger and larger (like 100-1000 GB), erase cycles become less of an issue even though each cell can sustain fewer writes. Some drives will use DRAM as a cache to help lower write cycles. Some will use a high-quality segment of the SSD for cache and lower-quality flash for low cost and large size.

Modern good-quality consumer SSDs can last a good long time in a consumer machine. I have some that are 5+ years old and still work. I also have a couple of cheap, new ones that failed after a few months. Sometimes it is just (bad) luck.



























  • A couple of minor points to consider clarifying: 1) Sector size in 3rd paragraph: in either media, it can be a very small area of actual failure. The drive works in fixed-size units so no matter how small the failure is, it still locks and maps based on the smallest unit it deals with. 2) Number of cycles vs. drive size in 4th paragraph: The number of cycles is the same regardless of drive size. You're talking about the potential need to reuse blocks more if the amount of data is large relative to the size of the drive. (cont'd) – fixer1234, Aug 4 '16 at 21:31










  • In general, your answer focuses more on how the limited writes are dealt with and how significant the issue is than the actual question of what causes the limited number of writes. – fixer1234, Aug 4 '16 at 21:32













        Your Answer








        StackExchange.ready(function() {
        var channelOptions = {
        tags: "".split(" "),
        id: "3"
        };
        initTagRenderer("".split(" "), "".split(" "), channelOptions);

        StackExchange.using("externalEditor", function() {
        // Have to fire editor after snippets, if snippets enabled
        if (StackExchange.settings.snippets.snippetsEnabled) {
        StackExchange.using("snippets", function() {
        createEditor();
        });
        }
        else {
        createEditor();
        }
        });

        function createEditor() {
        StackExchange.prepareEditor({
        heartbeatType: 'answer',
        convertImagesToLinks: true,
        noModals: true,
        showLowRepImageUploadWarning: true,
        reputationToPostImages: 10,
        bindNavPrevention: true,
        postfix: "",
        imageUploader: {
        brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
        contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
        allowUrls: true
        },
        onDemand: true,
        discardSelector: ".discard-answer"
        ,immediatelyShowMarkdownHelp:true
        });


        }
        });














        draft saved

        draft discarded


















        StackExchange.ready(
        function () {
        StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fsuperuser.com%2fquestions%2f1107320%2fwhy-do-ssd-sectors-have-limited-write-endurance%23new-answer', 'question_page');
        }
        );

        Post as a guest















        Required, but never shown

























        7 Answers
        7






        active

        oldest

        votes








        7 Answers
        7






        active

        oldest

        votes









        active

        oldest

        votes






        active

        oldest

        votes








        up vote
        82
        down vote



        accepted










        Copied from "Why Flash Wears Out and How to Make it Last Longer
        ":




        NAND flash stores the information by controlling the amount of
        electrons in a region called a “floating gate”. These electrons change
        the conductive properties of the memory cell (the gate voltage needed
        to turn the cell on and off), which in turn is used to store one or
        more bits of data in the cell. This is why the ability of the floating
        gate to hold a charge is critical to the cell’s ability to reliably
        store data.



        Write and Erase Processes Cause Wear



        When written to and erased during the normal course of use, the oxide
        layer separating the floating gate from the substrate degrades,
        reducing its ability to hold a charge for an extended period of time.
        Each solid-state storage device can sustain a finite amount of
        degradation before it becomes unreliable, meaning it may still
        function but not consistently. The number of writes and erasures (P/E
        cycles) a NAND device can sustain while still maintaining a
        consistent, predictable output, defines its endurance.







        share|improve this answer



















        • 8




          The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
          – JDługosz
          Aug 1 '16 at 16:16






        • 1




          @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
          – Ben Voigt
          Aug 1 '16 at 22:27






        • 4




          The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
          – JDługosz
          Aug 2 '16 at 1:15






        • 2




          Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
          – Jules
          Aug 2 '16 at 7:44






        • 10




          At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
          – nigel222
          Aug 2 '16 at 16:54















        up vote
        82
        down vote



        accepted










        Copied from "Why Flash Wears Out and How to Make it Last Longer
        ":




        NAND flash stores the information by controlling the amount of
        electrons in a region called a “floating gate”. These electrons change
        the conductive properties of the memory cell (the gate voltage needed
        to turn the cell on and off), which in turn is used to store one or
        more bits of data in the cell. This is why the ability of the floating
        gate to hold a charge is critical to the cell’s ability to reliably
        store data.



        Write and Erase Processes Cause Wear



        When written to and erased during the normal course of use, the oxide
        layer separating the floating gate from the substrate degrades,
        reducing its ability to hold a charge for an extended period of time.
        Each solid-state storage device can sustain a finite amount of
        degradation before it becomes unreliable, meaning it may still
        function but not consistently. The number of writes and erasures (P/E
        cycles) a NAND device can sustain while still maintaining a
        consistent, predictable output, defines its endurance.







        share|improve this answer



















        • 8




          The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
          – JDługosz
          Aug 1 '16 at 16:16






        • 1




          @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
          – Ben Voigt
          Aug 1 '16 at 22:27






        • 4




          The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
          – JDługosz
          Aug 2 '16 at 1:15






        • 2




          Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
          – Jules
          Aug 2 '16 at 7:44






        • 10




          At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
          – nigel222
          Aug 2 '16 at 16:54













        up vote
        82
        down vote



        accepted







        up vote
        82
        down vote



        accepted






        Copied from "Why Flash Wears Out and How to Make it Last Longer
        ":




        NAND flash stores the information by controlling the amount of
        electrons in a region called a “floating gate”. These electrons change
        the conductive properties of the memory cell (the gate voltage needed
        to turn the cell on and off), which in turn is used to store one or
        more bits of data in the cell. This is why the ability of the floating
        gate to hold a charge is critical to the cell’s ability to reliably
        store data.



        Write and Erase Processes Cause Wear



        When written to and erased during the normal course of use, the oxide
        layer separating the floating gate from the substrate degrades,
        reducing its ability to hold a charge for an extended period of time.
        Each solid-state storage device can sustain a finite amount of
        degradation before it becomes unreliable, meaning it may still
        function but not consistently. The number of writes and erasures (P/E
        cycles) a NAND device can sustain while still maintaining a
        consistent, predictable output, defines its endurance.







        share|improve this answer














        Copied from "Why Flash Wears Out and How to Make it Last Longer
        ":




        NAND flash stores the information by controlling the amount of
        electrons in a region called a “floating gate”. These electrons change
        the conductive properties of the memory cell (the gate voltage needed
        to turn the cell on and off), which in turn is used to store one or
        more bits of data in the cell. This is why the ability of the floating
        gate to hold a charge is critical to the cell’s ability to reliably
        store data.



        Write and Erase Processes Cause Wear



        When written to and erased during the normal course of use, the oxide
        layer separating the floating gate from the substrate degrades,
        reducing its ability to hold a charge for an extended period of time.
        Each solid-state storage device can sustain a finite amount of
        degradation before it becomes unreliable, meaning it may still
        function but not consistently. The number of writes and erasures (P/E
        cycles) a NAND device can sustain while still maintaining a
        consistent, predictable output, defines its endurance.








        share|improve this answer














        share|improve this answer



        share|improve this answer








        edited Aug 3 '16 at 1:48









        Remy Lebeau

        24313




        24313










        answered Aug 1 '16 at 9:51









        Kinnectus

        8,82921730




        8,82921730








        • 8




          The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
          – JDługosz
          Aug 1 '16 at 16:16






        • 1




          @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
          – Ben Voigt
          Aug 1 '16 at 22:27






        • 4




          The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
          – JDługosz
          Aug 2 '16 at 1:15






        • 2




          Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
          – Jules
          Aug 2 '16 at 7:44






        • 10




          At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
          – nigel222
          Aug 2 '16 at 16:54














        • 8




          The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
          – JDługosz
          Aug 1 '16 at 16:16






        • 1




          @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
          – Ben Voigt
          Aug 1 '16 at 22:27






        • 4




          The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
          – JDługosz
          Aug 2 '16 at 1:15






        • 2




          Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
          – Jules
          Aug 2 '16 at 7:44






        • 10




          At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
          – nigel222
          Aug 2 '16 at 16:54








        8




        8




        The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
        – JDługosz
        Aug 1 '16 at 16:16




        The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
        – JDługosz
        Aug 1 '16 at 16:16




        1




        1




        @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
        – Ben Voigt
        Aug 1 '16 at 22:27




        @JDługosz: Flash memory in general has limited write cycles, but the actual mechanism causing the limitation varies with technology.
        – Ben Voigt
        Aug 1 '16 at 22:27




        4




        4




        The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
        – JDługosz
        Aug 2 '16 at 1:15




        The link I posted describes the NOR as being “floating gate” as well. It seems that the actual flash cell is the same, and NAND just refers to the way they are connected in series (thus resembling a NAND gate). The addressing logic and multiplexing details are irrelevant to the wear mechanics of the flash proper.
        – JDługosz
        Aug 2 '16 at 1:15




        2




        2




        Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
        – Jules
        Aug 2 '16 at 7:44




        Indeed -- all flash stores information as charge in a floating gate, that is basically the definition of what flash is; there are other kinds of Electronically Erasable Programmable Read Only Memory than flash, and they have different methods of degradation, but flash is defined as an EEPROM that stores information in a floating gate charge. NAND vs NOR defines the mechanism for how the data is read or written, not how it is stored.
        – Jules
        Aug 2 '16 at 7:44




        10




        10




        At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
        – nigel222
        Aug 2 '16 at 16:54




        At simplest, the physics is that you are forcing electrons through a (very thin) insulator by applying a high voltage. Occasionally this will cause bonds between atoms to break and re-form in different arrangements, which will degrade the insulation. Eventually the memory cell becomes leaky or shorts out and it can then no longer reliably store data. The wiki is interesting: en.wikipedia.org/wiki/Flash_memory#Memory_wear. It is possible to do an erase-and-repair cycle on a relatively large chunk of the chip by heating (annealing) it.
        – nigel222
        Aug 2 '16 at 16:54












        up vote
        64
        down vote













        Imagine a piece of regular paper and pencil. Now feel free to write and erase as many times as you please in one spot on the paper. How long does it take before you make it through the paper?



        SSDs and USB flash drives have this basic concept but at the electron level.






        share|improve this answer

















        • 35




          I like the analogy, but this answer could use some facts to explain what is actually happening.
          – GolezTrol
          Aug 1 '16 at 21:07






        • 11




          It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
          – Ben Voigt
          Aug 1 '16 at 22:31






        • 28




          @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
          – Bob
          Aug 2 '16 at 4:38






        • 8




          OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
          – David Richerby
          Aug 3 '16 at 0:30








        • 5




          @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
          – David Richerby
          Aug 3 '16 at 10:25















        up vote
        64
        down vote













        Imagine a piece of regular paper and pencil. Now feel free to write and erase as many times as you please in one spot on the paper. How long does it take before you make it through the paper?



        SSDs and USB flash drives have this basic concept but at the electron level.






        share|improve this answer

















        • 35




          I like the analogy, but this answer could use some facts to explain what is actually happening.
          – GolezTrol
          Aug 1 '16 at 21:07






        • 11




          It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
          – Ben Voigt
          Aug 1 '16 at 22:31






        • 28




          @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
          – Bob
          Aug 2 '16 at 4:38






        • 8




          OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
          – David Richerby
          Aug 3 '16 at 0:30








        • 5




          @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
          – David Richerby
          Aug 3 '16 at 10:25













        up vote
        64
        down vote










        up vote
        64
        down vote









        Imagine a piece of regular paper and pencil. Now feel free to write and erase as many times as you please in one spot on the paper. How long does it take before you make it through the paper?



        SSDs and USB flash drives have this basic concept but at the electron level.






        share|improve this answer












        Imagine a piece of regular paper and pencil. Now feel free to write and erase as many times as you please in one spot on the paper. How long does it take before you make it through the paper?



        SSDs and USB flash drives have this basic concept but at the electron level.







        share|improve this answer












        share|improve this answer



        share|improve this answer










        answered Aug 1 '16 at 13:16









        MonkeyZeus

        5,19231634




        5,19231634








        • 35




          I like the analogy, but this answer could use some facts to explain what is actually happening.
          – GolezTrol
          Aug 1 '16 at 21:07






        • 11




          It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
          – Ben Voigt
          Aug 1 '16 at 22:31






        • 28




          @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
          – Bob
          Aug 2 '16 at 4:38






        • 8




          OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
          – David Richerby
          Aug 3 '16 at 0:30








        • 5




          @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
          – David Richerby
          Aug 3 '16 at 10:25














        • 35




          I like the analogy, but this answer could use some facts to explain what is actually happening.
          – GolezTrol
          Aug 1 '16 at 21:07






        • 11




          It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
          – Ben Voigt
          Aug 1 '16 at 22:31






        • 28




          @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
          – Bob
          Aug 2 '16 at 4:38






        • 8




          OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
          – David Richerby
          Aug 3 '16 at 0:30








        • 5




          @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
          – David Richerby
          Aug 3 '16 at 10:25








        35




        35




        I like the analogy, but this answer could use some facts to explain what is actually happening.
        – GolezTrol
        Aug 1 '16 at 21:07




        I like the analogy, but this answer could use some facts to explain what is actually happening.
        – GolezTrol
        Aug 1 '16 at 21:07




        11




        11




        It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
        – Ben Voigt
        Aug 1 '16 at 22:31




        It doesn't help that the same analogy is used for DRAM, which has many orders of magnitude higher limit on write cycles.
        – Ben Voigt
        Aug 1 '16 at 22:31




        28




        28




        @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
        – Bob
        Aug 2 '16 at 4:38




        @BenVoigt Ok: DRAM is pencil + rubber eraser, flash is ink + ink eraser. The ink is more permanent, at the cost of the removal causing more damage. (Hey, that actually works pretty well for an analogy...)
        – Bob
        Aug 2 '16 at 4:38




        8




        8




        OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
        – David Richerby
        Aug 3 '16 at 0:30






        OK, great. I'm imagining a piece of paper and a pencil. But a flash memory is nothing like a piece of paper and a pencil, so how does that help? You might as well say, "Imagine your car. If you drive it enough, the engine will stop working." Simply giving another example of something that breaks after being used many times doesn't explain why this particular system breaks after being used many times.
        – David Richerby
        Aug 3 '16 at 0:30






        5




        5




        @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
        – David Richerby
        Aug 3 '16 at 10:25




        @Sahuagin But why is it like that? Why isn't it like a water bottle which I can fill and empty as many times as I want without any measurable erosion of the bottle? That's the problem with this analogy: it asks me to believe that a memory is like some other system but the only link between the two systems is the claim that the analogy works.
        – David Richerby
        Aug 3 '16 at 10:25










        up vote
        25
        down vote













        The problem is that the NAND flash substrate used suffers degradation on each erase. The erase process involves hitting the flash cell with a relatively large charge of electrical energy, this causes the semiconductor layer on the chip itself to degrade slightly.



        This damage on the long run, increase bit-error rates that can be corrected with software, but eventually the error correction code routines in the flash controller can't keep up with these errors and the flash cell becomes unreliable.






        share|improve this answer



















        • 1




          The limitation of flash write cycles is ot specific to NAND-type but is true for flash memory in general. E.g. en.wikipedia.org/wiki/Flash_memory#Write_endurance
          – JDługosz
          Aug 1 '16 at 16:16










        • @JDługosz - while this is true, the fact that NOR flash can be erased & rewritten on a per-word rather than per-block basis means that the degradation will be slower in many cases, so is qualitively different, even if the mechanism is the same.
          – Jules
          Aug 2 '16 at 7:46










        • It's an important point that it's erase cycles that cause wear, and not write cycles. It's possible to take advantage of this to write several times to a region before erasing if you know your changes are cumulative (e.g. a bitmap of 'in-use' sectors can accumulate many writes before it needs to be reset).
          – Toby Speight
          Aug 2 '16 at 10:07










        • Example: the Empeg (later Rio) car MP3 player stores settings in a fixed-length slot; many of these fit in an erase block. When reading, it just picks up the latest one that has a valid checksum. The block only needs to be erased when every slot within the erase-block has been used, rather than every time the settings are written.
          – Toby Speight
          Aug 2 '16 at 10:09
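
          A minimal Python sketch of that slot technique (the slot size, block size and checksum are made up for illustration; real firmware differs): settings are appended to the next erased slot, reads pick the newest record with a valid checksum, and the block is erased only when every slot has been used.

          import zlib

          SLOT_SIZE = 64                    # hypothetical fixed-size settings slot
          BLOCK_SLOTS = 16                  # slots per erase block
          ERASED = b"\xff" * SLOT_SIZE

          block = [ERASED] * BLOCK_SLOTS    # simulated erase block: all 0xFF after erase
          erase_count = 0

          def write_settings(payload):
              # Append settings to the next free slot; erase only when the block is full.
              global block, erase_count
              record = payload.ljust(SLOT_SIZE - 4, b"\x00")
              record += zlib.crc32(record).to_bytes(4, "big")    # checksum at the end
              for i, slot in enumerate(block):
                  if slot == ERASED:        # free slot: can be programmed without erasing
                      block[i] = record
                      return
              block = [ERASED] * BLOCK_SLOTS   # block full: one erase, then reuse slot 0
              erase_count += 1
              block[0] = record

          def read_settings():
              # Return the newest record whose checksum verifies.
              for slot in reversed(block):
                  if slot != ERASED and zlib.crc32(slot[:-4]).to_bytes(4, "big") == slot[-4:]:
                      return slot[:-4]
              return b""

          for n in range(20):               # 20 settings writes cost only 1 block erase
              write_settings(b"volume=%d" % n)
          print(read_settings()[:9], "erases:", erase_count)   # b'volume=19' erases: 1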

















        edited Aug 1 '16 at 10:39

























        answered Aug 1 '16 at 9:51









        jcbermu

        15.5k24354












        up vote
        11
        down vote













        My answer is taken from people with more knowledge than me!



        SSDs use what is called flash memory. A physical process occurs when data is written to a cell (electrons move in and out). When this happens it erodes the physical structure. This process is pretty much like water erosion; eventually it's too much and the wall gives way. When this happens the cell is rendered useless.



        Another failure mode is that these electrons can get "stuck," making it harder for the cell to be read correctly. The analogy for this is a lot of people talking at the same time, and it's hard to hear anyone. You may pick out one voice, but it may be the wrong one!



        SSDs try to spread the load evenly between their in-use cells so that they wear down evenly. Eventually a cell will die and be marked as unavailable. SSDs have an area of "overprovisioned" cells, i.e. spare cells (think substitutes in sport). When a cell dies, one of these is used instead. Eventually all these extra cells are used up as well and the SSD will slowly become unreadable.
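
        A toy Python sketch of the wear-levelling and spare-cell ideas described above (purely illustrative, with made-up sizes and limits; real controllers are far more sophisticated):

        # Toy wear levelling: always erase the least-worn healthy block, and
        # substitute a spare (overprovisioned) block when one wears out.

        WEAR_LIMIT = 3000   # hypothetical erase-cycle rating per block

        class ToySSD:
            def __init__(self, data_blocks, spare_blocks):
                self.erase_counts = {b: 0 for b in range(data_blocks + spare_blocks)}
                self.active = set(range(data_blocks))                    # blocks in use
                self.spares = set(range(data_blocks, data_blocks + spare_blocks))

            def write(self):
                # Pick the least-worn active block so wear spreads evenly.
                block = min(self.active, key=lambda b: self.erase_counts[b])
                self.erase_counts[block] += 1
                if self.erase_counts[block] >= WEAR_LIMIT:
                    self.active.discard(block)                           # block is worn out
                    if self.spares:
                        self.active.add(self.spares.pop())               # bring on a substitute
                    elif not self.active:
                        raise RuntimeError("drive worn out: no usable blocks left")
                return block

        ssd = ToySSD(data_blocks=4, spare_blocks=2)
        print([ssd.write() for _ in range(8)])   # erases rotate across blocks 0..3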



        Hopefully that was a consumer-friendly answer!



        Edit: Source Here






            edited Aug 1 '16 at 17:20

























            answered Aug 1 '16 at 9:54









            Lister

            1,088419


























                up vote
                10
                down vote













                Nearly all consumer SSDs use a memory technology called NAND flash memory. The write endurance limit is due to the way flash memory works.



                Put simply, flash memory operates by storing electrons inside an insulating barrier. Reading a flash memory cell involves checking its charge level, so to retain stored data, the electron charge must remain stable over time. To increase storage density and reduce cost, most SSDs use flash memory that distinguishes between not just two possible charge levels (one bit per cell, SLC), but four (two bits per cell, MLC), eight (three bits per cell, TLC), or even 16 (four bits per cell, QLC).
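
                To make the level counting concrete, a cell that stores n bits must distinguish 2^n charge levels, so the voltage window between adjacent levels shrinks quickly. A tiny Python sketch (normalised window, ignoring real-world guard bands and reference voltages):

                # Charge levels needed for n bits per cell, and the relative spacing
                # between adjacent levels if the whole usable window is 1.0.

                for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
                    levels = 2 ** bits
                    spacing = 1.0 / (levels - 1)   # gap between adjacent target levels
                    print(name, bits, "bit(s) ->", levels, "levels, relative spacing %.2f" % spacing)

                # SLC 1 bit(s) -> 2 levels, relative spacing 1.00
                # MLC 2 bit(s) -> 4 levels, relative spacing 0.33
                # TLC 3 bit(s) -> 8 levels, relative spacing 0.14
                # QLC 4 bit(s) -> 16 levels, relative spacing 0.07

                The smaller that spacing, the less charge drift a worn cell can tolerate before a read lands on the wrong level, which is why QLC tolerates far fewer erase cycles than SLC.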



                Writing to flash memory requires driving an elevated voltage to move electrons through the insulator, a process which gradually wears it down. As the insulation wears down, the cell is less able to keep its electron charge stable, eventually causing the cell to fail to retain data. With TLC and particularly QLC NAND, the cells are particularly sensitive to this charge drifting due to the need to distinguish among more levels to store multiple bits of data.



                To further increase storage density and reduce cost, the process used to manufacture flash memory has been scaled down dramatically, to as small as 15nm today—and smaller cells wear down faster. For planar NAND flash (not 3D NAND), this means that while SLC NAND can last tens or even hundreds of thousands of write cycles, MLC NAND is typically good for only about 3,000 cycles and TLC a mere 750 to 1,500 cycles.



                3D NAND, which stacks NAND cells one on top of another, can achieve higher storage density without having to shrink the cells as small, which enables higher write endurance. While Samsung has gone back to a 40nm process for its 3D NAND, other flash memory manufacturers such as Micron have decided to use small processes anyway (though not quite as small as planar NAND) to deliver maximum storage density and minimum cost. Typical endurance ratings for 3D TLC NAND are about 2,000 to 3,000 cycles, but can be higher in enterprise-class devices. 3D QLC NAND is typically rated for about 1,000 cycles.
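
                Those cycle ratings translate into drive lifetime roughly as follows. A back-of-the-envelope Python sketch; the capacity, daily write volume and write-amplification factor are assumed example values, not specifications:

                # Rough endurance estimate: host data a drive can absorb before its
                # cells reach their rated erase-cycle limit.

                capacity_gb = 250            # example drive capacity
                pe_cycles = 1000             # rated program/erase cycles (a QLC-class figure)
                write_amplification = 2.0    # assumed: each host write causes ~2x flash writes

                tbw = capacity_gb * pe_cycles / write_amplification / 1000   # terabytes written
                gb_per_day = 20              # assumed daily host writes
                years = tbw * 1000 / gb_per_day / 365

                print("~%.0f TB of host writes, ~%.0f years at %d GB/day" % (tbw, years, gb_per_day))
                # ~125 TB of host writes, ~17 years at 20 GB/day

                Under assumptions like these, even low-cycle-count NAND can outlast a typical consumer workload, provided wear levelling spreads the writes evenly.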



                An emerging memory technology called 3D XPoint, developed by Intel and Micron, uses a completely different approach to storing data which is not subject to the endurance limitations of flash memory. 3D XPoint is also vastly faster than flash memory, fast enough to potentially replace DRAM as system memory. Intel will sell devices using 3D XPoint technology under the Optane brand, while Micron will market 3D XPoint devices under the QuantX brand. Consumer SSDs with this technology may hit the market as soon as 2017, although it is my belief that for cost reasons, 3D NAND (primarily of the TLC variety) will be the dominant form of mass storage for the next several years.






                    edited Nov 27 at 17:56

























                    answered Aug 2 '16 at 0:12









                    bwDraco

                    36.5k36135177


























                        up vote
                        5
                        down vote













                        A flash cell stores static electricity. It's exactly the same kind of charge that you can store on an inflated balloon: you place a few extra electrons on it.



                        What's special about static electricity is that it stays in place. Normally in electronics, everything is connected to everything else in some way with conductors, and even if there's a large resistor between a balloon and ground then the charge will vanish pretty quickly. The reason that a balloon stays charged is that air is actually an insulator: it has infinite resistivity.



                        Normally, that is. Since all matter consists of electrons and atomic cores, you can make anything a conductor: just apply enough energy, and some of the electrons will shake loose and be (for a short while) free to move closer to the balloon, or further from it. This actually happens in air with static electricity: we know this process as lightning!



                        I don't have to emphasise that lightning is a rather violent process. These electrons are a crucial part of the chemical structure of matter. In the case of air, lightning leaves a bit of the oxygen and nitrogen transformed into ozone and nitrogen dioxide. Only because the air keeps moving and mingling, and those substances eventually react back to oxygen and nitrogen, is there no "persistent harm" done, and the air is still an insulator.



                        Not so in the case of a flash cell: here, the insulator must be far more compact. This is only feasible with solid-state oxide layers. Sturdy stuff, but it too isn't impervious to the effects of forcing charge through it while it is made momentarily conductive. And that's what eventually wrecks a flash cell, if you change its state too often.



                        By contrast, a DRAM cell doesn't have proper insulators in it. That's why it needs to be periodically refreshed, many times a second, to not lose information; however, because it's all just ordinary conductive charge transports, nothing much bad usually happens if you change the state of a RAM cell. Therefore, RAM endures many more read/write cycles than flash does.





                        Or, for a positive charge, you remove some electrons from the molecule bonds. You need to take so few that this doesn't affect the chemical structure in a detectable way.



                        These static charges are actually tiny. Even the smallest watch battery that lasts for years supplies enough charge every second to charge hundreds of balloons! It just doesn't have nearly enough voltage to punch through any noteworthy potential barrier.



                        At least, all matter on earth... let's not complicate things by going to neutron stars.






                            answered Aug 3 '16 at 15:31









                            leftaroundabout

                            237112


























                                up vote
                                1
                                down vote













                                Less technical, and an answer to what I believe OP means by "I often see people mention that SSDs have a limited amount of writes in their sectors before they go bad, especially compared to classic rotating disk hard drives, where most drives fail due to mechanical failure, not sectors going bad."

                                I'll interpret the OP question as, "Since SSDs fail far more often than spinning rust, how can using one give a reasonable reliability?"



                                There are two types of reliability and failure. One is that the device fails completely due to age, quality, abuse, etc. The other is a sector error caused by lots of read/write cycles.



                                Sector errors happen on all media. The drive controller (SSD or spinning) will re-map a failing sector's data to a new sector. If the sector has failed completely, the controller may still remap it, but the data is lost. In an SSD the sector (erase block) is large and often fails completely.



                                SSDs can have one or both types of reliability issue. Read/write cycle issues can be helped by:

                                • Having a larger drive. If you have a small drive and use it for an OS like Windows, then it will get a lot of read/write cycles. The same OS on a much, much larger capacity drive will have fewer cycles per cell. So, even a drive with "only" a few thousand cycles might not be a problem if each sector isn't erased frequently (see the sketch after this list).

                                • Balancing data - SSDs will move data from frequently used sectors to less frequently used ones. Think about the OS again, and updates, vs. a photo you took and just want to keep. At some point the SSD might swap the physical locations of the photo and an OS file to balance out the cycles.

                                • Compression - compressing data takes less space, thus less writing.
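
                                To put numbers on the first point above, a quick Python sketch with an assumed workload, showing how drive size affects the average erase cycles each cell sees under ideal wear levelling:

                                # Assumed workload: the same daily write volume, spread evenly (by wear
                                # levelling) across drives of different sizes.

                                gb_written_per_day = 30            # assumed host writes per day
                                years = 5

                                for capacity_gb in (120, 500, 2000):
                                    cycles_per_cell = gb_written_per_day * 365 * years / capacity_gb
                                    print("%d GB drive: ~%.0f erase cycles per cell in %d years"
                                          % (capacity_gb, cycles_per_cell, years))

                                # 120 GB drive: ~456 erase cycles per cell in 5 years
                                # 500 GB drive: ~110 erase cycles per cell in 5 years
                                # 2000 GB drive: ~27 erase cycles per cell in 5 years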



                                Then there is quality of components. Getting the cheapest SSD or USB stick you can find might work for a while, but a quality one made for enterprise use will last a lot longer, not just in erase cycles but in total use.



                                As drives get larger and larger (like 100-1000 GB), erase cycles become less of an issue even though the cells can sustain fewer writes. Some drives will use DRAM as a cache to help lower write cycles. Some will use a high-quality segment of the SSD for cache and a lower-quality one for low cost and large size.
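
                                And a minimal Python sketch of the caching idea in the previous paragraph: small host writes are buffered in RAM and flushed to flash a whole block at a time, so many host writes cost only a few program operations (illustrative only; block size and workload are made up):

                                # Coalesce small host writes in a RAM buffer and flush them to flash in
                                # whole blocks, reducing program/erase operations on the flash.

                                BLOCK_SIZE = 4096

                                class WriteCache:
                                    def __init__(self):
                                        self.buffer = bytearray()
                                        self.flash_block_writes = 0   # how often the flash is actually written

                                    def host_write(self, data):
                                        self.buffer += data
                                        while len(self.buffer) >= BLOCK_SIZE:
                                            self._flush(bytes(self.buffer[:BLOCK_SIZE]))
                                            del self.buffer[:BLOCK_SIZE]

                                    def _flush(self, block):
                                        self.flash_block_writes += 1  # stand-in for the real flash program operation

                                cache = WriteCache()
                                for _ in range(1000):
                                    cache.host_write(b"x" * 64)       # 1000 small (64-byte) host writes...
                                print(cache.flash_block_writes)       # ...cost only 15 flash block writes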



                                Modern good-quality consumer SSDs can last a good long time in a consumer machine. I have some 5+ years old that still work. I also have a couple of cheap, new ones that failed after a few months. Sometimes it is just (bad) luck.



























                                • A couple of minor points to consider clarifying: 1) Sector size in 3rd paragraph: in either media, it can be a very small area of actual failure. The drive works in fixed-size units so no matter how small the failure is, it still locks and maps based on the smallest unit it deals with. 2) Number of cycles vs. drive size in 4th paragraph: The number of cycles is the same regardless of drive size. You're talking about the potential need to reuse blocks more if the amount of data is large relative to the size of the drive. (cont'd)
                                  – fixer1234
                                  Aug 4 '16 at 21:31










                                • In general, your answer focuses more on how the limited writes are dealt with and how significant the issue is than the actual question of what causes the limited number of writes.
                                  – fixer1234
                                  Aug 4 '16 at 21:32

















                                answered Aug 4 '16 at 20:31









                                MikeP

                                1213




                                draft saved

                                draft discarded



















































