What are the Memory Locations in Random Access Memory? [closed]












-1















What is the meaning of memory locations in RAM? I really do not understand the definition of the term "memory location" in RAM. Which English dictionary on Google did you use to find the meaning of the term "memory location" in Random Access Memory?























closed as unclear what you're asking by Daniel B, music2myear, Twisty Impersonator, Debra, Ramhound Feb 9 at 14:27


Please clarify your specific problem or add additional details to highlight exactly what you need. As it's currently written, it’s hard to tell exactly what you're asking. See the How to Ask page for help clarifying this question. If this question can be reworded to fit the rules in the help center, please edit the question.

























computer-architecture computer-science
















asked Feb 8 at 21:00
ALLAN KIZA
























2 Answers


















2














Memory locations - better known as addresses - are a complex topic, and a dictionary or encyclopedic definition alone won't be enough to convey their exact nature. I'll attempt to cover physical memory addresses, which these days differ from logical memory addresses thanks to a computer's MMU.



Fundamentally, computers make use of clever arrangements of Boolean logic gates (represented physically by nanoscopic transistors) to store tiny amounts of information. Elementary logic gates like AND, OR, and NOR are grouped together into what's called a latch, so named because it "latches" onto a given piece of data. This can be thought of as the lowest level of arrangement, and it can only remember a 1 or a 0, true or false. These are tiny circuits where "remembering" a 1 or 0 is represented by the presence or absence of current in the circuit, and the circuit is designed so that it can reliably preserve that current (or its absence). Other components are necessary to this arrangement, notably an input that marks the circuit as "write enabled".
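To make the latch idea concrete, here is a minimal C sketch (my own illustration, not circuitry described in the answer) of a set/reset latch built from two cross-coupled NOR gates; the feedback between the gates is what lets the stored bit persist once the inputs go idle:

    #include <stdio.h>
    #include <stdbool.h>

    /* A NOR-based SR latch: Q = NOR(Reset, Qbar), Qbar = NOR(Set, Q).
     * The struct holds the feedback state between calls. */
    typedef struct { bool q, q_bar; } sr_latch;

    void sr_update(sr_latch *l, bool set, bool reset) {
        /* Re-evaluate the two gates a few times until they settle
         * (set == reset == 1 is the "forbidden" input, not modeled). */
        for (int i = 0; i < 4; i++) {
            bool new_q     = !(reset || l->q_bar);
            bool new_q_bar = !(set   || l->q);
            l->q = new_q;
            l->q_bar = new_q_bar;
        }
    }

    int main(void) {
        sr_latch l = { false, true };
        sr_update(&l, true, false);   /* pulse Set: stores a 1   */
        sr_update(&l, false, false);  /* inputs idle: still a 1  */
        printf("stored bit: %d\n", l.q);
        sr_update(&l, false, true);   /* pulse Reset: stores a 0 */
        printf("stored bit: %d\n", l.q);
        return 0;
    }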



So now we can store 1 bit of memory - a 1 or 0, true or false - which by itself isn't very useful. For perspective, to store the number 5 in memory, you would need at least three of these components working together to store 101 (see a binary-to-decimal table). A group of latches operating together to store a single number is called a register, and the number of bits (latches) in the register is called its width. If we group 8 latches together, we can use our newly made 8-bit register to remember numbers up to 11111111, or 255.
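To see the register arithmetic in action, this short C sketch uses a uint8_t as a stand-in for eight latches, printing each "latch" of the value 5 and then the largest value eight bits can hold:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t reg = 5;   /* 8-bit "register" holding 00000101 */

        /* Print the register one bit (latch) at a time, MSB first. */
        for (int bit = 7; bit >= 0; bit--)
            putchar((reg >> bit) & 1 ? '1' : '0');
        printf(" = %u\n", reg);

        reg = 0xFF;        /* all 8 latches set: 11111111 */
        printf("largest 8-bit value: %u\n", reg);
        return 0;
    }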



Since more and more wiring is needed to reach every latch individually, another clever arrangement is used to cut down on the number of individual circuits: the latches are organized into a matrix.



Computing owes its existence to the relatively recent combination of microscopic and nanoscopic manufacturing techniques with clever arrangements of circuits that allow more and more data to be represented on smaller and smaller components.



And now we arrive at memory addresses - or, for our purposes, physical addresses - which are simply a way to locate a given latch within its matrix. A matrix can be thought of as a series of rows and columns, like an Excel spreadsheet. Though it's a shallow analogy, we can represent the row and the column each with a four-bit binary number, adding up to an 8-bit address in our simple example.
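As a rough sketch of that row/column scheme (the 16x16 size and the address 0xA3 are made up for illustration), this C program splits an 8-bit address into a 4-bit row and a 4-bit column to pick one cell of a toy bit matrix:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint8_t matrix[16][16] = {0};      /* toy 16x16 grid of bits */
        uint8_t address = 0xA3;            /* example 8-bit address  */

        uint8_t row = (address >> 4) & 0x0F;  /* upper four bits */
        uint8_t col = address & 0x0F;         /* lower four bits */

        matrix[row][col] = 1;                 /* "write" that cell */
        printf("address 0x%02X -> row %u, column %u, bit = %u\n",
               address, row, col, matrix[row][col]);
        return 0;
    }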



Additional resources:

• This 12-minute video from SciShow, which brilliantly illustrates the process in greater detail
• This in-depth and technical course extract from the University of Texas






answered Feb 9 at 0:12 (edited Feb 9 at 3:36)
baelx

































0














[Some of this is simplified to provide a high-level view]



CPUs have load and store instructions that read and write data at addresses. Any program you run that uses RAM uses these instructions.
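For instance, a pointer dereference in C typically compiles down to exactly these load and store instructions (the precise instructions depend on the compiler and the CPU's instruction set, so treat this as a sketch):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint32_t value = 0;
        uint32_t *addr = &value;   /* the variable's address          */

        *addr = 42;                /* becomes a store to that address */
        uint32_t loaded = *addr;   /* becomes a load from it          */

        /* Note: the printed address is the virtual address the
         * process sees, not the physical one (see the MMU below). */
        printf("address %p holds %u\n", (void *)addr, loaded);
        return 0;
    }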



These addresses start at 0, and the highest address you can specify depends on the type and "bitness" of the CPU.



On 64-bit CPUs the highest address is 2^64 minus 1, or 18446744073709551615.



On 32-bit CPUs the highest address is 2^32 minus 1, or 4294967295.



(For fun: old 8-bit CPUs, like the 6502-compatible one in the NES, usually had 16 address lines, so the highest address is 2^16 minus 1, or 65535.)
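The pattern behind all three numbers is the same: n address bits give a highest address of 2^n minus 1. This little C sketch reproduces them (written to sidestep the undefined behavior of shifting a 64-bit value by 64):

    #include <stdio.h>
    #include <stdint.h>

    /* Highest address reachable with n address bits: 2^n - 1,
     * computed as (2^(n-1) - 1) * 2 + 1 so we never shift by 64. */
    static uint64_t max_address(unsigned n) {
        return ((UINT64_C(1) << (n - 1)) - 1) * 2 + 1;
    }

    int main(void) {
        printf("16-bit: %llu\n", (unsigned long long)max_address(16));
        printf("32-bit: %llu\n", (unsigned long long)max_address(32));
        printf("64-bit: %llu\n", (unsigned long long)max_address(64));
        return 0;
    }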



            The typical, "traditional" thing you are doing when you use load and store instructions is reading and writing from RAM or ROM.



Of course, your installed RAM will appear at some of those addresses. On a modern 64-bit CPU, not all of them can be RAM, because there's no way to install 17179869184 GB of RAM in a system yet (though 32-bit systems have been maxed out for a long time).



The UEFI or BIOS ROM will appear at some of those addresses, so that the CPU has something to do when it powers on.



Some addresses are connected to hardware devices: by reading and writing certain addresses, the behavior of a hardware device can be set up or modified, or data can be exchanged with it.



Some addresses are expected to hold information important to the CPU itself, like exception/interrupt vectors and various data structures relating to MMU/paging, VM control, and "enclave" control for Intel's "SGX" features.



Some addresses won't be "backed" by anything and might cause a system lockup when accessed, or return random data.



Speaking of the MMU: it can change the CPU's view of RAM, making what physically lives at a given address appear elsewhere through a mechanism called "paging". Code running in kernel mode can change this mapping; processes not running in kernel mode have to use whatever paging the kernel has set up. So a process that is not running in kernel mode sees a "virtual" view of the address space set up by the kernel, which isolates it, protects other programs from being overwritten, and allows multiple processes to run on a single CPU.
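As a toy model only (real page tables are multi-level structures that the MMU walks in hardware, and the sizes here are made up), this C sketch shows the core of the translation: split a virtual address into a page number and an offset, look the page number up in a table, and attach the offset to the physical frame:

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE 4096u   /* 4 KiB pages */
    #define NUM_PAGES 8

    /* Flat table: virtual page number -> physical frame number. */
    static const uint32_t page_table[NUM_PAGES] = {
        5, 2, 7, 0, 1, 6, 3, 4
    };

    static uint32_t translate(uint32_t vaddr) {
        uint32_t vpage  = vaddr / PAGE_SIZE;   /* which page       */
        uint32_t offset = vaddr % PAGE_SIZE;   /* where in page    */
        return page_table[vpage] * PAGE_SIZE + offset;
    }

    int main(void) {
        uint32_t vaddr = 0x1234;   /* falls in virtual page 1 */
        printf("virtual 0x%04X -> physical 0x%04X\n",
               (unsigned)vaddr, (unsigned)translate(vaddr));
        return 0;
    }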
























































































answered Feb 9 at 0:59
LawrenceC














