Memory, RAM, ROM, DRAM, SDRAM, System Memory

There are various ways to say it, but they all contain the word “memory” and they all mean essentially the same thing.  Computers have memory.  This is not to be confused with storage.  Storage is where data is stored for the longer term, such as on a hard disk, floppy disk, tape, CD, DVD, flash drive, or other external, long term data storage technology.

Memory is not the same thing as storage.  Memory is electronic, and most kinds of memory require a live electrical source to retain their data.  If the power is cut off, the data in memory is lost almost instantly.  Memory is also MUCH faster than storage.  It’s also MUCH more expensive per byte.  As such, storage is usually MUCH larger in capacity than memory.

Computer processors cannot execute programs directly from storage.  Programs must first be copied from storage into memory before the CPU can transfer control to the instructions in the program.  Data that a program uses, whether it’s a word processing document, a spreadsheet, a photo for editing, or anything else, must be copied from storage into the computer’s memory before the CPU can see and work with it.

In computer technology, memory is the electrical hardware that stores data.  This data is in the form of electrical charges.  These charges are either there or they aren’t.  A single charge (or lack thereof) is called a bit.  Bits are grouped together in groups of 8, and a group of 8 bits is called a byte.  Most processors are designed to read or write one byte at a time.  In fact, every byte in a computer’s memory has its own address, similar to a postal address on a mailbox on a street lined with houses.

A single bit generally represents a value of 1 (a charge) or 0 (no charge).  So a single bit can hold exactly one value (either 1 or 0) and no more.  It cannot hold a value of 2 or 3 or anything else.  2 bits can hold a single value between 0 and 3 inclusive (1 value out of 4 possibilities (0, 1, 2, 3)) because with 2 bits of on and off charges, there are 4 possible combinations.  Every time you add a bit, you double the number of possibilities, so 3 bits can represent a number between 0 and 7 (8 possibilities).  4 bits (called a nibble) can represent a number as large as 15 (16 possible outcomes).  8 bits (or a byte) can hold one of 256 different possible combinations of 1’s and 0’s.  The largest number that can be represented in a byte is 255.  2 bytes (16 bits, called a word) can represent a number as large as 65,535 (65,536 combinations of 1’s and 0’s).  3 bytes (24 bits) can represent a number as large as about 16.7 million.  4 bytes (32 bits, called a dword for “double word”) can represent a number as large as about 4.2 billion.
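
To see the doubling rule in action, here’s a quick sketch in C (illustrative only) that prints the number of combinations and the largest value for a few common bit widths:

    #include <stdio.h>

    int main(void) {
        /* For n bits there are 2^n possible combinations, so the
           largest unsigned value that fits is 2^n - 1. */
        int widths[] = { 1, 2, 3, 4, 8, 16, 24, 32 };
        for (int i = 0; i < 8; i++) {
            int n = widths[i];
            unsigned long long combos = 1ULL << n;   /* 2^n */
            printf("%2d bits: %llu combinations, max value %llu\n",
                   n, combos, combos - 1);
        }
        return 0;
    }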

Most modern processors have an 8 byte (64 bit) memory address register.  This means that the processor can store something like a zip code, made out of 8 bytes (64 bits), to represent a single address to a single byte of memory.  With an address size as large as 8 bytes, the CPU can uniquely address up to 2^64 addresses (18,446,744,073,709,551,616).  That’s about 18.4 quintillion bytes of memory!!!  Now, just because the CPU can address that much memory doesn’t mean that it has that much memory.  As of late 2009, most modern PCs don’t have much more than about 4GB (4 Gigabytes (4.2 billion bytes)) of memory.
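
If you want to check this on your own machine, here’s a tiny C sketch (assuming a 64-bit machine and compiler):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        /* On a 64-bit machine, a memory address is 8 bytes wide. */
        printf("address size: %zu bytes\n", sizeof(void *));
        /* 2^64 - 1 is the highest address an 8-byte register can hold. */
        printf("highest possible address: %llu\n",
               (unsigned long long)UINT64_MAX);
        return 0;
    }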

Memory is measured in bytes, not bits.  The standard measurements are in thousands of bytes (actually units of 1024 bytes) called KiloBytes (KB for short… and that “B” MUST BE capitalized!!!!  A lowercase “b” means bits, an EIGHT FOLD reduction in the amount you’re talking about!!!).  If it gets into the millions of bytes, it’s measured in MegaBytes (MB, again, uppercase).  1 MB is 1024 KB or 1024*1024 bytes.  If it gets into the billions, it’s measured in GigaBytes (GB, again, uppercase).  1 GB is 1024 MB or 1024*1024*1024 bytes.  Next up from that is TeraBytes (TB, trillions of bytes).  We’re still a few years out from having PCs with TBs of memory.  We do already have TB hard drives, but that’s not “memory”.  A modern PC as of this writing typically has between 2 and 6 GB of memory.  Few PCs have the capacity for more than 16GB at this time.  This, of course, as always, will be changing as the future unfolds.
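
Here’s a small C sketch of those 1024-based multipliers (illustrative only):

    #include <stdio.h>

    int main(void) {
        /* Memory units grow by factors of 1024, not 1000. */
        unsigned long long kb = 1024ULL;       /* 1 KB */
        unsigned long long mb = kb * 1024;     /* 1 MB */
        unsigned long long gb = mb * 1024;     /* 1 GB */
        unsigned long long tb = gb * 1024;     /* 1 TB */
        printf("1 KB = %llu bytes\n", kb);
        printf("1 MB = %llu bytes\n", mb);
        printf("1 GB = %llu bytes\n", gb);
        printf("1 TB = %llu bytes\n", tb);
        return 0;
    }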

These bits of data held in the computer’s memory can represent anything from a program (like notepad or your web browser) to photos of your cat, to videos of your kids, to spreadsheets or anything else you use on your computer.

It’s time for acronyms:

RAM:  Random Access Memory.  This is memory that can be both read from and written to.  Your computer’s memory is RAM.

ROM:  Read Only Memory.  It’s like RAM, but it can’t be changed.  An example of ROM is the program for a game in an arcade machine.  Take Pac-Man for example.  The program code is written to a ROM chip, so it can’t be changed.  That chip is installed in an arcade machine so each time it’s powered on, it loads the same program into RAM.  It never gets corrupted.

DRAM:  Dynamic RAM.  A type of RAM that needs to be continually refreshed with electrical charge so that it doesn’t lose its data rapidly.

SRAM:  Static RAM.  This is RAM that doesn’t need to be refreshed.  It retains its data for as long as power is supplied, without DRAM’s constant refresh cycles (though it still loses its data when the power is cut).

SDRAM:  Synchronous DRAM.  This type of memory waits for a clock signal before responding to requests from the CPU for data, which keeps memory transfers synchronized with the system bus.

Misuses.  Please, for the love of all that is good in the world, NEVER say “RAM Memory”!!!  As you now know, the “M” in RAM stands for “Memory”.  Saying “RAM Memory” is the equivalent of saying “Random Access Memory Memory”.  It’s like fingernails down a chalkboard to the tech-educated.  Just saying “RAM” is more than sufficient.

Source Code

When software developers write software, they write instructions and save them in a text file.  The computer can’t actually run the instructions in the text file as-is.  They must first be converted to machine code before they can run.  Machine code is not human readable and is excruciatingly difficult for a human to work with directly.  Therefore, more human readable languages have been created that people can actually work with.  Unfortunately, machines don’t understand those languages, which is why the code must be converted (or compiled) into machine code.  The text file and the instructions in it that the human wrote are what is called “Source Code”.
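
For example, here’s a tiny, hypothetical C source file.  The text itself is the source code; a compiler (such as gcc) must translate it into machine code before the CPU can run it:

    /* hello.c -- a tiny example of source code.
       A human can read and edit this text file, but the CPU can't
       run it directly.  A compiler must first translate it into
       machine code, e.g.:  gcc hello.c -o hello  */
    #include <stdio.h>

    int main(void) {
        printf("Hello from source code!\n");
        return 0;
    }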

Once a program is converted to machine code (compiled), the newly created files are what get distributed to the users of the software.  The source code, in the majority of cases, is highly protected by the creators of the software.  For example, the source code to the Microsoft Windows Operating System is highly protected at Microsoft.  They go to extraordinary lengths to prevent their source code from making it out of the building.  Source code is protected because if a company’s competitors got hold of it, they could put forth just a small amount of effort to remove the original company’s copyright notices and anything identifying the software as being created by the original owners, then modify the UI just enough to make it look like their own creation.  They’d let the first company spend all their resources to architect, build, and test the software, then they could steal it.

Some organizations aren’t as worried and freely give out the source code to their software.  In fact, there’s a huge movement called “open source” where people and organizations design and write software with the full intention of giving it away for the purpose of sharing.

Assembly Language


In computer programming, there are hundreds of languages to choose from to write software, but when it comes down to it, they all get converted down to machine code eventually, even high level scripting languages like JavaScript.

Assembly language is the human readable form of machine code.  People write Assembly Language programs using a source code editor (a glorified text editor, optimized for writing computer code).  They then feed their source code into a program called an assembler to convert it into true machine code so that the CPU can execute it.
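
As a rough illustration (the exact instructions depend on the CPU, the compiler settings, and the assembler), here’s a one-line C function with one plausible x86-64 rendering of it shown in the comment:

    /* add.c -- a trivial function, with one plausible x86-64
       translation in the comment below (actual output varies):

           add:
               mov  eax, edi    ; copy the first argument into eax
               add  eax, esi    ; add the second argument to it
               ret              ; return; the result is in eax
    */
    int add(int a, int b) {
        return a + b;
    }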

Assembly Language should not be confused with Assembler Language, though even some of the most experienced developers make the mistake.

For more information about Assembly Language, please see the “Machine Language or Machine Code” section below.


Machine Language or Machine Code

Machine Language is the only language that can truly be run by a computer.  ALL other languages, no matter what the language, are eventually converted down to machine code that’s specific to the CPU of the machine the software is running on.  At this level, the differences between CPU models become critical.

CPUs are the brains and the workhorses of computers.  The CPU is the part that actually takes action on each of the instructions in a computer program.  The instructions that CPUs can execute are actually quite simple, regardless of the complexity of the source code and the language the original developer(s) designed the software in.  Most CPUs have the following parts:

  • Program Counter (keeps track of what instruction is being executed).
  • Stack Pointer (keeps track of the top of the stack, so the program can find its way back after it jumps to another section of code).
  • Stack (holds the code locations left behind by prior jumps, along with other temporary values).
  • Registers (hardware variables to hold temporary values for quick calculations).  There are various types of registers depending on the processor.

In addition to those parts, the CPU also has direct access to the system’s memory.
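
As a loose sketch (purely illustrative, modeling no real CPU), those parts could be written out in C like this:

    #include <stdint.h>

    #define MEM_SIZE 65536   /* 64 KB of addressable memory (arbitrary) */

    /* A toy model of the CPU parts listed above. */
    struct cpu {
        uint16_t pc;                /* program counter: address of next instruction */
        uint16_t sp;                /* stack pointer: top of the stack              */
        uint8_t  regs[4];           /* a few general purpose registers              */
        uint8_t  memory[MEM_SIZE];  /* the system memory the CPU can access         */
    };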

The things that a CPU can actually do:

  • Copy data from one memory location to another (usually one or a handful of bytes at a time).
  • Read bits, bytes, and words from any memory location into registers.
  • Store bits, bytes, and words from registers to any memory location.
  • Shift the bits in a byte or word, in a register or a memory location.
  • Compare values between memory and/or registers and switch code execution to different parts in memory based on the result.
  • Add, subtract, increment, or decrement values in memory or registers.
  • Switch to another memory location for code execution.
  • Return from a particular section of code execution to a previous section that was executing.
  • Push and pop values to and from the stack.

Depending on the processor, some also have the following capabilities:

  • Perform floating point math.
  • Automatically jump to a new memory location for code execution because of an external event (called an interrupt), such as keyboard input, mouse input, etc…

Various processors have varying capabilities, but the previous list contains the basic capabilities of the majority of processors.
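
To make that list concrete, here’s a minimal fetch-decode-execute loop in C for a completely made-up processor.  The opcodes are invented for illustration; no real instruction set looks exactly like this:

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up opcodes for a toy CPU (not a real instruction set). */
    enum { OP_LOAD = 0x01,  /* LOAD reg, value : put a value in a register   */
           OP_ADD  = 0x02,  /* ADD  reg, reg   : add one register to another */
           OP_JMP  = 0x03,  /* JMP  addr       : jump to another location    */
           OP_HALT = 0xFF };

    int main(void) {
        uint8_t mem[256] = {
            OP_LOAD, 0, 40,   /* reg0 = 40    */
            OP_LOAD, 1, 2,    /* reg1 = 2     */
            OP_ADD,  0, 1,    /* reg0 += reg1 */
            OP_HALT
        };
        uint8_t regs[4] = {0};
        uint8_t pc = 0;                       /* program counter */

        for (;;) {                            /* fetch-decode-execute loop */
            uint8_t op = mem[pc++];           /* fetch the next opcode     */
            if      (op == OP_LOAD) { uint8_t r = mem[pc++]; regs[r] = mem[pc++]; }
            else if (op == OP_ADD)  { uint8_t r = mem[pc++]; regs[r] += regs[mem[pc++]]; }
            else if (op == OP_JMP)  { pc = mem[pc]; }
            else if (op == OP_HALT) break;
        }
        printf("reg0 = %d\n", regs[0]);       /* prints 42 */
        return 0;
    }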

Multi-Threading:

Up until about 2002 or so, most PC processors (by PC, I mean “Personal Computer”, not “Windows”, and since most people consider Macs more “personal” than others, “PC” definitely includes Macs as well) were capable of executing only one instruction at a time.  When that instruction completed, the next instruction would be executed, and so on, ad infinitum until the power was turned off.

In 2002, Intel introduced a partial multi-threading technology called “Hyper-Threading”.  This allowed one processor to appear as two processors to the operating system software.  By enabling this feature, the operating system could assign different processes to the two virtual processors, making it appear that two processes were running simultaneously.  Hyper-Threading wasn’t pure or true parallel processing because the chip didn’t have two complete cores.  The processor looked at code that was about to execute and decided whether or not it could handle that portion of 2 threads simultaneously, and if so, it would.  Code that was optimized for multiple processors did see a measurable increase when this technology was enabled, but certainly nowhere near the double speed you’d expect with true parallelism.  In fact, code that was not optimized for it would actually run slower in some situations.

Some time around 2006, Intel started making processors with 2 actual, complete processing units (called multi-core CPUs).  These were true multi-processing units and could execute 2 things at once all the time.  Since then, both Intel and AMD have released faster multi-core processors and have been increasing the number of “cores” in them.  As of this writing, you can get one with as many as 6 cores, and an 8 core processor is just around the corner.

The way a processor executes machine code is this: when it powers on, it begins looking at a hard wired memory address for instructions.  Each processor type has a different place in memory to look for the first instruction.  Regardless of where it is, it looks at the byte value in that memory location.  Each byte value means something different to the processor.  One byte value could mean to load a register with a value at a memory address specified in the next few bytes, another byte value may mean to jump to another memory location to find the next instruction, etc…  Each type of processor has its own set of instructions that’s unique to that processor, so machine code written for one type of processor cannot be executed on another type.  For example, the old 8-bit Apple // computers used a MOS Technology 6502 8 bit processor.  That processor does not have the circuitry to execute machine byte code the same way an Intel x86 compatible processor does, and vice versa.  This is why, up until recently, you couldn’t take a program written for a Windows machine and run it on a Mac and vice versa: the two machines used completely different hardware (different types of CPUs that couldn’t understand each other’s machine code).  Of course, around 2006, Apple switched from PowerPC processors to Intel processors.  Now, Macs can run Windows programs natively (without any kind of run-time translation).

The set of instructions that a particular type of processor can execute is called that processor’s “Instruction Set”.  This is one of the primary things that makes a processor what it is.  Whenever a programmer writes a program in a high level language (machine code is called a low level language) like C, C++, C#, JavaScript, or whatever, the code is eventually converted down into machine code for the processor of the machine it runs on.  There are NO exceptions to this.  ALL code is eventually converted to machine code.

Few people (if any) write code directly in machine code.  This would entail writing actual bytes, not human readable instructions.  The lowest level people generally write code in is Assembly Language, which is just 1/2 a step above Machine Code.  Assembly language is still one machine instruction at a time, but instead of writing actual bytes, the programmer writes in mnemonics.  In other words, there’s an English like word for each machine instruction available, and the programmer writes those English like words.  The programmer saves this source code into a text file, then executes a program called an assembler that then converts the instructions in the text file directly into machine code, which can then be executed by the processor.
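
For instance (using x86 purely as an illustration), the same instruction can be written as raw machine code bytes or as a human readable mnemonic:

    /* The same instruction, two ways (x86, illustrative):
     *   Machine code (raw bytes):  B8 2A 00 00 00
     *   Assembly mnemonic:         mov eax, 42
     * The programmer writes the mnemonic; the assembler emits the bytes. */
    #include <stdio.h>

    int main(void) {
        unsigned char machine_code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00 };
        printf("\"mov eax, 42\" assembles to %zu bytes\n",
               sizeof machine_code);
        return 0;
    }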

A note about terminology:

Machine Code or Machine Language:  These are the bytes that make up the instructions for a CPU to process.

Assembly Language:  This is a programming language.  Each processor with a different instruction set requires a completely different Assembly Language, with instructions (or mnemonics) specific to that processor.

Assembler:

  1. A program that converts an assembly language source file into machine code.  With higher level languages, we have programs called compilers that convert the high level language into machine code.  An assembler is different because it’s not interpreting high level concepts to form large machine code equivalents.  Instead, it just takes the instructions, one at a time, and converts each into one actual machine code instruction.  It’s a different and simpler process, so it’s not called a compiler; it’s just an assembler.
  2. A language, not to be confused with Assembly!!!!  This language is NOT Assembly Language!  Though, the majority of programmers incorrectly use the terms “assembly” and “assembler” interchangeably.  Assembler is a language that’s specific to a particular vendor’s assembler.  For example, when you’re writing an Assembly Language program, you may need to leave an instruction in the source code that tells the Assembler (the program that converts your source code) how to convert some instructions.  You may want the assembler to generate 64 bit versions of your instructions rather than 32 bit versions.  This instruction left in the source code is NOT a machine instruction.  It’s an instruction for the Assembler that only has meaning at assembly time.  It does NOT get translated into machine code.  Since these are special instructions for the Assembler program, the language for those types of NON-MACHINE instructions is called Assembler Language.  As you can see, this is quite different from Assembly Language, whose instructions are actual instructions for the hardware CPU.

Ad Infinitum

Latin, meaning: to go on forever (to infinity).  Pronounced “odd in-fi (as in fist) ny (as in knife), toom”.  The more common (and incorrect) pronunciation by English speakers is “add in-fi (as in fist) ni (as in knit) tem”.