Memory Management in OS (Operating System)


Memory Management

The way computers are built, memory is arranged in a hierarchical way.  It starts with the fastest registers, then the CPU cache, random access memory, and finally disk storage.  An operating system's memory manager coordinates the use of these various memory types by tracking which regions are available, which should be allocated or de-allocated, and how to move data between them.

This function is referred to as virtual memory management and increases the amount of memory available for each process by making disk storage seem like main memory.  There is a speed penalty associated with using disks or other slower storage as memory.  If the running processes require significantly more RAM than is available, the system may start thrashing and slow down dramatically.

This can happen either because one process requires a large amount of RAM or because two or more processes compete for a larger amount of memory than is available.  This then leads to constant transfer of each process's data to slower storage.

Another important part of memory management is managing virtual addresses.  If multiple processes are in memory at the same time, they must be prevented from interfering with each other's memory unless there is an explicit request to use shared memory.  This is achieved by giving each process a separate address space.

Each process sees the whole virtual address space, typically from address 0 up to the maximum size of virtual memory, as uniquely assigned to it.  The operating system maintains page tables that match virtual addresses to physical addresses.  These memory allocations are tracked so that when a process ends, all memory used by that process can be made available for other processes.
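The translation a page table performs can be illustrated with a minimal sketch.  The 4 KB page size, the dict-based table, and its contents are illustrative assumptions, not how any real OS stores its tables:

```python
PAGE_SIZE = 4096  # assume 4 KB pages

# Per-process page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 9, 2: 3}

def translate(virtual_addr):
    """Split a virtual address into page number and offset, then map it."""
    vpn = virtual_addr // PAGE_SIZE      # virtual page number
    offset = virtual_addr % PAGE_SIZE    # offset within the page
    if vpn not in page_table:
        raise LookupError("page fault: page %d is not mapped" % vpn)
    return page_table[vpn] * PAGE_SIZE + offset

print(translate(4100))  # virtual page 1, offset 4 -> frame 9 -> 36868
```

An access to an unmapped page raises the equivalent of a page fault, which is the point where a real OS would decide to bring the page in from disk.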

The operating system can also write inactive memory pages to secondary storage.  This process is called paging or swapping.  The terminology varies between operating systems.
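Swapping can be sketched as a simulation with a handful of physical frames and a least-recently-used eviction policy.  The three-frame limit and LRU choice are assumptions for illustration; real systems use more sophisticated replacement algorithms:

```python
from collections import OrderedDict

FRAMES = 3            # physical frames available (illustrative)
ram = OrderedDict()   # page number -> data, least recently used first
disk = {}             # pages that have been swapped out

def touch(page, data=None):
    """Access a page, swapping pages in and out as needed."""
    if page in ram:
        ram.move_to_end(page)                        # mark as recently used
    else:
        if len(ram) >= FRAMES:
            victim, vdata = ram.popitem(last=False)  # evict the LRU page
            disk[victim] = vdata                     # "write it to swap"
        ram[page] = disk.pop(page, data)             # swap in, or load new

for p in [1, 2, 3, 1, 4]:
    touch(p, data="page%d" % p)
print(list(ram))   # [3, 1, 4]: page 2 was the LRU page and was swapped out
```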

It is also typical for operating systems to employ otherwise unused physical memory as a page cache.  The page cache holds data that was requested from a slower device and can be retained in memory to improve performance.  The OS can also pre-load the in-memory cache with data that may be requested by the user in the near future.
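The effect of a page cache can be sketched with a memoized read function.  The `read_block` function and the 64-entry cache size are hypothetical stand-ins; a real page cache operates on whole pages under kernel control:

```python
from functools import lru_cache

@lru_cache(maxsize=64)  # cache capacity is an illustrative assumption
def read_block(block_no):
    # Stand-in for a slow device read; the result is kept in memory.
    return b"data-for-block-%d" % block_no

read_block(7)          # first call goes to the "device"
read_block(7)          # second call is served from the cache
print(read_block.cache_info().hits)  # 1
```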

The first task of memory management requires the operating system to set up memory boundaries for types of software and for individual applications.

As an example, let's look at an imaginary small system with 1 megabyte (1,000 kilobytes) of RAM.  During the boot process, the operating system of our imaginary computer is designed to go to the top of available memory and then "back up" far enough to meet the needs of the operating system itself.

Let's say that the operating system needs 300 kilobytes to run. Now, the operating system goes to the bottom of the pool of RAM and starts building up with the various driver software required to control the hardware subsystems of the computer. In our imaginary computer, the drivers take up 200 kilobytes. So after getting the operating system completely loaded, there are 500 kilobytes remaining for application processes.
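The arithmetic behind this boot-time layout is simple enough to write down directly, using the figures from the example above:

```python
# Boot-time layout of the imaginary 1,000-kilobyte machine.
TOTAL_KB = 1000   # total RAM in the example
OS_KB = 300       # operating system, placed at the top of memory
DRIVERS_KB = 200  # drivers, built up from the bottom of memory

app_space_kb = TOTAL_KB - OS_KB - DRIVERS_KB
print(app_space_kb)  # 500 kilobytes left for application processes
```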

When applications begin to be loaded into memory, they are loaded in block sizes determined by the operating system. If the block size is 2 kilobytes, then every process that is loaded will be given a chunk of memory that is a multiple of 2 kilobytes in size. Applications will be loaded in these fixed block sizes, with the blocks starting and ending on boundaries established by words of 4 or 8 bytes.
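Rounding an allocation up to a whole number of blocks is a ceiling division.  A small sketch, using the 2-kilobyte block size from the example:

```python
BLOCK_KB = 2  # block size chosen by the OS in the example

def blocks_needed(size_kb):
    """Round an allocation request up to a whole number of blocks."""
    return -(-size_kb // BLOCK_KB)  # ceiling division

print(blocks_needed(5) * BLOCK_KB)  # a 5 KB process is given 6 KB (3 blocks)
```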

These blocks and boundaries help to ensure that applications won't be loaded on top of one another's space by a poorly calculated bit or two. With that ensured, the larger question is what to do when the 500-kilobyte application space is filled.

In most computers, it's possible to add memory beyond the original capacity. For example, you might expand RAM from 1 to 2 megabytes. This works fine, but tends to be relatively expensive. It also ignores a fundamental fact of computing -- most of the information that an application stores in memory is not being used at any given moment.

A processor can only access memory one location at a time, so the vast majority of RAM is unused at any moment. Since disk space is cheap compared to RAM, moving inactive information from RAM to the hard disk can greatly expand usable memory at little cost. This technique is called virtual memory management.

Disk storage is only one of the memory types that must be managed by the operating system, and is the slowest. Ranked in order of speed, the types of memory in a computer system are:
  • High-speed cache - This is fast, relatively small amounts of memory that are available to the CPU through the fastest connections. Cache controllers predict which pieces of data the CPU will need next and pull them from main memory into high-speed cache to speed up system performance.

  • Main memory - This is the RAM that you see measured in megabytes when you buy a computer.
  • Secondary memory - This is most often some sort of rotating magnetic storage that keeps applications and data available to be used, and serves as virtual RAM under the control of the operating system.

The operating system must balance the needs of the various processes with the availability of the different types of memory, moving data in blocks (called pages) between available memory as the schedule of processes dictates.

Disk and File Systems

All operating systems include support for a variety of file systems.  Modern file systems organize files into directories.  While the idea is similar in concept across all general-purpose file systems, some differences in implementation exist.

Two examples of this are the character that is used to separate directories and case sensitivity.  By default, Microsoft Windows separates its path components with a backslash and its file names are not case sensitive. 

However, UNIX- and Linux-derived operating systems, along with Mac OS X, use the forward slash, and their file names are generally case sensitive.  Some versions of Mac OS (those prior to OS X) use a colon as the path separator.
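Python's standard path modules expose both conventions, which makes the difference easy to demonstrate.  This is a sketch of the separator and case-handling rules, not of how the file systems themselves store names:

```python
import ntpath      # Windows path conventions
import posixpath   # UNIX/Linux/Mac OS X path conventions

print(ntpath.sep)     # '\' -> Windows separates components with a backslash
print(posixpath.sep)  # '/' -> UNIX-family systems use the forward slash

# Windows file names compare case-insensitively, so normcase lowercases;
# on POSIX systems the name is left unchanged.
print(ntpath.normcase("README.TXT"))     # 'readme.txt'
print(posixpath.normcase("README.TXT"))  # 'README.TXT'
```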

File systems are either journaled or non-journaled.  A journaled file system is a safer alternative in the event of a system crash.  If a system comes to an abrupt stop in a crash scenario, the non-journaled system will need to be examined by the system check utilities.  On the other hand, a journaled file system's recovery is automatic.

The file systems vary between operating systems, but common to all of them is support for the file systems typically found on removable media like CDs, DVDs, and floppy disks.  They also provide for the rewriting of CDs and DVDs as storage media.
