I decided to start from the beginning, so here goes my first undergrad chapter:
In the beginning, the earth was without form and void, and darkness was upon the face of the deep [Torah]; many, many years later, the sons of the sons of the sons......of Noah said: let there be a CPU, and there was a Central Processing Unit, or simply processor, whose job was to process data and to interpret computer program instructions. Hence an important difference between processes and programs: programs are the set of instructions, which we call a process once they are being processed by the processor (makes sense, doesn't it?); in other words, a process is a program under execution. And Noah's great-great-... grandchildren saw that the processor was good, but for this processor to be of some real use, it had to interact with other computer resources. And as systems grew larger and more complex, managing computer resources and their interactions became a complicated development issue. Thus, operating systems (OSs) were born: a set of programs developed to provide standardized solutions for managing computer software and hardware.
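To make the program-versus-process distinction concrete, here is a minimal C sketch (POSIX only; the messages are mine, purely for illustration): a single program whose text, after fork(), is being executed by two separate processes.

    #include <stdio.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();          /* duplicate the calling process */
        if (pid == 0)
            printf("child  process, pid %d\n", getpid());
        else
            printf("parent process, pid %d (child is %d)\n", getpid(), pid);
        return 0;
    }

One program on disk, two processes under execution: each gets its own process ID, even though both run the very same instructions.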
Having operating systems and their services, a whole new world of concepts such as process and memory management, device drivers, disk and file systems, networking, etc. started to have a meaning and a reason to exist.
Since the computer is a Harvard or von Neumann architecture, like the one the reader certainly has at home, only one process can run per CPU core at a time. As a solution, the time-sharing magic appeared in the early 1960s, enabling the apparently simultaneous execution of many processes per processor core. The trick behind it is called process management, and it is done simply by switching quickly between processes, thereby distributing CPU time; this will be explained in detail.
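Here is a toy round-robin sketch of that trick in C (the process list, quantum and all numbers are made up for illustration; a real kernel also saves and restores registers and other state on every switch, which this sketch skips):

    #include <stdio.h>

    #define NPROCS  3
    #define QUANTUM 2   /* time units each process gets per turn */

    int main(void)
    {
        int remaining[NPROCS] = { 5, 3, 4 };  /* work left per process */
        int done = 0, p = 0;

        while (done < NPROCS) {
            if (remaining[p] > 0) {
                int slice = remaining[p] < QUANTUM ? remaining[p] : QUANTUM;
                remaining[p] -= slice;
                printf("process %d runs for %d unit(s), %d left\n",
                       p, slice, remaining[p]);
                if (remaining[p] == 0)
                    done++;
            }
            p = (p + 1) % NPROCS;  /* "context switch" to the next process */
        }
        return 0;
    }

Each process advances a little on every pass, so from the outside all three appear to run at once, even though the single "CPU" of the loop only ever runs one at a time.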
Having many concurrent executions dramatically increases memory management complexity, since the system now has to coordinate how memory is used by all those processes. Keep in mind that nowadays a system's memory is divided into various types, depending on access speed: registers, CPU cache, random access memory (RAM) and, last of all, disk storage. The memory manager deals with those various types of memory by determining how and when to move data between them.

To help with that task, virtual memory management was invented. There, addresses are divided into virtual and physical: virtual addresses are those seen by processes, while physical (real) addresses are those used by the memory manager and the CPU. That provides a separation between physical memory and the addresses used by programs.

It is important to understand that in memory, programs consist mainly of two things: ``text'', where the program's running instructions are stored, and ``data'', where hardwired and volatile information such as string constants and variable values are stored. But as programs get a little bigger, the use of functions or subroutines becomes necessary, so the current process state must be saved before each call; consequently there must be some specific place to store such information, which is called the process's stack. For security reasons, some operating systems split the text, data and stack sections into what are called segments.

Having that clear: when a program is about to run, a virtual memory space for its text, data and stack is created; that means that, from the process's point of view, virtual memory is the only memory available. As the process's need for more memory grows, the data segment can change its size dynamically by allocating memory from an area reserved for dynamic use, called the heap; that method is known as heap-based memory allocation.
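To see those sections in a living process, here is a minimal C sketch (variable names are mine; the actual addresses vary by OS and are randomized on modern systems, so the grouping, not the numbers, is the point):

    #include <stdio.h>
    #include <stdlib.h>

    const char *text_like = "string constant";    /* read-only data, near text */
    int global_var = 42;                          /* data segment */

    int main(void)
    {
        int local_var = 7;                        /* stack */
        int *heap_var = malloc(sizeof *heap_var); /* heap */

        printf("main (text)      : %p\n", (void *)main);
        printf("string constant  : %p\n", (void *)text_like);
        printf("global (data)    : %p\n", (void *)&global_var);
        printf("heap allocation  : %p\n", (void *)heap_var);
        printf("local (stack)    : %p\n", (void *)&local_var);

        free(heap_var);
        return 0;
    }

Running it, the text and data addresses cluster low, the heap sits above them, and the stack lives far away at the other end of the virtual address space.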
It is common to hear people talking about 64 bits nowadays, thanks to AMD's successful publicity. What this number tells us is the largest number an AMD64 processor can work with at a time, called the word size, which in that specific case is 64 bits. There are many implications; an important one appears if we think about addressable memory: for example, the common 32-bit architectures can address at most 2^32 bytes = 4 GB, giving a virtual address space from 0x00000000 to 0xffffffff, which is divided into pieces called pages. The same thing happens to physical memory, but its divisions are called frames. So a program segment can be located at virtual address A, found by its page and offset numbers, but it really corresponds to a physical address B, located by its frame and offset numbers; what the virtual memory manager does is translate virtual into physical addresses with the use of a page table containing page-to-frame mappings. But address translation by software routines is time consuming and therefore reduces the system's efficiency, so designers sped this up by storing the most recent page-to-frame mappings in a very fast content-addressable memory known as the translation lookaside buffer (TLB), in which the search key is the virtual page and the search result is its corresponding frame. On a TLB hit the translation is nearly free, dramatically improving virtual memory management speed; only on a miss is the slower page-table lookup triggered.
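Here is a minimal C sketch of that translation (the page size, page-table contents and the one-entry ``TLB'' are all made up for illustration; real hardware does this in the MMU, not in C): split the virtual address into page number and offset, try the TLB first, and walk the page table only on a miss.

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_BITS   12                   /* 4 KiB pages */
    #define OFFSET_MASK ((1u << PAGE_BITS) - 1)

    /* toy page table: page_table[page] = frame */
    static uint32_t page_table[16] = { 5, 9, 1, 7 };

    /* one-entry "TLB": last page translated and its frame */
    static uint32_t tlb_page = UINT32_MAX, tlb_frame;

    uint32_t translate(uint32_t vaddr)
    {
        uint32_t page   = vaddr >> PAGE_BITS;
        uint32_t offset = vaddr & OFFSET_MASK;
        uint32_t frame;

        if (page == tlb_page) {              /* TLB hit: fast path */
            frame = tlb_frame;
        } else {                             /* TLB miss: walk page table */
            frame = page_table[page];
            tlb_page  = page;                /* cache the mapping */
            tlb_frame = frame;
        }
        return (frame << PAGE_BITS) | offset;
    }

    int main(void)
    {
        uint32_t v = 0x00002abc;             /* page 2, offset 0xabc */
        printf("virtual 0x%08x -> physical 0x%08x\n", v, translate(v));
        return 0;
    }

Note that the offset passes through untouched: pages and frames are the same size, so translation only swaps the upper bits of the address.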
to be continued.....
Friday, March 30, 2007