Friday, 14 October 2011

Examples in Operating Systems


In this section we discuss some examples of how virtual memory is used in real operating systems.

Windows NT

Windows NT implements virtual memory using demand paging with clustering. Clustering handles a page fault by bringing in not only the faulting page but also several pages adjacent to it. When a process is first created, it is given a working-set minimum: a guaranteed minimum number of pages that the process will hold in memory. If enough memory is available, the process may be given up to its working-set maximum of pages. The virtual memory manager maintains a list of free page frames, together with a threshold value that indicates whether the available memory is sufficient. If a process has reached its working-set maximum and a page fault occurs, it must choose a replacement page using a local FIFO page-replacement policy. When the amount of free memory falls below the threshold, the virtual memory manager uses a tactic known as automatic working-set trimming to bring free memory back above the threshold. It works by examining the number of pages allocated to each process: if a process has been allocated more pages than its working-set minimum, the virtual memory manager uses the FIFO algorithm to trim its pages down to the working-set minimum. If free memory is available, a process running at its working-set minimum can receive additional pages.

Solaris 2

In the Solaris 2 operating system, when a process causes a page fault, the kernel gives that process a page from a list of free pages it maintains. As a consequence, the kernel must keep a certain amount of memory free. Two parameters are associated with this list, minfree and lotsfree: the minimum and desired amounts of free memory. Four times per second, the kernel checks the amount of free memory; if it falls below minfree, a pageout process is started, which works as follows.
The pageout process uses a two-handed clock algorithm. The first hand sweeps through all pages in memory and clears each page's reference bit to 0. A moment later, the second hand examines the reference bits of the same pages and returns to the free list any page whose bit is still 0. This continues until the amount of free memory exceeds the lotsfree parameter. The process is also dynamic: it can adjust its scanning speed if free memory becomes too scarce. If the pageout process cannot free enough memory, the kernel begins swapping out entire processes to free the pages allocated to them.
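The two-handed sweep above can be sketched as follows — a minimal simulation, not the actual Solaris code. The fixed hand spacing and the set of re-referenced pages are illustrative assumptions:

```python
def pageout(pages, refbit, rereferenced, hand_gap=2):
    """Two-handed clock sketch.
    pages: list of page ids in memory; refbit: dict page -> 0/1;
    rereferenced: pages touched after the front hand passed them.
    The front hand clears reference bits; the back hand, trailing
    hand_gap positions behind, frees pages whose bit is still 0."""
    freed = []
    n = len(pages)
    for i in range(n + hand_gap):
        if i < n:
            refbit[pages[i]] = 0          # front hand clears the bit
        back = i - hand_gap
        if back >= 0:
            page = pages[back]
            if page in rereferenced:
                refbit[page] = 1          # page was used in between
            if refbit[page] == 0:
                freed.append(page)        # back hand reclaims it
    return freed
```

With pages a–d all referenced and only b touched between the two hands, the sketch frees a, c, and d, matching the rule that any page not re-referenced between the hands is reclaimed.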
Linux

As in Solaris 2, Linux also uses a variant of the clock algorithm. A kernel thread, kswapd, runs periodically (or is woken when memory usage has gone too far). If the number of free pages is below the upper free-page threshold, the thread attempts to free three pages; if it is below the lower threshold, the thread attempts to free six pages and sleeps for a short while before running again. When it runs, it examines mem_map, a list of all pages in memory. Each page has an age byte, initialized to 3. Each time the page is accessed, its age is increased (up to a maximum of 20); each time kswapd examines the page, its age is decreased. When a page's age reaches 0, it becomes a candidate for swapping. When kswapd tries to free pages, it first releases pages from the page cache; if that fails, it shrinks the file-system cache; and if all else fails, it swaps out a process. Memory allocation in Linux uses two main allocators: the buddy algorithm and the slab allocator. In the buddy algorithm, each time the allocation routine is called, it checks the free list for blocks of the requested size; if one is found, it is allocated, otherwise the next-larger list is checked. If a larger free block exists, it is split into two buddies: one half is allocated and the other is moved to the list below.
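The buddy splitting just described can be sketched as follows. This is a simplified model under assumed power-of-two block sizes, not the actual Linux data structures:

```python
def buddy_alloc(free_lists, order):
    """Buddy-allocator sketch.
    free_lists: dict order -> list of free block addresses, where a
    block of order k has size 2**k. Allocate a block of size 2**order,
    splitting larger blocks as needed; return its address or None."""
    k = order
    # climb to the smallest order that actually has a free block
    while k not in free_lists or not free_lists[k]:
        k += 1
        if k > max(free_lists):
            return None                  # no block large enough
    block = free_lists[k].pop()
    # split back down, returning each upper "buddy" half to its list
    while k > order:
        k -= 1
        buddy = block + 2 ** k           # address of the second half
        free_lists.setdefault(k, []).append(buddy)
    return block
```

Starting from a single 8-unit block at address 0, a request for a 2-unit block splits twice, leaving the 4-unit buddy at address 4 and the 2-unit buddy at address 2 on the lower lists.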

Structure of Computer Systems

There are no definitive rules on how a computer system should be structured; every expert and computer architect has their own view. However, to make it easier to understand the details of the operating system in the following chapters, we need some general knowledge of the structure of computer systems.

Computer System Operation

In general, a computer system consists of a CPU and a number of device controllers connected through a bus that provides access to memory. Each device controller is generally responsible for a specific piece of hardware. Each device and the CPU can operate concurrently, competing for access to memory. The presence of multiple hardware components can cause synchronization problems; therefore, a memory controller is added to synchronize access to memory.


      
General Computer Architecture
 
In more advanced computer systems, the architecture is more complex. To improve performance, multiple buses are used; each bus is a data path between several different devices. In this arrangement the RAM, processor, and GPU (AGP VGA) are connected by the main high-speed bus, better known as the FSB (Front Side Bus), while other devices are connected by slower buses, which in turn connect through faster buses up to the main bus. For communication between buses, a bridge is used. Responsibility for bus synchronization, which indirectly also affects memory synchronization, falls to a bus controller known as the bus master. The bus master controls the data flow so that at any one time the bus carries data from only one device. In practice, the bridge and the bus master are bundled together in a chipset.




   
Modern PC Architecture
 
NB: GPU = Graphics Processing Unit; AGP = Accelerated Graphics Port; HDD = Hard Disk Drive; FDD = Floppy Disk Drive; FSB = Front Side Bus; USB = Universal Serial Bus; PCI = Peripheral Component Interconnect; RTC = Real Time Clock; PATA = Parallel Advanced Technology Attachment; SATA = Serial Advanced Technology Attachment; ISA = Industry Standard Architecture; IDE = Intelligent Drive Electronics / Integrated Drive Electronics; MCA = Micro Channel Architecture; PS/2 = a port that IBM built to connect the mouse to the PC.

When the computer is turned on, a step known as booting, it runs the bootstrap program: a simple program stored in ROM, on a chip such as the CMOS (Complementary Metal Oxide Semiconductor) chip. Modern CMOS chips are usually of the EEPROM (Electrically Erasable Programmable Read Only Memory) type: non-volatile memory (its contents are not lost when power is turned off) that can be written and erased with electrical pulses. This bootstrap program is better known as the BIOS (Basic Input Output System). The main bootstrap program, usually located on the motherboard, checks the major hardware components and runs each component's own hardware-initialization program, known as its firmware. The main bootstrap program then finds and loads the operating system kernel into memory, after which operating-system initialization proceeds. From here the operating system waits for events; each event determines what the operating system does next (it is event-driven). On a modern computer an event is usually signaled by a software or hardware interrupt, so the operating system is called interrupt-driven. Hardware delivers interrupts through specific signals, while software raises an interrupt by invoking a system call, also known as a monitor call.
A system call or monitor call causes a special interrupt called a trap, generated by software because of a problem or a request for operating-system services. A trap is also often referred to as an exception. Each time an interrupt occurs, a piece of code known as the ISR (Interrupt Service Routine) determines the action to be taken. The action can be determined in two ways: polling, in which the computer checks the devices one by one to find the source of the interrupt; or using ISR addresses stored in an array known as the interrupt vector, which the system consults each time an interrupt occurs. The interrupt architecture must be able to save the address of the interrupted instruction. On older computers this address was stored in a fixed location, while on newer computers it is stored on the stack together with the state information at the time.

The Structure of I/O

An I/O operation can behave in one of two ways:
After the I/O starts, control returns to the user program only upon I/O completion (synchronous). A wait instruction leaves the CPU idle until the next interrupt; a wait loop occurs, and at most one I/O request is outstanding at a time.
After the I/O starts, control returns to the user program without waiting for I/O completion (asynchronous). A system call lets the user request that the operating system wait until the I/O completes. A device-status table holds an entry for each I/O device describing its type, address, and state. The operating system consults this table to learn a device's state and updates the table on each interrupt. When an I/O device transfers data to or from memory directly, this is known as DMA (Direct Memory Access).




 
The Structure of I/O
 
Direct Memory Access

DMA is used for I/O devices that can move data at high speed (close to the memory-bus frequency). The device controller moves data in blocks from its buffer storage directly to main memory, or vice versa, without processor intervention. An interrupt occurs only once per block rather than for each word or byte of data. The whole process is controlled by a DMA controller (DMAC), which sends and receives signals from memory and the I/O device. The processor only sends the starting address of the data, the destination, and the length of the data to the DMA controller; the processor is interrupted only when the transfer is complete. The DMA controller obtains the right to use the memory bus with the help of a bus arbiter, which on today's PCs is part of the Northbridge chipset.

Bus

A bus is a data-transfer path that connects the devices in a computer. Only one device can transmit data on a bus at a time, although more than one device may read the data on the bus. There are two models: the synchronous bus, which operates with the help of a high-speed clock but serves only high-speed devices; and the asynchronous bus, which uses a handshake system, runs at lower speed, and can be used with a wide range of devices.
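The interrupt savings from DMA described above can be illustrated with a small count — a sketch with made-up transfer sizes, comparing one interrupt per byte (programmed I/O) against one per block (DMA):

```python
def interrupts_needed(total_bytes, block_size=None):
    """Return how many interrupts a transfer generates.
    With no DMA (block_size=None) every byte interrupts the CPU;
    with DMA, one interrupt fires per completed block."""
    if block_size is None:
        return total_bytes                   # programmed I/O: per byte
    return -(-total_bytes // block_size)     # DMA: per block (ceiling)
```

For a 4 KiB transfer, programmed I/O costs 4096 interrupts, while DMA with 512-byte blocks costs only 8.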


Storage Structure

The important thing to remember is that a program is itself data.

Register

Volatile storage for a few data items that the processor works on directly at very high speed. Registers live in the processor itself, in very limited numbers, because their function is direct calculation/computation on data.

Cache Memory

Temporary (volatile) storage for small amounts of data, used to increase the speed of retrieving or storing data in memory for a high-speed processor. Formerly the cache was located outside the processor and could be expanded, for example the pipeline burst cache commonly found in computers of the early 90s. However, with falling die/wafer production costs and to improve performance, the cache is now embedded in the processor. This memory is usually built as static memory.

Random Access Memory (RAM) - Main Memory

Temporary (volatile) storage for data that can be accessed directly by the processor; "directly" here means that the processor can address data in memory directly. Nowadays RAM can be obtained quite cheaply, with performance that can even exceed the cache of an older computer.

Additional Memory

Extra memory used to assist processes in the computer, usually as a buffer. The role of additional memory is often overlooked but is very important for efficiency. Its size usually gives a rough idea of a device's capability, for example the amount of VGA memory or soundcard memory.

Secondary Storage

Non-volatile storage media, which can be a flash drive, optical disc, magnetic disk, or magnetic tape. These media usually offer quite large capacity at a relatively low price, and their portability is also relatively high.







 
Disk Structure

 



Structure of an Optical Drive

 
Storage Hierarchy

The basic criteria for arranging storage systems are speed, cost, and volatility. Caching means copying information into a faster storage medium; main memory can be viewed as the last cache for secondary storage. Using high-speed memory to hold the most recently accessed data requires a cache-management policy. The cache also introduces another level in the storage hierarchy; as a result, the same data may be stored at more than one level at once, and those copies must be kept consistent.
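The caching idea above can be sketched as a small fast cache in front of a slow backing store, with write-through used to keep the two levels consistent. The store contents, capacity, and eviction rule here are illustrative assumptions:

```python
class WriteThroughCache:
    def __init__(self, backing, capacity=2):
        self.backing = backing          # the slower, larger level
        self.cache = {}                 # the faster, smaller level
        self.capacity = capacity

    def read(self, key):
        if key in self.cache:           # cache hit: served from fast level
            return self.cache[key]
        value = self.backing[key]       # cache miss: fetch from slow level
        self._insert(key, value)
        return value

    def write(self, key, value):
        # write-through: update both levels so the copies stay consistent
        self._insert(key, value)
        self.backing[key] = value

    def _insert(self, key, value):
        if key not in self.cache and len(self.cache) >= self.capacity:
            self.cache.pop(next(iter(self.cache)))   # evict oldest entry
        self.cache[key] = value
```

A write-back policy would defer the `self.backing[key] = value` update until eviction; write-through trades extra writes for simpler consistency, which is exactly the multi-level consistency concern raised above.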





Process States
As just discussed, for a program to be executed, a process (or task) is created for that program. From the processor's point of view, it executes instructions from its repertoire in a sequence determined by the changing values of the program counter register. Over time the program counter may refer to code in different programs that belong to different processes. From the standpoint of an individual program, execution involves the sequence of instructions within that program. We can characterize the behavior of an individual process by listing the sequence of instructions that execute for it; such a list is called the trace of the process. We can characterize the behavior of the processor by showing how the traces of the various processes are interleaved. Assume, for example, that the OS allows a process to continue executing instructions for a maximum of six cycles, after which it is interrupted; this prevents any single process from monopolizing processor time. The first six instructions of a process are executed, followed by a time-out and the execution of some dispatcher code, before the next six instructions are carried out.

There are two common process-state models: the two-state model and the five-state model.

Two-State Process Model

A process can be in one of two states:
- Running: the program's instructions are currently being executed.
- Not-running: the program exists but is not executing; it is being prepared for execution.
  
Five-State Process Model

The five states in this model are:
• New: the process is being created.
• Running: instructions are being executed.
• Waiting: the process is waiting for some event to occur (such as completion of an I/O operation or receipt of a signal).
• Ready: the process is waiting to be assigned to a processor.
• Terminated: the process has finished executing its task.

The state diagram describing these states and their transitions is shown in the figure below:
 
The figure above shows the possible transitions between states, including the following:
• Null - New: a new process is created to execute a program, for example when an event occurs or the program is started.
• New - Ready: the OS moves a new process to Ready when it is prepared to take on an additional process. Most systems set limits on the number of existing processes or the amount of virtual memory committed to processes.
• Ready - Running: when it is time to run a process, the OS chooses one of the processes in the Ready state. This is the job of the scheduler or dispatcher.
• Running - Exit: the running process is terminated by the OS when it signals that it has completed, or when it aborts.
• Running - Ready: the most common reason for this transition is that the running process has reached the maximum time allowed for uninterrupted execution; almost all multiprogramming operating systems impose such a time limit.
• Running - Blocked: a process is blocked when it requests something for which it must wait. Such requests to the OS usually take the form of system-service calls: calls from the running program to a procedure that is part of the operating-system code.
• Blocked - Ready: a blocked process is moved to Ready when the event it has been waiting for occurs.
• Ready - Exit: for clarity, this transition is not shown in the diagram. In some systems a parent may terminate a child process at any time; also, if the parent terminates, all processes associated with it may be terminated.
• Blocked - Exit: the comments under the previous item apply here as well.
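The transitions above can be sketched as a transition table — an illustrative encoding with made-up event names, not any particular OS's internals:

```python
# Five-state model as a (state, event) -> state table.
TRANSITIONS = {
    ("new", "admit"): "ready",
    ("ready", "dispatch"): "running",
    ("running", "timeout"): "ready",
    ("running", "event_wait"): "waiting",
    ("waiting", "event_occurs"): "ready",
    ("running", "exit"): "terminated",
}

def next_state(state, event):
    """Apply one event to a process state; illegal moves raise an error."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"illegal transition: {event} in state {state}")
```

Following the table, a process admitted to Ready, dispatched, timed out, dispatched again, blocked on an event, unblocked, dispatched once more, and exiting passes through exactly the transitions listed above.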


Critical-Section Problems and Solutions; Preemptive and Non-Preemptive Scheduling
 
CPU scheduling decisions may take place when a process:
1. Switches from the running to the waiting state.
2. Switches from the running to the ready state.
3. Switches from the waiting to the ready state.
4. Terminates.

Preemptive scheduling refers to the operating system's ability to suspend a running process to make room for a higher-priority process. It may involve scheduling of processes or of I/O. Preemptive scheduling allows the system to better ensure that each process gets its time slice, and it also makes the system respond more quickly to external events (such as incoming data) that require a fast reaction from one or more processes. This makes a preemptively scheduled system more responsive than one using non-preemptive scheduling. Processes can be grouped into two categories: processes with very long I/O bursts, called I/O-bound, and processes with very long CPU bursts, called CPU-bound. A system can also enter a condition called busy-waiting, in which it waits for an input request (from disk, keyboard, or network, say); while busy-waiting, a process does nothing productive yet consumes CPU resources. Preemptive scheduling can avoid this. In other words, preemptive scheduling involves an interrupt mechanism that suspends the running process and forces the system to decide which process should execute next. Scheduling that occurs only under conditions 1 and 4 is non-preemptive; otherwise it is preemptive. The schedulers in today's common operating systems are typically preemptive. Some kernels, e.g. Linux 2.6, even make their own system calls preemptible (a preemptible kernel).
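Preemptive time-slicing as described can be sketched as a simple round-robin scheduler. Process names and burst lengths below are illustrative:

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> remaining CPU time. Returns the order in
    which processes get the CPU, one entry per time slice."""
    queue = deque(bursts)               # ready queue, FIFO
    remaining = dict(bursts)
    schedule = []
    while queue:
        name = queue.popleft()
        schedule.append(name)           # process runs for one quantum
        remaining[name] -= quantum
        if remaining[name] > 0:         # time-out: back to the ready queue
            queue.append(name)
    return schedule
```

With bursts A=3, B=2, C=1 and a quantum of 1, the CPU rotates A, B, C, A, B, A: each time-out sends the preempted process to the back of the ready queue, exactly the Running - Ready transition discussed earlier.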
Windows 95, Windows XP, Linux, Unix, AmigaOS, Mac OS X, and Windows NT are some examples of operating systems that implement preemptive scheduling. The length of time a process is allowed to run under preemptive scheduling is called the time slice or quantum. The scheduler runs once every time slice to choose the next process to run. If the time slice is too short, the scheduler consumes too much processing time; if it is too long, processes may be unable to respond to external events as quickly as expected.

Non-Preemptive Scheduling

Non-preemptive scheduling is a type of scheduling in which the operating system never context-switches away from the running process to another process: the running process cannot be interrupted. A non-preemptive scheduling decision occurs only when a process:
1. Switches from the running state to the waiting state.
2. Terminates.

This means the CPU stays with a process until that process moves to the waiting state or stops (the process is never disturbed). This method was used by Microsoft Windows 3.1 and the early Macintosh. It can be used on particular hardware platforms because it requires no special hardware (e.g. the timer needed to interrupt processes under preemptive scheduling).

Critical-Section Solutions in Software

These use algorithms whose correctness does not depend on any assumptions other than that each process runs at a nonzero speed. The algorithms follow:

Algorithm I

Algorithm I tries to solve the critical-section problem for two processes. It applies a turn-taking scheme to the two processes that want to execute their critical sections, so the two processes must alternate in using the critical section.

Figure 1 Algorithm I



This algorithm uses a variable called turn, which determines which process is allowed to enter the critical section and access the shared data. Initially turn is set to 0, meaning P0 may access the critical section. If turn = 0 and P0 wants to use the critical section, it may do so. When it finishes executing its critical section, P0 sets turn to 1, meaning it is P1's turn and P1 may now access the critical section. If turn = 1 and P0 wants the critical section, P0 must wait until P1 finishes using the critical section and sets turn back to 0. While a process waits, it sits in a loop in which it repeatedly checks the turn variable until its own turn arrives; this is called busy waiting. Busy waiting should normally be avoided because it consumes CPU time, but in this case it is acceptable because the wait is usually short. A problem arises in this algorithm when it is one process's turn but that process does not use its turn while the other process wants access. Suppose turn = 1 and P1 does not use its turn; turn then remains 1. If P0 now wants the critical section, it must wait until P1 uses the critical section and sets turn to 0. This violates the progress requirement: P0 cannot enter the critical section even though no one is using it, and it must wait for P1 to finish its non-critical section and pass through the critical section again. The bounded-waiting requirement is violated too: if it is P1's turn but P1 has finished executing all its code and terminates, there is no guarantee that P0 can ever access the critical section; P0 may wait forever.

Algorithm II

Algorithm II also tries to solve the critical-section problem for two processes.
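Returning to Algorithm I, its strict alternation can be sketched as follows. This is a single-step simulation rather than real threads, so the busy-wait loop is modeled as a check instead of an actual spin:

```python
turn = 0  # shared: whose turn it is (0 -> P0, 1 -> P1)

def try_enter(process_id):
    """Return True if process_id may enter its critical section now;
    otherwise the process would busy-wait on turn."""
    return turn == process_id

def leave(process_id):
    """Exit the critical section and hand the turn to the other process."""
    global turn
    turn = 1 - process_id
```

Initially only P0 may enter; after P0 leaves, only P1 may — and if P1 never takes its turn, P0 is stuck, which is exactly the progress failure described above.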
This algorithm anticipates the problem that arose in Algorithm I by replacing the turn variable with flag variables. The flag variables record which processes want to enter the critical section: a process that needs the critical section sets its flag to true, while a process that does not need it leaves its flag false.

Figure 2 Algorithm II
 
A process is allowed to access the critical section if the other process does not need it, that is, if the other process's flag is false. If the other process does need the critical section (indicated by its flag being true), the process must wait and "let" the other process use the critical section. So before entering the critical section, a process first looks at the other process (through its flag) to see whether that process needs the critical section. Initially the flags of both processes are set to false, meaning neither process needs the critical section. If P0 wants to access the critical section, it sets flag[0] to true. P0 then checks whether P1 also needs the critical section: if flag[1] is false, P0 uses the critical section; if flag[1] is true, P0 must wait until P1 finishes with the critical section and sets flag[1] back to false. A problem arises when both processes want the critical section at the same time: each sets its own flag to true (P0 sets flag[0] = true, P1 sets flag[1] = true). P0 then checks whether P1 needs the critical section, sees flag[1] = true, and waits for P1 to finish with the critical section. But at the same moment P1 checks whether P0 needs the critical section, sees flag[0] = true, and likewise waits for P0. The two processes wait for each other, each "inviting" the other to access the critical section, with the result that neither ever enters it. This shows that Algorithm II fails the progress and bounded-waiting requirements: the deadlock persists, and both processes may wait forever to access the critical section.

Algorithm III

Algorithm III was devised by G. L. Peterson in 1981 and is also known as Peterson's algorithm. Peterson found a simple way to coordinate the processes so that they achieve mutual exclusion. This algorithm solves the critical-section problem for two processes. Its idea is to combine the shared variables of Algorithm I and Algorithm II, namely the turn variable and the flag variables. As in Algorithms I and II, turn indicates which process's turn it is to enter the critical section, and flag indicates whether a process wants access to the critical section.

Figure 3 Algorithm III
 
Initially the flags of both processes are set to false, meaning neither process needs the critical section. When a process wants to enter the critical section, it sets its flag to true (to signal that it needs the critical section) and then gives the turn to its opponent. If the opponent does not want the critical section (its flag is false), the process may use the critical section, and when finished it sets its flag back to false. But if the opponent also wants the critical section, the opponent enters first, and the process must wait until the opponent finishes its critical section and sets its flag to false. Suppose P0 needs the critical section: it sets flag[0] = true and then sets turn = 1. If P1 has flag[1] = false, then (regardless of turn) P0 may access the critical section. If P1 also needs the critical section, then because flag[1] = true and turn = 1, P1 enters the critical section first, and P0 must wait until P1 completes its critical section and sets flag[1] = false, after which P0 may access the critical section. What if both processes need the critical section at the same time? Which one gets in first? If P0 and P1 arrive together, each sets its own flag to true (flag[0] = true and flag[1] = true); in this situation P0 may set turn = 1 and P1 may set turn = 0. The process that enters the critical section first is the one that wrote turn first, since the other's later write gives the turn back to it. Say P0 sets turn = 1 first and P1 then sets turn = 0: since the final value of turn is 0, P0 is the one that accesses the critical section first, and P1 must wait. Algorithm III thus meets all three required conditions.
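Peterson's algorithm as described can be run directly with two threads incrementing a shared counter. The variable names (flag, turn) follow the text; the counter workload and iteration count are illustrative, and note that CPython's GIL already serializes bytecodes, so this demonstrates the protocol rather than proving it on weaker memory models:

```python
import threading

flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the critical section
ITERS = 10000

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(ITERS):
        flag[i] = True          # announce intent
        turn = other            # politely give the turn away
        while flag[other] and turn == other:
            pass                # busy-wait while the opponent has priority
        counter += 1            # critical section
        flag[i] = False         # leave the critical section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
```

Both threads complete all their increments without a lost update, so the final counter equals 2 * ITERS.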
The progress and bounded-waiting requirements that are not met by Algorithms I and II are met by this algorithm: whenever some process wants to access the critical section and no one is using it, some process is certain to get in, and no process needs to wait forever to enter the critical section.

The Bakery Algorithm

The bakery algorithm is a solution to the critical-section problem for n processes. It is also known as Lamport's bakery algorithm. Its idea is to use a scheduling principle like the one used at a bakery counter: customers who want to buy bread must first take a queue number, and the order in which they are served is determined by their numbers. The algorithm determines which process may access the critical section in the same way: the processes play the role of the n customers, and each process that needs the critical section takes a number that determines when it may enter. The numbers are issued in increasing order but, just as with queue numbers at a bakery, there is no guarantee that every process receives a distinct number. To resolve ties, another parameter is used: the process ID. Since each process has a unique, ordered process ID, it is certain that only one process at a time can access the critical section. The process with the smallest number accesses the critical section first; if several processes hold the same number, the one with the smallest ID is served first.

Conclusion:

A critical-section solution must meet the following three conditions:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting

Algorithms I and II prove unable to solve the critical-section problem for two processes because they fail the progress and bounded-waiting requirements.
The algorithm that solves the critical-section problem for two processes is Algorithm III, while the critical-section problem for n processes can be solved with the bakery algorithm.

Critical-Section Solutions in Hardware

These depend on certain machine instructions, for example disabling interrupts or atomically locking a particular variable.
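The bakery algorithm's ticket-drawing and tie-breaking rule can be sketched as follows — a single-threaded simulation of the ordering logic, with the thread-based spinning omitted for brevity:

```python
def draw_ticket(numbers, i):
    """Process i takes a queue number one higher than any outstanding one.
    numbers[j] == 0 means process j is not waiting."""
    numbers[i] = max(numbers) + 1

def may_enter(numbers, i):
    """Process i may enter when no other waiting process has a smaller
    (number, pid) pair -- the pid breaks ties between equal numbers."""
    return all(
        numbers[j] == 0 or (numbers[i], i) <= (numbers[j], j)
        for j in range(len(numbers))
        if j != i
    )
```

If process 2 draws a ticket before process 0, process 2 holds the smaller number and enters first; with equal numbers, the smaller process ID wins, matching the serving rule described above.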
