Friday, 14 October 2011

Top 5 Most Feared Hackers in the World


Computers are a strange-but-true kind of tool: through them, everyone in the world can connect with everyone else. Computers have become equipment that practically every company must own, almost all work can now be handled with them, and almost every system is controlled by computer technology. Because so many systems are built on computers, there have always been people who want to understand those systems and find their weaknesses — an activity commonly known as hacking. Here are some of the most feared and respected hackers in the world.

1. Yunus Attsaouly, aka Irhabi 007
 
Who is Irhabi 007? What did he do to make his name a legend? Why did he take on work that was, by any measure, full of risk? And since when has his identity been publicly known?

At first the name Irhabi 007 was known only in cyberspace. "Irhabi" means "terrorist," and 007 refers to the famous fictional British royal secret agent, James Bond. But this Irhabi 007 does not defend the Queen; he fights her. Irhabi 007 was known as someone extremely active online, especially in cyber jihad — said to be online 24 hours straight. He was known as an Al Qaeda sympathizer who was also an expert in computers and the cyber world. Every day he worked on the internet, for instance converting videos, including jihad videos, so they could be displayed on websites. It was obvious that Irhabi 007 had mastered a great deal of information technology. One of his most prominent activities was creating the site Youbomit and becoming administrator of the very prestigious Forum Al Ansar Al Islami, which involved roughly 4,500 members, most of them mujahideen.

No one knows exactly when the name Irhabi 007 began roaming the world of cyber jihad, and the appearance of the name did not immediately reveal where he was — that, too, was part of his skill. What could be detected is that his activity began in 2001 and increased with the start of the American invasion of Iraq in 2003. At that time Irhabi 007 began actively uploading pictures of the Iraq war to the internet. In the same year he began publishing materials, including instructions on how to hack computers; he even collected his methods into a book. Activists and mujahideen in Islamic forums came to know and admire his skill and courage. The enemy — America and its allies — had also begun to notice Irhabi 007, whose name alone was enough to aggravate them.
So the hunt for Irhabi 007 began!

2. Kevin Mitnick (born August 6, 1963)
 
Kevin is known for his headline-making hacking actions in America and was once called "the most wanted computer criminal in United States history." His story has been filmed twice in Hollywood, as 'Takedown' and 'Freedom Downtime'.

Kevin's first hack targeted the Los Angeles public transportation system. After breaking into the 'punch card' system (the reader for bus subscription cards), he could ride any bus for free. His next action was to break into the telephone system, which let him use long-distance calling for free.

After getting to know computers, Kevin went on to hack:
- the DEC (Digital Equipment Corporation) system
- an IBM minicomputer at the Computer Learning Center of Los Angeles
- systems at Motorola, NEC, Nokia, Sun Microsystems, and Fujitsu Siemens
- and to fool the FBI

Kevin was finally caught and served a five-year prison sentence. He was only released in 2000, and once free he was not allowed to use telecommunication devices or phones until 2003. After Kevin sued for his rights in court, he was finally allowed to use communication tools and computers again. Kevin currently works as a computer security consultant.

3. Adrian Lamo (born 1981)
 
Adrian is a journalist and a so-called 'gray hat' hacker (able to be good or evil), known mainly for breaking into computer networks with tight security. He became famous after penetrating the computer systems of The New York Times in 2002 and of Microsoft. He was also known for being able to identify security flaws in the networks of Fortune 500 companies and then telling those companies about the weaknesses and gaps he found.

The New York Times case was investigated by the FBI for 15 months after the paper reported that its systems had been hacked, and in 2003 Adrian was identified as the culprit. He went into hiding for a few days and finally surrendered to the FBI in 2004. In the end Adrian was sentenced to house arrest at his parents' home plus two years of probation, with fines of about $65,000. Adrian is also believed to have tried to break into the computer systems of Yahoo!, Sun Microsystems, Bank of America, and Citigroup by exploiting existing security gaps.

4. Jonathan James (born December 12, 1983)
 
James was the youngest American convicted of cybercrime. Barely 16 years old, he was sent to prison for hacking the U.S. Department of Defense website. He confessed that he hacked for the challenge and the fun of it. NASA also felt the effects of his mischief: in 1999 James managed to steal (download) NASA software estimated to be worth $1.7 million. NASA was forced to shut down its servers and systems for three weeks as a result of his actions, and afterwards had to spend around $41,000 to repair the compromised systems.

Six months after hacking NASA, James was arrested at his home by local police at 6 a.m. He served a six-month prison sentence because he was still a minor, plus house arrest and probation until he reached the age of 21. James was barred from interacting with computers all that time. James died on May 18, 2008; there is no information on what caused his death.







5. Christopher Martin (born 10 March 1993)
 
Christopher Martin is an Indonesian citizen, known since he managed to steal data and break into the Apollo (U.S.) software system. He was arrested at his residence at a moment when he was off guard; in fact the American secret police (FBI) had been investigating him for 24 months. The crimes he committed include:
- stealing the Apollo (U.S.) software system, valued at an estimated $2.8 million; because of his actions Apollo suffered losses so large that it had to shut down its servers
- breaking into Facebook, Twitter, Friendster, MySpace, and e-mail account data
- the DEC (Digital Equipment Corporation) system
- an IBM computer at the Computer Learning Center of Los Angeles
- hacking systems of Motorola, NEC, Nokia, Sun Microsystems, Fujitsu Siemens, Sony Ericsson, BlackBerry, Blueberry, Alcatel, Acer programs, Toshiba applications, Yahoo!, Bank of America, Citigroup, Microsoft applications, Poker Games, and Point Blank
- fooling the FBI and the governments of Indonesia, America, Europe, and Africa

Because of his deeds, the FBI brought him to America to face the law. After successive intervals of jail time he was released, and he now works as a professional programmer at the largest embassy in the U.S.

Even in an era as sophisticated as this one, there are probably millions of people wrestling with the world of hacking. Many now work in groups, or have joined communities that constantly try to find the weaknesses of the systems being created, with different goals depending on what they want.

CPU scheduling


CPU scheduling is the basis of multiprogramming operating systems: by switching the CPU among processes, the operating system can make the computer productive. In this chapter we introduce the basic concepts of scheduling and several scheduling algorithms, and we also present the problem of choosing an algorithm for a particular system.

Basic Concepts

The purpose of multiprogramming is to have processes running at all times, to maximize CPU utilization. On a uniprocessor system there is never more than one running process; if there is more than one process, the others must wait until the CPU is free.

The idea of multiprogramming is very simple. When one process is executing, the others must wait until it finishes. In a simple computer system the CPU would otherwise spend much of its time idle, and all that time is wasted. With multiprogramming we try to use this time productively: several processes are kept in memory at one time, and when one process has to wait, the operating system takes the CPU from it and gives the CPU to another process.

Scheduling is a fundamental function of an operating system. Almost all computer resources are scheduled before use, and the CPU is one of the primary computer resources, which makes CPU scheduling central to operating-system design.

CPU-I/O Burst Cycle

The success of CPU scheduling depends on certain properties of processes. Process execution consists of a cycle of CPU execution and I/O wait, and a process alternates back and forth between these two states. Execution begins with a CPU burst, followed by an I/O burst, and so on in turn. The durations of these CPU bursts have been measured extensively; although they vary greatly from process to process, they tend to have a frequency curve like the one shown below.
 
CPU Scheduler

Whenever the CPU becomes idle, the operating system must select one process from the ready queue to be executed. The selection is carried out by the short-term scheduler, which chooses from among the processes in memory that are ready to execute and allocates the CPU to one of them.

CPU scheduling decisions may take place when a process:
1. switches from running to the waiting state;
2. switches from running to the ready state;
3. switches from waiting to ready;
4. terminates.

Scheduling only under circumstances 1 and 4 is nonpreemptive; otherwise it is preemptive. Under nonpreemptive scheduling, once the CPU has been allocated to a process it cannot be taken away; this model was used by Windows 3.x, whereas Windows 95 uses preemptive scheduling.

Dispatcher

Another component involved in CPU scheduling is the dispatcher, the module that gives control of the CPU to the selected process. Its functions are:
1. switching context;
2. switching to user mode;
3. jumping to the proper location in the user program to resume that program.

The dispatcher should be as fast as possible.

Scheduling Criteria

Different CPU scheduling algorithms have different properties, and in choosing an algorithm for a particular situation we must consider those properties. Many criteria have been suggested for comparing CPU scheduling algorithms. The criteria commonly used include:
1. CPU utilization: we want to keep the CPU as busy as possible. CPU utilization ranges from 0 to 100 percent; in a real system it should range from about 40 percent to 90 percent.
2. Throughput: if the CPU is busy executing processes, work is being done. One measure of that work is the number of processes completed per unit time, called throughput. For long processes this may be one process per hour; for short processes it may be ten processes per second.
3. Turnaround time: from the point of view of a particular process, an important criterion is how long it takes to execute that process.
The interval from the time a process is submitted to the time it is completed is called the turnaround time. Turnaround time is the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4. Waiting time: the CPU scheduling algorithm does not affect the time a process spends executing or doing I/O; it affects only the amount of time spent in the ready queue. Waiting time is the sum of the periods spent waiting in the ready queue.
5. Response time: in an interactive system, turnaround time may not be the best criterion. Often a process can produce output early and continue computing while earlier results are delivered to the user. Another measure, then, is the time from the submission of a request until the first response is produced. This is called response time: the time it takes to start responding, not the time spent outputting the response.

We usually want to maximize CPU utilization and throughput and to minimize turnaround time, waiting time, and response time; in most cases we optimize the average.

Scheduling Algorithms

CPU scheduling deals with the problem of deciding which process should be executed, and accordingly there are many different scheduling algorithms; in this section we describe some of them.

First Come, First Served

This is the simplest algorithm: the process that requests the CPU first is served first. An FCFS implementation is easily managed with a FIFO queue. Suppose, for example, that processes arrive in the order P1, P2, P3, and a Gantt chart shows the schedule; now suppose the order is reversed, so the arrival order is P3, P2, P1. Comparing the two cases, the second can be better than the first, because the result depends heavily on arrival order. Beyond that, FCFS has a drawback known as the convoy effect: if a small process queues behind a process that requires a long time, it must wait for the long process to finish executing.

The FCFS scheduling algorithm is nonpreemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it.
The FCFS algorithm is clearly a problem for time-sharing systems, where it is important for users to share the CPU at regular intervals; it would be disastrous to allow one process to hold the CPU for an unlimited time.

Shortest Job First Scheduling

Another algorithm is Shortest Job First (SJF), which considers the CPU time each process needs. When the CPU is free, the process with the shortest time to complete gets priority. If two or more processes have the same time, the FCFS algorithm is used to break the tie.
 
There are two schemes in SJF:
1. nonpreemptive - once the CPU is given to a process, it cannot be preempted until its CPU burst completes;
2. preemptive - if a new process arrives with a CPU burst shorter than the remaining time of the process currently executing, the new process preempts it. This scheme is also called Shortest-Remaining-Time-First (SRTF).

The SJF algorithm is probably the most optimal, because it gives the minimum average waiting time for a given set of processes: by executing the shortest jobs first and the longest last, the average waiting time decreases.

The difficulty with SJF is knowing the length of the next CPU request. For long-term scheduling in a batch system, we can use the processing-time limit a user specifies when submitting the job; for this reason SJF is often used in long-term scheduling. Although SJF is optimal, it cannot be used directly for short-term CPU scheduling, since there is no way to know the length of the next CPU burst. One way to implement it is to predict the next CPU burst.
 
Priority Scheduling

SJF (Shortest Job First) scheduling is a special case of the priority scheduling algorithm. A priority is associated with each process, and the CPU is allocated to the process with the highest priority; equal-priority processes are handled FCFS.

The priority scheduling algorithm is as follows:
• Each process has a priority (an integer). Some systems use small integers for low-priority processes, and other systems use small integers for high-priority processes; in this text we assume that a small integer means top priority.
• The CPU is allocated to the process with the highest priority (smallest integer = highest priority).
• The algorithm has two schemes:
1. Preemptive: the running process can be interrupted if a higher-priority process needs the CPU.
2. Nonpreemptive: a higher-priority process replaces the current one only when its time slice runs out.
• SJF is an example of priority scheduling in which the priority is determined by the predicted next CPU burst.

The problem that arises in priority scheduling is indefinite blocking, or starvation:
• a low-priority process may never be executed.

The solution for the priority scheduling algorithm is aging:
• the priority of a process rises the longer the process waits for a CPU allocation.

Round Robin Scheduling

The Round Robin (RR) algorithm is designed for time-sharing systems. It is similar to FCFS scheduling, but preemption is added to switch between processes. The ready queue is treated as a circular queue: the CPU cycles around the ready queue, allocating each process the CPU for an interval of up to one time slice (quantum).

The Round Robin scheduling algorithm is as follows:
• Each process gets a small unit of CPU time (a time slice or quantum), usually 10-100 milliseconds.
1. After the time quantum elapses, the process is preempted and moved to the tail of the ready queue.
2. This policy is fair and very simple.
• If there are n processes in the ready queue and the time quantum is q (milliseconds), then:
1. each process gets 1/n of the CPU time;
2. no process waits more than (n-1)q time units.
• The performance of this algorithm depends on the size of the time quantum:
1. with a large time quantum it becomes the same as FCFS;
2. with a small time quantum, context switches become frequent, so the quantum should be sized large with respect to the context-switch cost, or the overhead becomes too great.
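The context-switch trade-off in point 2 can be made concrete: if every switch costs s milliseconds, roughly a fraction s / (q + s) of CPU time is lost to switching. A quick calculation (a sketch; the 0.1 ms switch cost is an assumed figure for illustration only):

```python
def switch_overhead(quantum_ms, switch_cost_ms):
    """Fraction of CPU time lost to context switching when each
    quantum of useful work incurs one switch."""
    return switch_cost_ms / (quantum_ms + switch_cost_ms)

for q in [1, 10, 100]:
    # assumed context-switch cost: 0.1 ms (illustrative value)
    print(q, round(switch_overhead(q, 0.1), 4))
# 1 ms quantum   -> about 9% overhead
# 10 ms quantum  -> about 1% overhead
# 100 ms quantum -> about 0.1% overhead
```

This is why the quantum is chosen large relative to the switch cost but small enough to keep the system responsive.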






 
CPU scheduling
Highlights:
• Basic Concepts
• Scheduling Criteria
• Scheduling Algorithms
LEARNING OBJECTIVES:
After studying the material in this chapter, students should be able to:
• understand the basic concepts of CPU scheduling;
• understand the criteria used for CPU scheduling;
• understand several CPU scheduling algorithms: First Come First Served, Shortest Job First, Priority, and Round Robin.
4.1 BASIC CONCEPTS

In a multiprogramming system there are always several processes running at a time, while in a uniprogramming system this cannot happen, because only one process runs at any given moment. Multiprogramming is needed to maximize CPU utilization.

Process execution alternates between cycles of CPU execution and waiting for I/O; this is called the CPU-I/O burst cycle. The execution of a process begins with a CPU burst, followed by an I/O burst, followed by another CPU burst, then another I/O burst, and so on, as in Figure 4-1.
When a process is executed, there are many short CPU bursts and a few long CPU bursts. An I/O-bound program usually has very short CPU bursts, while a CPU-bound program is likely to have long ones. This can be illustrated with an exponential or hyperexponential curve, as in Figure 4-2. The choice of CPU scheduling algorithm is therefore very important.
4.1.1 CPU Scheduler

Whenever the CPU is idle, the operating system must select one of the processes in main memory (the ready queue) for execution and allocate the CPU to it. This selection is made by the short-term scheduler (CPU scheduler). CPU scheduling decisions follow the four circumstances below:
1. a process switches from the running to the waiting state;
2. a process switches from the running to the ready state;
3. a process switches from the waiting to the ready state;
4. a process terminates.

If scheduling takes place only under circumstances 1 and 4, the scheduling is called non-preemptive; if circumstances 2 and 3 are also used, it is called preemptive.

Under non-preemptive scheduling, if a process is using the CPU, that process keeps the CPU until it releases it (by terminating or by entering the waiting state). Preemptive scheduling has a disadvantage, namely cost: shared data must constantly be kept consistent, which matters when one process is preempted while another process is about to run.
4.1.2 Dispatcher

The dispatcher is the module that gives control of the CPU to the process selected during short-term scheduling. Its functions include:
1. context switching;
2. switching to user mode;
3. jumping to the proper location in the user program to restart that program.

The time the dispatcher needs to stop one process and start running another is called the dispatch latency.
4.2 SCHEDULING CRITERIA

Different CPU scheduling algorithms have different properties, so before choosing an algorithm its properties must be considered. Several criteria are used to compare CPU scheduling algorithms, among them:
1. CPU utilization. We want the CPU to be kept as busy as possible. CPU utilization is expressed as a percentage from 0 to 100%, but in reality it usually ranges between 40% and 90%.
2. Throughput. The number of processes completed per unit of time.
3. Turnaround time. The total time needed to execute a process, from waiting for a place in main memory, through waiting in the ready queue, execution by the CPU, and doing I/O.
4. Waiting time. The time a process spends waiting in the ready queue. Waiting time does not include execution time or I/O time.
5. Response time. The time from when a process submits a request until the first response to that request is produced.
6. Fairness. Ensuring that each process gets an open, fair share of CPU time.

 
4.3 SCHEDULING ALGORITHMS

CPU scheduling involves determining which of the processes in the ready queue will be allocated the CPU. Several CPU scheduling algorithms are described in the sections below.

First-Come First-Served Scheduling (FCFS)

The process that first requests the CPU is served first: with this scheme, the process that asks for the CPU first is allocated the CPU first. For example, suppose three processes arrive in the order P1, P2, P3, with CPU burst times in milliseconds as follows:

Process | Burst Time
P1      | 24
P2      | 3
P3      | 3

The Gantt chart under FCFS scheduling is:

| P1                     | P2  | P3  |
0                        24    27    30

The waiting time is 0 for P1, 24 for P2, and 27 for P3, so the average waiting time is (0 + 24 + 27) / 3 = 17 milliseconds. If instead the processes arrive in the order P2, P3, P1, the CPU schedule is shown by the following Gantt chart:

| P2  | P3  | P1                     |
0     3     6                        30

The waiting time is now 6 for P1, 0 for P2, and 3 for P3, so the average waiting time is (6 + 0 + 3) / 3 = 3 milliseconds — much better than in the previous case. FCFS scheduling also makes a convoy effect possible, in which short processes pile up behind one long process.

The FCFS algorithm is non-preemptive: once the CPU has been allocated to a process, that process keeps the CPU until it releases it, either by terminating or by requesting I/O.
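The FCFS arithmetic above can be checked with a short simulation (a sketch; the process order and burst times are taken from the example):

```python
def fcfs_waiting_times(bursts):
    """Return per-process waiting times under FCFS.

    Each process waits for the total burst time of everything
    ahead of it in the queue (all processes arrive at time 0).
    """
    waits, elapsed = [], 0
    for burst in bursts:
        waits.append(elapsed)  # time already spent in the ready queue
        elapsed += burst
    return waits

# Arrival order P1, P2, P3 with bursts 24, 3, 3:
waits = fcfs_waiting_times([24, 3, 3])
print(waits, sum(waits) / len(waits))   # [0, 24, 27] 17.0

# Reversed arrival order P2, P3, P1:
waits = fcfs_waiting_times([3, 3, 24])
print(waits, sum(waits) / len(waits))   # [0, 3, 6] 3.0
```

The same three bursts give an average wait of 17 ms or 3 ms depending purely on arrival order, which is exactly the sensitivity the text describes.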


 
  
Shortest Job First Scheduling (SJF)

Under SJF scheduling, the process with the smallest CPU burst is served first. There are two schemes:
1. Non-preemptive: once the CPU is given to a process, it cannot be preempted until its CPU burst is complete.
2. Preemptive: if a new process arrives with a CPU burst shorter than the remaining time of the process currently executing, the running process is preempted and replaced by the new one. This scheme is called Shortest-Remaining-Time-First (SRTF).

SJF is an optimal scheduling algorithm in that it gives the minimum average waiting time. For example, consider four processes with CPU burst lengths in milliseconds:

Process | Arrival Time | Burst Time
P1      | 0.0          | 7
P2      | 2.0          | 4
P3      | 4.0          | 1
P4      | 5.0          | 4

The schedule under the SJF algorithm (non-preemptive) is shown in the following Gantt chart:

| P1            | P3 | P2       | P4       |
0               7    8          12         16

The waiting time for P1 is 0, for P2 it is 6, for P3 it is 3, and for P4 it is 7, so the average waiting time is (0 + 6 + 3 + 7) / 4 = 4 milliseconds. The schedule under the SRTF algorithm (preemptive) is shown in the Gantt chart below:

| P1  | P2  | P3 | P2  | P4       | P1            |
0     2     4    5     7          11              16

The waiting time for P1 is 9, for P2 it is 1, for P3 it is 0, and for P4 it is 2, so the average waiting time is (9 + 1 + 0 + 2) / 4 = 3 milliseconds.
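A minimal SRTF simulator reproduces these waiting times (a sketch, stepping one millisecond at a time; the arrival/burst table comes from the example above):

```python
def srtf_waiting_times(procs):
    """procs: list of (arrival, burst). Returns waiting time per process.

    At each time step, the process with the shortest remaining time
    among those that have arrived is run (preemptive SJF / SRTF).
    """
    remaining = [burst for _, burst in procs]
    finish = [0] * len(procs)
    t = 0
    while any(r > 0 for r in remaining):
        # processes that have arrived and still need CPU time
        ready = [i for i, (arrival, _) in enumerate(procs)
                 if arrival <= t and remaining[i] > 0]
        if not ready:
            t += 1
            continue
        i = min(ready, key=lambda j: remaining[j])
        remaining[i] -= 1
        t += 1
        if remaining[i] == 0:
            finish[i] = t
    # waiting time = finish - arrival - burst
    return [finish[i] - a - b for i, (a, b) in enumerate(procs)]

waits = srtf_waiting_times([(0, 7), (2, 4), (4, 1), (5, 4)])
print(waits, sum(waits) / 4)   # [9, 1, 0, 2] 3.0
```

Dropping the preemption (running each chosen process to completion) would instead give the non-preemptive waits 0, 6, 3, 7 from the first Gantt chart.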
  
Although this algorithm is optimal, in reality it is difficult to implement because it is hard to know the length of the next CPU burst. The value can, however, be predicted: the next CPU burst is usually predicted as an exponential average of the measured lengths of previous CPU bursts:

    τ(n+1) = α · t(n) + (1 − α) · τ(n)        (4.1)

with:
t(n)   = length of the n-th (most recent) CPU burst
τ(n)   = the previous prediction (the stored history)
τ(n+1) = the predicted length of the next CPU burst
α      = a weight between 0 and 1 that sets how much the most recent burst counts relative to the history

A graph of the predicted CPU bursts can be seen in Figure 4-3. For example, if α = 0.5, and:




 
    measured CPU bursts  t(n) = 6, 4, 6, 4, 13, 13, 13, ...
    predictions          τ(n) = 10, 8, 6, 6, 5, 9, 11, 12, ...

Initially t(0) = 6 and τ(0) = 10, so:

    τ(1) = 0.5 × 6 + (1 − 0.5) × 10 = 8

That value is then used to find τ(2):

    τ(2) = 0.5 × 4 + (1 − 0.5) × 8 = 6
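The whole prediction sequence can be generated in a few lines (a sketch; the burst values and α = 0.5 come from the example above):

```python
def exponential_average(bursts, tau0, alpha=0.5):
    """Predict successive CPU bursts with the exponential average
    tau(n+1) = alpha * t(n) + (1 - alpha) * tau(n).
    Returns the list of predictions tau(0), tau(1), ..."""
    taus = [tau0]
    for t in bursts:
        taus.append(alpha * t + (1 - alpha) * taus[-1])
    return taus

preds = exponential_average([6, 4, 6, 4, 13, 13, 13], tau0=10)
print(preds)   # [10, 8.0, 6.0, 6.0, 5.0, 9.0, 11.0, 12.0]
```

With α = 0.5 each prediction is simply the midpoint of the last measured burst and the previous prediction, which is why the sequence converges toward 13 once the bursts settle there.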

 
Priority Scheduling

The SJF algorithm is a special case of priority scheduling. Each process is given a priority number (an integer), and the CPU is allocated to the process with the highest priority (usually the smallest integer value is the highest priority). If several processes have the same priority, the FCFS algorithm is used. Priority scheduling comes in two schemes, non-preemptive and preemptive. Suppose a process P1 arrives while P0 is running and P1's priority is higher than P0's: under the non-preemptive scheme, the algorithm still lets P0 finish its CPU burst and puts P1 at the head of the queue; under the preemptive scheme, P0 is stopped first and the CPU is allocated to P1.

For example, suppose five processes P1, P2, P3, P4, and P5 arrive in that order, with CPU bursts in milliseconds:

Process | Burst Time | Priority
P1      | 10         | 3
P2      | 1          | 1
P3      | 2          | 3
P4      | 1          | 4
P5      | 5          | 2

The schedule under the priority scheduling algorithm can be seen on the Gantt chart below:

| P2 | P5      | P1             | P3   | P4 |
0    1        6                16     18   19
  
The waiting time is 6 for P1, 0 for P2, 16 for P3, 18 for P4, and 1 for P5, so the average waiting time is (6 + 0 + 16 + 18 + 1) / 5 = 8.2 milliseconds.
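The non-preemptive priority schedule for this table can be computed directly (a sketch; all five processes are assumed to arrive at time 0, with ties broken FCFS by list order as the text specifies):

```python
def priority_schedule(procs):
    """procs: list of (name, burst, priority); all arrive at time 0.
    Smaller priority number = higher priority; equal priorities are
    broken in FCFS (original list) order. Returns {name: waiting_time}."""
    # sorted() is stable, so equal priorities keep their FCFS order
    order = sorted(procs, key=lambda p: p[2])
    waits, t = {}, 0
    for name, burst, _ in order:
        waits[name] = t       # start time = waiting time (arrival at 0)
        t += burst
    return waits

procs = [("P1", 10, 3), ("P2", 1, 1), ("P3", 2, 3),
         ("P4", 1, 4), ("P5", 5, 2)]
w = priority_schedule(procs)
print([w[n] for n in ["P1", "P2", "P3", "P4", "P5"]])  # [6, 0, 16, 18, 1]
print(sum(w.values()) / 5)                             # 8.2
```

The stable sort is what implements the FCFS tie-break: P1 and P3 share priority 3, and P1 keeps its earlier position.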

 
Round-Robin Scheduling

The basic concept of this algorithm is time-sharing. It is essentially the same as FCFS, only preemptive: each process gets a slice of CPU time called the time quantum, typically 10-100 milliseconds, which limits its processing time. After the time expires, the process is preempted and added to the tail of the ready queue.

If a process's CPU burst is smaller than the time quantum, the process releases the CPU as soon as it finishes its work, so the CPU can immediately be used by the next process. Conversely, if a process's CPU burst is greater than the time quantum, the process is suspended when it reaches the quantum, queues up again at the tail of the ready queue, and the CPU then runs the next process.

If there are n processes in the ready queue and the time quantum is q, then every process gets 1/n of the CPU time, in chunks of at most q time units at a time, and no process waits more than (n − 1)q time units. The performance of the round robin algorithm can be explained as follows: if q is large, the algorithm reduces to FIFO, but if q is small, context switches become frequent.

Suppose there are three processes, P1, P2, and P3, that require CPU service, with a time quantum of 4 milliseconds:

Process | Burst Time
P1      | 24
P2      | 3
P3      | 3

The schedule under the round robin algorithm can be seen on the Gantt chart below:
  
| P1   | P2  | P3  | P1   | P1   | P1   | P1   | P1   |
0      4     7     10     14     18     22     26     30

The waiting time is 6 for P1, 4 for P2, and 7 for P3, so the average waiting time is (6 + 4 + 7) / 3 = 5.66 milliseconds.

On one hand the Round-Robin algorithm has an advantage, namely its uniform treatment of time; on the other hand, it can switch too often, as shown in Figure 4-4: the greater the time quantum, the fewer switches occur.

Turnaround time also depends on the size of the time quantum. As Figure 4-5 shows, the average turnaround time does not necessarily improve as the time quantum increases. In general, the average turnaround time improves when most processes finish their next CPU burst within a single time quantum. For example, with three processes of 10 time units each and a quantum of 1 time unit, the average turnaround time is 29; if the time quantum is 10 instead, the average turnaround time drops to 20.

Figure 4-4 shows that a smaller time quantum increases context switches.
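The round robin waiting times above can be verified with a small queue simulation (a sketch; the process bursts and the 4 ms quantum come from the example):

```python
from collections import deque

def rr_waiting_times(bursts, quantum):
    """All processes arrive at time 0. Returns the waiting time per
    process, i.e. finish time minus its own burst time."""
    queue = deque(range(len(bursts)))
    remaining = list(bursts)
    finish = [0] * len(bursts)
    t = 0
    while queue:
        i = queue.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i] > 0:
            queue.append(i)   # preempted: back to the tail of the queue
        else:
            finish[i] = t
    return [finish[i] - bursts[i] for i in range(len(bursts))]

waits = rr_waiting_times([24, 3, 3], quantum=4)
print(waits)   # [6, 4, 7], average about 5.67 ms
```

The deque models the circular ready queue directly: a preempted process is appended to the tail, exactly as the text describes.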

Examples in Operating Systems


In this section we discuss some examples of virtual memory use.

Windows NT

Windows NT implements virtual memory using demand paging with clustering. Clustering handles a page fault by bringing in not only the faulting page but also several pages near it. When a process is first created, it is given a working-set minimum, the guaranteed minimum number of pages the process will have in memory. If enough memory is available, a process may be given up to its working-set maximum of pages. The virtual memory manager maintains a list of free page frames, with a threshold value associated with the list to indicate whether available memory is sufficient. If a process has reached its working-set maximum and a page fault occurs, it must choose a replacement page using a local FIFO page replacement policy.

When the amount of free memory falls below the threshold, the virtual memory manager uses a tactic known as automatic working-set trimming to bring the value back above the threshold. It works by evaluating the number of pages allocated to each process: if a process has been allocated more pages than its working-set minimum, the virtual memory manager uses the FIFO algorithm to trim it down to its working-set minimum. If free memory becomes available again, a process running at its working-set minimum can get additional pages.

Solaris 2

In the Solaris 2 operating system, when a process causes a page fault, the kernel gives that process a page from the list of free pages it maintains. The consequence is that the kernel must keep a certain amount of memory free. Two parameters are associated with this list, minfree and lotsfree, the minimum and maximum limits of free memory available. Four times every second, the kernel checks the amount of free memory; if it falls below minfree, a pageout process is started, which works as follows.
The first clock hand sweeps all pages in memory and sets their reference bits to 0. Later, the second clock hand checks the reference bits of the pages in memory and returns those still set to 0 to the free memory list, until the amount of free memory exceeds the lotsfree parameter. Furthermore, the process is dynamic and can adjust its speed if memory is scarce. If this process is unable to free enough memory, the kernel begins swapping out processes to free the pages allocated to them.
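The two-hand pageout described above can be sketched roughly as follows (an illustrative sketch only, not the Solaris implementation; the page count, hand gap, and the set of re-referenced pages are made-up parameters):

```python
def two_hand_clock(n_pages, gap, touched_during_sweep):
    """Simulate one sweep of a two-handed clock over n_pages.

    The front hand clears each page's reference bit; a page touched
    again before the back hand arrives gets its bit set back to 1.
    The back hand, `gap` pages behind, frees pages whose bit is
    still 0. Returns the indices of the reclaimed pages."""
    bits = [1] * n_pages
    freed = []
    for i in range(n_pages + gap):
        if i < n_pages:
            bits[i] = 0                    # front hand clears the bit
        back = i - gap
        if 0 <= back < n_pages:
            if back in touched_during_sweep:
                bits[back] = 1             # page was re-referenced
            if bits[back] == 0:
                freed.append(back)         # back hand reclaims it
    return freed

# 6 pages, hands 2 apart; pages 1 and 4 are used again mid-sweep
print(two_hand_clock(6, 2, touched_during_sweep={1, 4}))
# -> [0, 2, 3, 5]
```

Only the pages referenced between the two hands survive, which is the essence of the policy: the gap between the hands determines how long a page has to prove it is still in use.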
Linux

As in Solaris 2, Linux also uses a variation of the clock algorithm. A kernel thread (kswapd) runs periodically (or is woken when memory usage has gone too far). If the number of free pages falls below an upper threshold, the thread tries to free three pages; if it falls below a lower threshold, the thread tries to free six pages and 'sleeps' for a few moments before running again. When it runs, it examines mem_map, a list of all pages in memory. Each page has an age byte that is initialized to 3. Each time the page is accessed, its age is increased (up to a maximum of 20); each time kswapd examines the page, the age is decreased. If the age of a page reaches 0, the page can be swapped out. When kswapd tries to free pages, it first frees pages from the cache; if that fails it shrinks the file-system cache, and if all else fails it suspends a process.

Memory allocation on Linux uses two main allocators, the buddy algorithm and the slab allocator. In the buddy algorithm, each time the allocation routine is called it checks for a free block of the next suitable size: if one is found it is allocated, otherwise the list at the next level up is checked. If a free block exists there, it is divided into two, one half is allocated, and the other is moved to the list below.
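The split step of the buddy algorithm can be illustrated like this (a simplified sketch, not the Linux implementation; it tracks only counts of free blocks per power-of-two order):

```python
def buddy_alloc(free_lists, order):
    """free_lists: dict mapping block order k to a count of free
    2**k-page blocks. Allocate one block of the requested order,
    splitting a larger block repeatedly if necessary.
    Returns True on success, False if nothing large enough is free."""
    k = order
    # climb to the smallest free block that is big enough
    while k in free_lists and free_lists.get(k, 0) == 0:
        k += 1
    if free_lists.get(k, 0) == 0:
        return False
    free_lists[k] -= 1
    while k > order:          # each split leaves one free buddy behind
        k -= 1
        free_lists[k] = free_lists.get(k, 0) + 1
    return True

# one free 8-page block (order 3); allocate a 1-page block (order 0)
lists = {0: 0, 1: 0, 2: 0, 3: 1}
print(buddy_alloc(lists, 0), lists)
# -> True {0: 1, 1: 1, 2: 1, 3: 0}
```

The 8-page block is split into 4 + 4, one 4 into 2 + 2, one 2 into 1 + 1; one 1-page block is handed out and a free buddy remains at every level below, ready to be merged back when its partner is freed.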

Structure of Computer Systems

There are no fixed rules on how a computer system must be structured; every expert and computer architect has their own view. However, to make it easier to understand the details of operating systems in the following chapters, we need some general knowledge about the structure of computer systems.

Computer System Operations

In general, a computer system consists of a CPU and a number of device controllers connected through a bus that provides access to memory. Generally, each device controller is responsible for a specific piece of hardware. Each device and the CPU can operate concurrently, competing for access to memory. The presence of multiple pieces of hardware can cause synchronization problems; therefore, a memory controller is added to synchronize access to memory.


      
General Computer Architecture
 
In more advanced computer systems, the architecture is more complex. To improve performance, multiple buses are used. Each bus is a data path between several different devices. In this arrangement the RAM, the processor, and the GPU (AGP VGA) are connected by the main high-speed bus, better known as the FSB (Front Side Bus). Other, slower devices are connected by lower-speed buses, which connect in turn to faster buses up to the main bus. For communication between buses, a bridge is used. The responsibility for bus synchronization, which indirectly also affects memory synchronization, falls to a bus controller known as the bus master. The bus master controls the data flow so that at any one time the bus carries data from only one device. In practice the bridge and the bus master are bound together in a chipset.




   
Modern PC Architecture
 
NB: GPU = Graphics Processing Unit; AGP = Accelerated Graphics Port; HDD = Hard Disk Drive; FDD = Floppy Disk Drive; FSB = Front Side Bus; USB = Universal Serial Bus; PCI = Peripheral Component Interconnect; RTC = Real Time Clock; PATA = Parallel Advanced Technology Attachment; SATA = Serial Advanced Technology Attachment; ISA = Industry Standard Architecture; IDE = Intelligent Drive Electronics / Integrated Drive Electronics; MCA = Micro Channel Architecture; PS/2 = a port that IBM built to connect the mouse to the PC.

When the computer is turned on, a step known as booting, the computer runs the bootstrap program, a simple program stored in ROM in the form of a CMOS (Complementary Metal Oxide Semiconductor) chip. Modern CMOS chips are usually of the EEPROM (Electrically Erasable Programmable Read Only Memory) type, a non-volatile memory (its contents are not lost when power is turned off) that can be written and erased with electronic pulses. This bootstrap program is better known as the BIOS (Basic Input Output System). The main bootstrap program, usually located on the motherboard, examines the major hardware and initializes it through programs stored in the hardware itself, known as firmware. The main bootstrap program then finds and loads the operating system kernel into memory, followed by the initialization of the operating system. From here, the operating system waits for certain events; these events determine what the operating system will do next (event-driven). On a modern computer, such an event is usually signaled by a software or hardware interrupt, so the operating system is called interrupt-driven. An interrupt from hardware is usually delivered via a specific signal, while software raises an interrupt by invoking a system call, also known as a monitor call.
This system/monitor call causes a trap, a special interrupt generated by software because of a problem or a request for operating system services. A trap is also often referred to as an exception. Each time an interrupt occurs, a set of code known as an ISR (Interrupt Service Routine) determines the action to be taken. There are two ways to determine what action to take: polling, which makes the computer check each device one by one to find the source of the interrupt, and using ISR addresses stored in an array known as the interrupt vector, which the system consults each time an interrupt occurs. The interrupt architecture must be able to save the address of the interrupted instruction. On older computers, this address was stored in a fixed location; on newer computers, the address is stored on the stack together with the state information at that time.

The structure of I/O

There are two kinds of behavior when an I/O operation is performed:

After I/O starts, control returns to the user program only when the I/O completes (synchronous). A wait instruction causes the CPU to idle until the next interrupt; a wait loop occurs, and at most one I/O operation runs at a time.

After I/O starts, control returns to the user program without waiting for the I/O to complete (asynchronous). A system call allows the user program to ask the operating system to let it wait until the I/O completes.

A device-status table contains an entry for each I/O device describing its type, address, and state. The operating system checks this table to learn the state of a device and updates the table whenever an interrupt occurs. When an I/O device sends or retrieves data to or from memory directly, this is known as DMA (Direct Memory Access).
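The difference between polling and a vectored dispatch can be made concrete with a small sketch. Instead of asking every device in turn, the system indexes a table of handler addresses (the interrupt vector) by interrupt number and jumps straight to the matching ISR. The device names and IRQ numbers below are invented for illustration.

```python
def keyboard_isr():
    return "keyboard ISR ran"

def disk_isr():
    return "disk ISR ran"

# The interrupt vector: interrupt number -> handler (hypothetical numbering).
interrupt_vector = {
    1: keyboard_isr,
    14: disk_isr,
}

def dispatch(irq):
    """Look up the ISR for this interrupt number and run it."""
    isr = interrupt_vector.get(irq)
    if isr is None:
        return "spurious interrupt"
    return isr()

print(dispatch(14))  # disk ISR ran
```

The lookup costs the same no matter how many devices exist, which is why vectored interrupts replaced polling on all but the simplest hardware.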




 
The structure of I / O
 
Direct Memory Access

DMA is used for I/O devices that can move data at high speed (close to the frequency of the memory bus). The device controller moves data in blocks from its buffer storage directly to main memory, or vice versa, without processor intervention. An interrupt occurs only once per block instead of once per word or byte of data. The whole process is controlled by a DMA controller (DMAC), which sends and receives signals from memory and the I/O device. The processor only sends the source address, the destination, and the length of the data to the DMA controller; an interrupt reaches the processor only when the transfer is complete. The right to use the memory bus that the DMA controller needs is obtained with the help of a bus arbiter, which on today's PCs is part of the Northbridge chipset.

Bus

A bus is a data transfer path that connects the devices in a computer. Only one device can transmit data through a bus at a time, but more than one device may read the data on the bus. There are two models: synchronous buses, which use a high-speed clock but serve only high-speed devices; and asynchronous buses, which use a handshake, run at lower speed, and can be used by a wide variety of devices.
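The DMA handshake above can be modeled in a few lines: the "CPU" programs the transfer with a source, a destination, and a length, and then receives a single completion interrupt for the whole block rather than one per byte. All the names and buffer contents here are illustrative.

```python
def dma_transfer(src, src_start, dst, dst_start, length, on_complete):
    """Copy `length` items from src into dst in one block, without the
    'CPU' touching each item; fire one completion interrupt at the end."""
    dst[dst_start:dst_start + length] = src[src_start:src_start + length]
    on_complete()  # one interrupt for the whole block

memory = [0] * 8                      # stand-in for main memory
device_buffer = [10, 11, 12, 13]      # stand-in for a controller's buffer
events = []
dma_transfer(device_buffer, 0, memory, 2, 4,
             on_complete=lambda: events.append("transfer done"))
print(memory)  # [0, 0, 10, 11, 12, 13, 0, 0]
print(events)  # ['transfer done']
```

The single `on_complete` callback is the point of the design: programmed I/O would have cost an interrupt (or a busy poll) per word.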


Storage Structure

The important thing to remember is that a program is part of the data.

Register

Registers are volatile storage for the few pieces of data being processed directly by the very high-speed processor. They sit inside the processor in very limited numbers, because their function is data calculation/computation.

Cache Memory

Cache is small, temporary (volatile) storage used to increase the speed of retrieval or storage of data in memory by a high-speed processor. Formerly the cache was located outside the processor and could be expanded, for example the pipeline burst cache commonly found in computers of the early 90s. However, as die and wafer production costs dropped and to improve performance, the cache is now embedded in the processor. This memory is usually based on a static memory design.

Random Access Memory (RAM) - Main Memory

RAM is volatile storage for data that can be accessed directly by the processor. 'Directly' here means that the processor can find the address of the data in memory directly. Nowadays RAM can be obtained quite cheaply, with performance that can even surpass the cache of an older computer.

Extended Memory

Additional memory is used to assist processes in the computer, usually as a buffer. Its role is often overlooked but is very important for efficiency. The amount of additional memory usually gives a rough idea of a device's capability, for example the amount of VGA memory or soundcard memory.

Secondary Storage

Secondary storage is non-volatile storage in the form of flash drives, optical discs, magnetic disks, or magnetic tape. Its capacity is usually quite large at a relatively cheap price, and its portability is also relatively high.







 
Disk Structure

 



Optical Drive Structure

 
Storage Hierarchy

The basic criteria for arranging storage systems are speed, cost, and volatility. Caching copies information into a faster storage medium; main memory can be viewed as the last cache for secondary storage. Using high-speed memory to hold the most recently accessed data requires a cache management policy. The cache also introduces another level in the storage hierarchy, which requires data stored at more than one level to be kept consistent.
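The caching idea in the hierarchy above can be sketched with two dicts standing in for a fast level and a slow level (the names `cache`, `main_memory`, and the sample addresses are invented): look in the fast level first, fall back to the slow level on a miss, and copy the value up so the next access is a hit.

```python
cache = {}                      # fast, small
main_memory = {"x": 1, "y": 2}  # slow, large
stats = {"hits": 0, "misses": 0}

def read(addr):
    if addr in cache:           # fast path: already at the upper level
        stats["hits"] += 1
        return cache[addr]
    stats["misses"] += 1
    value = main_memory[addr]   # slow fetch from the lower level
    cache[addr] = value         # keep a copy at the faster level
    return value

read("x"); read("x"); read("y")
print(stats)  # {'hits': 1, 'misses': 2}
```

Note that `cache["x"]` and `main_memory["x"]` now both hold the value; a write to one of them would have to be propagated to the other, which is exactly the consistency problem the paragraph above mentions.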





Process States
As just discussed, for a program to be executed, a process, or task, is created for that program. From the processor's point of view, it executes instructions from its repertoire in some sequence dictated by the changing values in the program counter register. Over time, the program counter may refer to code in different programs that are part of different processes. From the standpoint of an individual program, execution involves a sequence of instructions within that program. We can characterize the behavior of an individual process by listing the sequence of instructions that execute for it; such a listing is called a trace of the process. We can characterize the behavior of the processor by showing how the traces of the various processes are interleaved. Assume the OS allows a process to continue execution for a maximum of six instruction cycles, after which it is interrupted; this prevents any single process from monopolizing processor time. The first six instructions of one process are executed, followed by a time-out and the execution of some dispatcher code, which hands the processor to the next process for its six instructions.

For process states there are two models: the two-state process model and the five-state process model.

Two-State Process Model

A process can be in one of two states:
- Running: the program's instructions are being carried out and executed.
- Not-running: the program exists but is not executing; it is being prepared to be executed.
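The interleaved traces described above, with the six-instruction limit, can be sketched as a tiny simulation. The process names and instruction lists are invented; the point is only how the time-out slices each trace.

```python
QUANTUM = 6  # maximum instructions before a time-out

def interleave(traces):
    """traces: dict name -> list of instructions. Returns (name, instr)
    pairs in the order the instructions reach the processor."""
    pending = {name: list(t) for name, t in traces.items()}
    executed = []
    while any(pending.values()):
        for name in pending:
            burst = pending[name][:QUANTUM]          # run up to six instructions
            pending[name] = pending[name][QUANTUM:]  # time-out: preempt the rest
            executed.extend((name, instr) for instr in burst)
    return executed

order = interleave({"A": ["a"] * 8, "B": ["b"] * 4})
# A runs its first six instructions, B runs all four of its own,
# then A finishes its remaining two.
print([name for name, _ in order])
```

Reading off only the process names shows exactly the trace interleaving the text describes: six of A, four of B, then the tail of A.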
  
Five-State Process Model

The five states in this model are:
• New: the process is being created.
• Running: instructions are being executed.
• Waiting: the process is waiting for some event to occur (such as the completion of I/O or the receipt of a signal).
• Ready: the process is waiting to be assigned to a processor.
• Terminated: the process has finished carrying out its tasks/execution.
The state diagram describing these states is shown in the figure below:
 
The image above shows the possible transitions of a process, including the following:
• Null - New: a new process is created to run a program, for example when some event occurs.
• New - Ready: the process moves to Ready when it is ready for execution. Most systems set some limit on the number of existing processes or the amount of virtual memory committed to processes.
• Ready - Running: when it is time to run a process, the OS selects one of the processes in the Ready state. This is the job of the scheduler or dispatcher.
• Running - Exit: the running process is terminated by the OS when it signals that it has completed, or when it aborts.
• Running - Ready: the most common reason for this transition is that the running process has reached the maximum time allowed for uninterrupted execution; almost all multiprogramming operating systems impose a time discipline of this kind.
• Running - Blocked: a process becomes blocked when it requests something it must wait for. Such a request to the OS usually takes the form of a system service call, that is, a call from the running program to a procedure that is part of the operating system code.
• Blocked - Ready: a blocked process moves to Ready when the event it has been waiting for occurs.
• Ready - Exit: for clarity, this transition is not shown in the diagram. In some systems a parent may terminate a process at any time; if the parent terminates, all processes associated with that parent are terminated as well.
• Blocked - Exit: the comments on the previous item apply here as well.
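The transition rules listed above can be enforced with a small state machine. This is a sketch, not any particular OS's implementation: a set of allowed (from, to) pairs mirrors the arrows in the diagram, and every state change is checked against it.

```python
# Allowed transitions, mirroring the arrows in the five-state diagram.
ALLOWED = {
    ("new", "ready"),
    ("ready", "running"),       # dispatched by the scheduler
    ("running", "ready"),       # time-out / preempted
    ("running", "waiting"),     # blocked on an event or I/O
    ("waiting", "ready"),       # the awaited event occurred
    ("running", "terminated"),  # completed or aborted
}

class Process:
    def __init__(self):
        self.state = "new"

    def move(self, new_state):
        if (self.state, new_state) not in ALLOWED:
            raise ValueError(f"illegal transition {self.state} -> {new_state}")
        self.state = new_state

p = Process()
for s in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    p.move(s)
print(p.state)  # terminated
```

Trying `Process().move("running")` raises an error, since a New process must pass through Ready first, exactly as the diagram requires.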


Preemptive and Non-Preemptive Scheduling; the Critical Section Problem and Its Solutions
 
CPU scheduling may take place when a process:
1. Changes from the running to the waiting state.
2. Changes from the running to the ready state.
3. Changes from the waiting to the ready state.
4. Terminates.
Preemptive scheduling means the operating system has the ability to suspend a running process to make room for a higher-priority process. This scheduling may involve process scheduling or I/O scheduling. Preemptive scheduling lets the system better ensure that each process gets its time slice, and also makes the system respond more quickly to external events (such as incoming data) that require a fast reaction from one or several processes. A system using preemptive scheduling therefore has the advantage of being more responsive than one using non-preemptive scheduling.
Processes can be grouped into two categories: processes with very long I/O bursts, called I/O bound, and processes with very long CPU bursts, called CPU bound. A system may also at times be in a condition called busy waiting, that is, waiting for an input request (from disk, keyboard, or network). During busy waiting, the process does nothing productive but still consumes CPU resources. With preemptive scheduling, this can be avoided.
In other words, preemptive scheduling involves an interrupt mechanism that suspends the running process and forces the system to determine which process should be executed next.
Scheduling under conditions 1 and 4 is non-preemptive; the others are preemptive. The scheduling used by today's operating systems is typically preemptive. Some operating system kernels, e.g. Linux 2.6, even allow system calls themselves to be preempted (a preemptible kernel).
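The preemptive hand-off just described can be sketched as a tiny round-robin simulation: each process runs for at most one fixed slice before being forced back to the end of the ready queue. The process names and burst lengths are invented for the example.

```python
from collections import deque

def round_robin(bursts, quantum):
    """bursts: dict name -> remaining CPU time needed.
    Returns the order in which processes finish."""
    ready = deque(bursts.items())
    finished = []
    while ready:
        name, remaining = ready.popleft()
        remaining -= min(quantum, remaining)   # run for one time slice
        if remaining == 0:
            finished.append(name)              # process terminated
        else:
            ready.append((name, remaining))    # preempted: back of the queue
    return finished

print(round_robin({"P1": 5, "P2": 2, "P3": 4}, quantum=2))
# ['P2', 'P3', 'P1'] -- the short job finishes first; P1 is preempted twice
```

Shrinking the quantum makes the system more responsive but increases the number of preemptions, which is the time-slice trade-off discussed in the text.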
Windows 95, Windows XP, Linux, Unix, AmigaOS, Mac OS X, and Windows NT are some examples of operating systems that implement preemptive scheduling.
The length of time a process is allowed to execute under preemptive scheduling is called the time slice, or quantum. The scheduler runs once every time slice to choose which process will run next. When the time slice is too short, the scheduler consumes too much processing time; when it is too long, processes may be unable to respond to external events as quickly as expected.
Non-Preemptive Scheduling
Non-preemptive scheduling is a type of scheduling in which the operating system never context-switches away from a running process to another process. In other words, the running process cannot be interrupted.
Non-preemptive scheduling only takes place when a process:
1. Changes from the running state to the waiting state.
2. Terminates.
This means the CPU keeps the process until it moves to a waiting state or stops (the process is not disturbed). This method was used by Microsoft Windows 3.1 and the classic Macintosh. It can be used on particular hardware platforms because it does not require the special hardware (e.g. the timer) needed by preemptive scheduling.
Critical Section Solutions in Software
These solutions use algorithms whose correctness does not depend on any assumption other than that each process runs at a non-zero speed. The algorithms follow:
Algorithm I
Algorithm I tries to solve the critical section problem for two processes. The algorithm makes the two processes that want to execute their critical sections take turns, so the two processes alternate in using the critical section.
Figure 1 Algorithm I



This algorithm uses a variable called turn; turn determines which process is allowed to enter the critical section and access the shared data. Initially turn is set to 0, meaning P0 may access the critical section. If turn = 0 and P0 wants to use the critical section, it may do so. When it finishes executing its critical section, P0 sets turn to 1, meaning it is P1's turn and P1 is now allowed to access the critical section. When turn = 1 and P0 wants to use the critical section, P0 must wait until P1 finishes using the critical section and changes turn back to 0.
While a process waits, it stays in a loop in which it must constantly check the turn variable until its own turn arrives. This is called busy waiting. Busy waiting should normally be avoided because it consumes CPU time, but in this case it is acceptable because the wait usually lasts only a short time.
A problem arises in this algorithm when it is one process's turn to enter the critical section but that process does not use its turn, while the other process wants access. Suppose turn = 1 and P1 does not use its turn; then turn remains 1. If P0 then wants to use the critical section, it must wait until P1 uses the critical section and changes turn to 0. This violates the progress requirement: P0 cannot enter the critical section even though no one is using it, and it must wait for P1 to finish its non-critical section work before it can re-enter. The bounded waiting requirement is also violated: if it is P1's turn but P1 has finished executing all its code and terminates, there is no guarantee P0 can ever access the critical section, and P0 may wait forever.
Algorithm II
Algorithm II also tries to solve the critical section problem for two processes.
This algorithm anticipates the problem that arose in Algorithm I by replacing the turn variable with a flag variable. The flag variable stores the condition of each process: whether it wants to enter the critical section. A process that needs access to the critical section sets its flag to true, while a process that does not need the critical section leaves its flag set to false.
Figure 2 Algorithm II
 
A process is allowed to access the critical section if the other process does not need it, that is, if the other process's flag is false. But if the other process requires the critical section (indicated by its flag being true), then the process must wait and 'let' the other process use the critical section. Here we see that before entering the critical section, a process first looks at the other process (through its flag) to learn whether the other process needs the critical section or not.
Initially the flags of both processes are initialized to false, meaning neither process needs the critical section. If P0 wants to access the critical section, it changes flag[0] to true. P0 then checks whether P1 also needs the critical section: if flag[1] is false, P0 uses the critical section; but if flag[1] is true, P0 must wait for P1 to finish using the critical section and change flag[1] back to false.
A problem arises in this algorithm when both processes want the critical section at the same time: both set their flags to true. P0 sets flag[0] = true and P1 sets flag[1] = true. P0 then checks whether P1 requires the critical section; it sees that flag[1] = true, so it waits for P1 to finish. But at the same time, P1 also checks whether P0 requires the critical section; it sees that flag[0] = true, so P1 likewise waits for P0 to finish. The two processes thus wait for each other, mutually 'inviting' the other to access the critical section, and as a result neither one accesses it at all. This shows that Algorithm II does not satisfy the progress and bounded waiting requirements, since this condition persists and both processes must wait forever to access the critical section.
Algorithm III
Algorithm III was devised by G.L.
Peterson in 1981 and is also known as Peterson's algorithm. Peterson found a simple way to coordinate the processes so as to satisfy mutual exclusion. This algorithm is a solution to the critical section problem for two processes. The idea is to combine the shared variables of Algorithm I and Algorithm II, namely the turn variable and the flag variable. As in Algorithms I and II, the turn variable indicates whose turn it is to enter the critical section and the flag variable indicates whether a process needs access to the critical section or not.
Figure 3 Algorithm III
 
Initially the flags of both processes are initialized to false, meaning neither process needs access to the critical section. When a process wants to enter the critical section, it changes its flag to true (to signal that it needs the critical section) and then gives the turn to its opponent. If the opponent does not want the critical section (its flag is false), the process may use the critical section, and when finished it changes its flag back to false. But if the opponent also wants the critical section, the opponent enters the critical section first, and the process must wait until the opponent completes its critical section and changes its flag to false.
Suppose P0 needs the critical section: P0 changes flag[0] to true and then sets turn = 1. If P1's flag[1] = false, then (regardless of the turn) P0 can access the critical section. However, if P1 also needs the critical section, then because flag[1] = true and turn = 1, P1 enters the critical section first, and P0 must wait until P1 completes its critical section and changes flag[1] to false, after which P0 can access the critical section.
What if both processes need the critical section at the same time? Which process accesses it first? If both processes (P0 and P1) arrive together, both set their flags to true (flag[0] = true and flag[1] = true); in this situation P0 may set turn = 1 while P1 may set turn = 0. The process that accesses the critical section first is the one that first passed the turn to its opponent. Suppose P0 sets turn = 1 first, and P1 then changes turn to 0; since the final value of turn is 0, P0 is the one that accesses the critical section first and P1 must wait.
Algorithm III satisfies all three required conditions.
The progress and bounded waiting requirements, which were not met by Algorithms I and II, are met by this algorithm: when some process wants to access the critical section and no one is using it, a process is guaranteed to get to use it, and no process has to wait forever to enter.
Bakery Algorithm
The bakery algorithm is a solution to the critical section problem for n processes. It is also known as Lamport's bakery algorithm. The idea is to use a scheduling principle like the one used at a bakery counter: customers who want to buy bread must first take a queue number, and the order in which they are served is determined by their numbers.
This algorithm determines which process may access the critical section in the same way as the bakery illustration above. The processes are like the customers, n in number, and every process that needs the critical section takes a number that determines which process is allowed to enter the critical section. The numbers handed out are sequential, or ordered, but just as with queue numbers at the bakery, there is no guarantee that every process gets a different number. To resolve this, another parameter is used, namely the process ID. Since each process has a unique, ordered process ID, it can be guaranteed that only one process accesses the critical section at a time. The process that accesses the critical section first is the one holding the smallest number; if several processes hold the same number, the process with the smallest ID is served first.
Conclusion:
A critical section solution must satisfy the following three conditions:
1. Mutual Exclusion
2. Progress
3. Bounded Waiting
Algorithms I and II proved unable to solve the critical section problem for two processes because they do not satisfy progress and bounded waiting.
The algorithm that solves the critical section problem for two processes is Algorithm III, while the critical section problem for n processes can be solved using the bakery algorithm.
Critical Section Solutions in Hardware
These rely on certain machine-specific instructions, for example disabling interrupts or atomically locking a particular variable.
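Algorithm III (Peterson's algorithm) as described above can be run as a sketch with two Python threads. The iteration count and variable names are chosen for illustration only, and CPython's interpreter lock already serializes bytecode, so this demonstrates the flag/turn protocol rather than something real code should rely on; production code uses hardware primitives or library locks.

```python
import threading

flag = [False, False]   # flag[i]: process i wants the critical section
turn = 0                # whose turn it is to yield
counter = 0             # shared data protected by the protocol

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(100):
        flag[i] = True                        # announce intent
        turn = other                          # give the opponent priority
        while flag[other] and turn == other:
            pass                              # busy wait
        counter += 1                          # critical section
        flag[i] = False                       # leave the critical section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 200 -- every increment happened inside the protected section
```

Note how the code combines both earlier ideas exactly as the text says: the flag array from Algorithm II announces intent, and the turn variable from Algorithm I breaks the tie when both threads want in at once.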