Multi Threading

Presentation Description: All the info required for process multi-threading.

Presentation Transcript

What is Multi-threading?
The ability of an operating system to execute different parts of a program, called threads, simultaneously. The programmer must carefully design the program so that all the threads can run at the same time without interfering with each other.
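The slides name no particular language; as an illustration only, here is a minimal sketch in Python, whose threading module wraps the operating system's threads. Four threads execute the same function concurrently within one process:

```python
import threading

def worker(name, results):
    # Each thread executes this function independently.
    results.append(f"hello from {name}")

results = []
threads = [threading.Thread(target=worker, args=(f"thread-{i}", results))
           for i in range(4)]
for t in threads:
    t.start()          # begin concurrent execution
for t in threads:
    t.join()           # wait for every thread to finish

print(sorted(results))
```

The `join` calls illustrate the "without interfering" requirement in its simplest form: the main thread does not read `results` until every worker is done.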
Threads compared with processes
Processes carry considerable state information, whereas multiple threads within a process share state as well as memory and other resources.
Processes have separate address spaces, whereas threads share their address space.
Context switching between threads in the same process is typically faster than context switching between processes.
Processes are typically independent, while threads exist as subsets of a process.

Advantages of Multi-threading
If a thread cannot use all the computing resources of the CPU (because its instructions depend on each other's results), running another thread keeps those resources from sitting idle.
If several threads work on the same set of data, they can share its caching, leading to better cache usage or synchronization on its values.
If one thread incurs many cache misses, the other thread(s) can continue, taking advantage of the otherwise unused computing resources; this can lead to faster overall execution, as those resources would have been idle if only a single thread were executing.

Disadvantages of Multi-threading
Hardware support for multithreading is more visible to software, thus requiring more changes to both application programs and operating systems than multiprocessing.
Execution time of a single thread is not improved and can even be degraded, even when only one thread is executing. This is due to slower frequencies.
Multiple threads can interfere with each other when sharing hardware resources such as caches or buffers.

Two levels of threads
User level (for user threads)
Kernel level (for kernel threads)

Slide 7: User Threads
User threads are supported above the kernel and are implemented by a thread library at the user level.
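The shared-address-space point above can be made concrete: multiple threads can update the same variable directly, which separate processes could not do without explicit shared-memory machinery. A sketch in Python (illustrative, not from the slides):

```python
import threading

counter = 0                     # one variable in the shared address space
lock = threading.Lock()

def bump(n):
    global counter
    for _ in range(n):
        with lock:              # synchronize access so updates don't interfere
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)                  # all four threads incremented the same memory
```

Without the lock, the unsynchronized read-modify-write of `counter` would be exactly the kind of interference the first slide warns about.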
The library provides support for thread creation, scheduling, and management with no support from the kernel. Because the kernel is unaware of user-level threads, all thread creation and scheduling are done in user space without kernel intervention. User-level threads are generally fast to create and manage. User-thread libraries include POSIX Pthreads, Mach C-threads, and Solaris 2 UI-threads.

Slide 8: Kernel Threads
Kernel threads are supported directly by the operating system: the kernel performs thread creation, scheduling, and management in kernel space. Because thread management is done by the operating system, kernel threads are generally slower to create and manage than user threads. Most operating systems, including Windows NT, Windows 2000, Solaris 2, BeOS, and Tru64 UNIX (formerly Digital UNIX), support kernel threads.

Multi-threading Models
There are three models for thread libraries, each with its own trade-offs:
many threads on one LWP (many-to-one)
one thread per LWP (one-to-one)
many threads on many LWPs (many-to-many)

Many-to-one
The many-to-one model maps many user-level threads to one kernel thread.
Advantages: totally portable; more efficient.
Disadvantages: cannot take advantage of parallelism; the entire process blocks if one thread makes a blocking system call.
Mainly used in language systems and portable libraries, such as on Solaris 2.

One-to-one
The one-to-one model maps each user thread to a kernel thread.
Advantages: allows parallelism; provides more concurrency.
Disadvantages: each user thread requires a corresponding kernel thread, limiting the total number of threads.
Used in LinuxThreads and other systems such as Windows NT and Windows 2000.

Slide 12: Many-to-many
The many-to-many model multiplexes many user-level threads onto a smaller or equal number of kernel threads.
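The one-to-one model can be observed from a thread library that uses it. CPython's threading module is one-to-one: each Thread object is backed by its own kernel thread, which can be seen by comparing OS-level thread ids (this sketch assumes Python 3.8+ for `threading.get_native_id`):

```python
import threading

native_ids = []
ids_lock = threading.Lock()
barrier = threading.Barrier(4)   # keep all four threads alive simultaneously

def record_id():
    # get_native_id() returns the kernel-assigned id of the current thread.
    with ids_lock:
        native_ids.append(threading.get_native_id())
    barrier.wait()               # don't exit until every thread has recorded

threads = [threading.Thread(target=record_id) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Four user threads map to four distinct kernel threads: one-to-one.
print(len(set(native_ids)))
```

The barrier matters: it guarantees all four threads coexist, so the operating system cannot reuse a kernel thread id for two of them.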
Advantages: can create as many user threads as necessary; allows parallelism.
Disadvantages: the extra kernel threads can burden performance.
Used in the Solaris implementation of Pthreads (and several other Unix implementations).

Threading Issues
The fork and exec system calls
Cancellation
Signal handling
Thread pools
Thread-specific data

The fork and exec System Calls
In a multithreaded program, the semantics of the fork and exec system calls change. Some UNIX systems have two versions of fork: one duplicates all threads; the other duplicates only the thread that invoked the fork system call. The exec system call typically works the same way in either case: the program specified in the parameter to exec replaces the entire process, including all threads and LWPs. Which version of fork to use depends on the application. If exec is called immediately after forking, duplicating all threads is unnecessary. If the separate process does not call exec after forking, it should duplicate all threads.

Cancellation
Thread cancellation is the task of terminating a thread before it has completed. A thread that is to be cancelled is often referred to as the target thread. Cancellation of a target thread may occur in two different scenarios:
1. Asynchronous cancellation: one thread immediately terminates the target thread. Most operating systems allow a process or thread to be cancelled asynchronously.
2. Deferred cancellation: the target thread periodically checks whether it should terminate, giving it an opportunity to terminate itself in an orderly fashion. An operating system implementing the Pthread API will allow deferred cancellation.

Cancellation (continued)
Disadvantage of asynchronous cancellation: a thread may be cancelled while in the middle of updating data it is sharing with other threads, leaving that data in an inconsistent state.
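Deferred cancellation is commonly implemented as a shared flag that the target thread checks at safe points. A minimal sketch in Python using `threading.Event` as the flag (the Pthreads API instead uses `pthread_cancel` with cancellation points; this is the same idea expressed portably):

```python
import threading
import time

stop_requested = threading.Event()

def worker(progress):
    # Each loop iteration is a "cancellation point": a place where the
    # thread checks whether it has been asked to terminate.
    while not stop_requested.is_set():
        progress.append(1)        # one simulated unit of work
        time.sleep(0.001)
    # Reaching here means the thread exits cleanly, with shared data
    # in a consistent state -- the advantage over asynchronous cancellation.

progress = []
t = threading.Thread(target=worker, args=(progress,))
t.start()
time.sleep(0.05)
stop_requested.set()              # request cancellation...
t.join()                          # ...and wait for the orderly exit
print(t.is_alive())
```

Because the flag is only honoured between units of work, the thread is never killed mid-update, which is precisely the hazard the asynchronous variant described above cannot avoid.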
The operating system will often reclaim system resources from a cancelled thread, but it will not necessarily reclaim all of them.
Advantage of deferred cancellation: cancellation occurs only when the target thread checks whether it should be cancelled, so the thread can arrange to perform that check at points where it can be cancelled safely. Pthreads refers to such points as cancellation points.

Signal Handling
A signal may be received either synchronously or asynchronously, depending on the source of and the reason for the event being signalled. Whether a signal is synchronous or asynchronous, all signals follow the same pattern:
1. A signal is generated by the occurrence of a particular event.
2. The generated signal is delivered to a process.
3. Once delivered, the signal must be handled.
Every signal may be handled by one of two possible handlers:
1. A default signal handler
2. A user-defined signal handler

Signal Handling (continued)
Every signal has a default signal handler that is run by the kernel when handling the signal. This default action may be overridden by a user-defined signal handler function. Handling signals in single-threaded programs is straightforward: signals are always delivered to the process. Delivering signals is more complicated in multithreaded programs, as a process may have several threads. In general, the following options exist:
1. Deliver the signal to the thread to which the signal applies.
2. Deliver the signal to every thread in the process.
3. Deliver the signal to certain threads in the process.
4. Assign a specific thread to receive all signals for the process.
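Option 4 above can be seen in practice: Python always runs Python-level signal handlers in the main thread, no matter where the signal arrives. A small sketch overriding the default handler with a user-defined one (using SIGINT because it is available on all platforms):

```python
import signal
import threading

ran_in_main = []

def handler(signum, frame):
    # Record whether the user-defined handler executed in the main thread.
    ran_in_main.append(threading.current_thread() is threading.main_thread())

previous = signal.signal(signal.SIGINT, handler)  # override the default handler
signal.raise_signal(signal.SIGINT)                # step 1: generate the signal
# Steps 2 and 3 happen here: the signal is delivered to the process and
# handled by our user-defined handler before the next statement runs.
signal.signal(signal.SIGINT, previous)            # restore the previous handler

print(ran_in_main)
```

Designating one thread as the signal recipient sidesteps the ambiguity of options 1 through 3, at the cost that long-running work in that thread delays handling.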
Slide 19: Thread Pools
The general idea behind a thread pool is to create a number of threads at process startup and place them into a pool, where they sit and wait for work. When a server receives a request, it awakens a thread from this pool, if one is available, and passes it the request to service. Once the thread completes its service, it returns to the pool and awaits more work. If the pool contains no available thread, the server waits until one becomes free. The benefits of thread pools are:
1. It is usually faster to service a request with an existing thread than to wait for a new thread to be created.
2. A thread pool limits the number of threads that exist at any one time. This is particularly important on systems that cannot support a large number of concurrent threads.

Slide 20: Thread-Specific Data
Threads belonging to a process share the data of the process; indeed, this sharing of data provides one of the benefits of multithreaded programming. However, each thread might need its own copy of certain data in some circumstances. We will call such data thread-specific data. Most thread libraries, including Win32 and Pthreads, provide some form of support for thread-specific data. Java provides support as well.
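The pool idea described above can be sketched with Python's concurrent.futures; the function name `handle_request` and the request count are illustrative, not from the slides. Three pre-created worker threads service ten requests, so threads are reused and their number is bounded:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id):
    # Simulated service routine executed by whichever pooled worker
    # thread picks this request up.
    return f"served {request_id}"

# max_workers caps concurrency (benefit 2); reusing idle workers avoids
# per-request thread creation cost (benefit 1).
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(handle_request, range(10)))

print(results[0], results[-1])
```

When all three workers are busy, additional requests queue inside the executor until a worker becomes free, matching the "server waits until one becomes free" behaviour described above.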