RU/2: Forum. Discussion among OS/2 (eCS) users and developers. : Multithreading (V)


From : ???
To : All
Subj : Multithreading (V)

Scheduling on SMP

Scheduling threads on Symmetric Multi-Processing (SMP) machines is conceptually not
much different from scheduling on uniprocessor machines. There are many extra things
the operating system must account for, such as what happens if a processor goes down,
or if an application or device driver disables interrupts (should interrupts be disabled
for the whole system or only for that processor?), among other things. For the user,
however, not much is different.

On uniprocessor machines, you know that no more than one thread is physically executing
at any moment in time. You should not, but many developers do, make assumptions
based on this fact. Most such assumptions are that certain operations are atomic or that
some are executed in a specific order. In an SMP environment, multiple instructions may
be executing simultaneously, invalidating those assumptions.

By writing your code assuming nothing about the order of execution of functions or
timing, you can be assured of being "SMP-safe".

The SMP scheduler schedules tasks in a similar fashion to the uniprocessor scheduler,
except that it does this n times, where n is the number of processors in the system.
This scheduling is dynamic, with standard priority modifications as on a uniprocessor
system. Rather than the highest-priority thread being the one that is running, however,
the n highest-priority threads that are ready to run will be actively executing, one on
each processor.

The finer details of how this works are not all that relevant to application writers;
they matter more to device driver writers. Suffice it to say that the scalability you
might see on other SMP-enabled operating systems is there on OS/2 as well.

That scalability is based purely on how well threaded your application is. If you have only
one thread, users of your application will not experience the same benefits on an SMP
machine as if your application has several threads. Don't get the idea of adding lots
of threads just to run better on SMP machines; rather, you should thread appropriately
based on the function and the application, knowing that the more efficiently you do it,
the better you will perform on SMP as well as uniprocessor boxes.

Application threading concepts and structures offer a wide field of examples.


Thread Management - A first view
********************************

Threads are managed through various control and synchronization structures. The
management of the threads and the use of these control structures is completely up to
the designer and programmer. Of course, there is the control built into the OS/2 system
services, but most of the control of the threads, the amount of parallel activity within
an application, and how that activity is managed is up to the developers.

The basic control structures are semaphores, queues, and pipes. Actually, the only
control structure in that list is the semaphore. There is another structure, although
not used often, called a CRITICAL SECTION.

A SEMAPHORE is a structure that, when used properly, ensures that only one thread
in a process can manipulate a resource at a time. A semaphore is like a flag that only
one thread can have captured at any moment. If another thread requests the flag while
the first one still has it (and thus has access to the resource), the second thread will be
blocked until the first completes what it is doing and clears the semaphore.

You could implement this simply by using a software flag, and indeed, that is one type
of semaphore available to you. However, by letting OS/2 manage the semaphore control,
you ensure consistency within all your code and between your code and the other code
in the system. It is important that you use semaphores for any data that can be
accessed by multiple threads, whether it resides on disk or in memory.

OS/2 also provides several flavors of semaphores. Each type of semaphore serves a
specific purpose. For example, aside from the standard synchronization semaphore, there
is the MUTEX (mutual exclusion) semaphore that is used to control access to a shared
data structure. A way to control access to a structure would be to put all access to the
structure in one function that requests a MUTEX semaphore on entry and releases on
exit. This way, if one thread is accessing the structure, others will be made to wait on
the semaphore.

Another type of semaphore is the EVENT semaphore. Whereas the MUTEX semaphore
controls access to structures, the EVENT semaphore is used to manage order of execution
among threads. The EVENT semaphore signals an event to all waiting threads when
posted. EVENT semaphores could actually be used as MUTEX semaphores as well, given
the proper coding around the structure, but OS/2 provides both.

Yet another type of semaphore is the MUXWAIT semaphore. This is a semaphore that
allows a thread to wait on a list of events, not just one at a time. The OS/2 semaphore
mechanism gives programmers the flexibility to manage threads in a variety of ways and
synchronize them for almost any situation.


Thread Priorities and I/O - A deeper view
*****************************************

Threads in the regular class have their priorities modified by the system (see: Multi-
threading (IV)). Threads in any of the other classes (time-critical, fixed-high, and idle),
however, keep the priority they are given. When any thread is first created, it lives in
the regular class. Once DosSetPriority is called to move a thread to one of the other
three classes, the system will not touch the thread's priority unless another call to
DosSetPriority subsequently sets it back to the regular class.

Here is where you want to work with the priorities. Just as with other parts of the
application, there is no real formula for determining what the priorities should be. There
are some general guidelines, however:

(1.) The first thread to look at is your main user interface thread, the one that lives in
the WinGetMsg loop and dispatches messages to the window procedure. This thread
should remain in the regular class.

(2.) If the application is in the foreground, the system will boost the priority of this thread
enough to remain very responsive to the user. Don't make it higher than regular; other-
wise, when this application is put in the background, any messages posted to it will get
higher priority than even the foreground threads of other applications. Obviously, you
don't want to move it below the regular class either.

The next threads we want to look at are the I/O threads. Usually these threads volun-
tarily block, for reasons that come from the way device drivers work. When a thread
requests some I/O, a series of calls is made, ending up at the device driver. When the
request is passed from the device driver to the device, the device driver blocks the
thread until the physical I/O is complete.

The thread is then made ready when the interrupt comes back from the device. Because
these threads block often and I/O is usually a slow operation, you will likely want
to set the priority of your I/O-bound threads higher. You will not use up CPU time
needlessly, because these threads spend more time blocked than most others.

If you leave I/O-bound threads in the regular class you will have to rely on the system
to bump the priority on an I/O boost, which may or may not be higher than other threads
running. By setting the thread someplace low within the fixed-high class, you keep the
I/O threads ready to run whenever they come out of being blocked, but not so high that
they will preempt more time-critical operations.

The next type of thread is the CPU-bound thread. This is one that has no, or very few,
dependencies on an I/O device. Such a thread might be one that recalculates spreadsheets
or empties buffers. These threads do not usually block until their task is finished and they
wait for another. If you bump this thread above the regular class it will always run to the
end of its time slice unless another even higher-priority thread comes along. You should
leave these threads in the regular class. If you do find a reason to move them higher,
keep them low within the fixed-high class.

The higher priorities such as those in the time-critical class should be reserved for
communications threads and others for which data will be lost if they are starved when
ready. If you move your recalc threads into the time-critical class just to improve perfor-
mance you will hurt other applications in the system. Communications threads, however,
should be in the time-critical class. If this type of thread does not get serviced when it
is in the ready state, the communications line to whatever is on the other end could be
dropped. Applications of these threads include standard modem communications packages,
data acquisition equipment, or even pipes between processes.

When deciding on priorities for your threads, keep in mind not only how each thread will
interact with other processes in the system but also how it will interact with the
other threads in your application. For example, consider one thread that reads or writes
files and another that fills and empties buffers for the I/O. If both threads are of the
same priority (outside the regular class; otherwise the system would modify the priorities),
you can end up with a situation in which the CPU-bound thread starves the I/O-bound
thread.

If the buffers are full when the CPU thread comes in, it will run to the end of its time
slice unless the buffers become empty first. If this thread is of the same priority as the
I/O thread, the CPU thread will be switched in every time the I/O thread goes to read
the disk (which is all it does). The I/O thread, coming back from a device driver I/O
request, then has to wait for the CPU thread to complete before it can do its work. Of
course, if the buffers empty before the CPU thread is done with its time slice, it will
need to block (you arrange this through your thread synchronization).

In this state you will not be using both threads at peak efficiency. The I/O thread will
send off a read request and block. The CPU thread will be dispatched and will empty
whatever is in the buffers until either the buffers are empty or the I/O thread is ready
to run at the end of the CPU thread's time slice. Now the buffers get filled with the full
contents of the read buffer, and another I/O request is sent (unless, of course, the I/O
thread's time slice ends, in which case the CPU thread comes back in). Now, when the
I/O thread blocks again, the CPU thread empties the buffers, which will usually not
be full. You can see how this thrashing can hurt performance.

It is essential to look carefully at the scenarios and interactions between your threads
to be sure you neither starve your own application nor give yourself a higher priority
than threads that truly need it.

One other note about threads:

When using idle-class threads, it is important to set the priority back to the regular class
before calling the OS/2 API DosExit. The reason is that if you call DosExit from a thread
that is still in the idle class, the exit processing itself runs at that idle priority and can
be starved by busier threads.

This can prevent the application from terminating. Therefore, before calling DosExit in
any idle-class thread, change that thread's priority to the regular class.










Fri 26 Dec 2003 17:30 Mozilla/4.61 [en] (OS/2; U)




Programmed by Dmitri Maximovich, Dmitry I. Platonoff, Eugen Kuleshov.
25.09.99 (c) 1999, RU/2. All rights reserved.
Rewritten by Dmitry Ban. All rights ignored.