RU/2: Forum. Discussion for OS/2 (eCS) users and developers. : Multithreading (XVI)


From : ???
To : All
Subj : Multithreading (XVI)

Threads in shared memory with semaphores

OS/2 is a multitasking operating system. The existence of multiple processes and
asynchronous concurrent threads implies the need for mechanisms that allow processes
to exchange data and to synchronize the execution of their threads.

Interprocess communication (IPC) primitives provide these basic features for data sharing
and thread synchronization. The IPC facilities of the OS/2 system are organized into a
tiered hierarchy based on the complexity of the IPC mechanism. The simplest IPC
mechanisms are shared memory, semaphores, signals, and exceptions.

These constructs are classified as simple constructs, since the processes that use them
must communicate with one another explicitly. More abstract IPC mechanisms higher in the
hierarchy are built out of the low-level mechanisms.

Queues and named pipes are examples of higher-level abstractions that allow processes
to exchange data and to synchronize their execution. However, the usage of low-level
IPC constructs, such as shared memory and semaphores, is masked from the users of
queues and named pipes.

The highest-level abstraction of IPC mechanisms is the API call. Since each API function
defines an abstraction and a level of information hiding, these functions hide the usage
of any necessary IPC mechanisms from requestors. The API abstraction is often used by
application programs that build their own API functions into dynamic link libraries that
are tailored to their specific needs.

Clients of the API are not sensitive to the underlying IPC usage of the dynamic link
library, which allows it to be used by multiple processes and threads.


Shared Memory


Shared memory is the simplest type of IPC mechanism. Its functions are similar in both
16-bit and 32-bit versions of OS/2.

Run-time shared memory is allocated while a thread is running, whereas load-time shared
memory is allocated when a process is loaded into memory by DosExecPgm, or when a
library is loaded by DosLoadModule. There are two types of run-time shared memory:
named shared memory and give-get shared memory.

Named shared memory has its name registered in the file-system name space. Creating it
places a directory entry in the file system, so any process that knows the name of the
memory can access it.

Give-get shared memory is anonymous; no name is associated with the memory. Giveable
memory is allocated by a process and is passed to another process explicitly by
specification of the address of the memory and the PID of the process that is being given
the memory. Conversely, gettable memory is acquired by specification of the memory
address from the process that allocated the memory.

Ultimately, access is passed directly from process to process. The give-get shared memory
model is safer to use than is the named shared-memory model, since access to the
memory in the former is controlled directly by the sharing process.

Load-time shared memory is allocated when a process's EXE file and associated DLLs are
loaded into memory. It consists of shared code and shared data. All code in the OS/2
system, whether it comes from an EXE or a DLL, is shared and is reentrant. It is mapped
at the same address in every virtual address space (as is all shared memory) in both
16-bit and 32-bit systems.

The writer of shared code must keep in mind that shared code accessing shared
resources, such as shared data, must be able to handle being preempted at any time.

Specifying the sharing granularity of memory is a way of classifying memory according to
how it is accessed. Thread memory (also called local memory) is memory that consists of
the thread's user stack. It is local to the thread and is mapped within the process virtual
address space.

Process memory is memory mapped within the process virtual address space; it is
accessed and shared by the threads of the process. No API calls are necessary to set up
this sharing - it is part of the multitasking model.

Shared memory is accessed and shared by threads in different processes, and is mapped
into the shared portion of the process virtual address space. Process and shared memory
usually need some type of serialization if the memory is accessed concurrently by multiple
threads. In other words, if threads within a process are using process memory that is not
their own stacks or thread local memory, then the threads need to access that memory
in a controlled fashion to guarantee the integrity of the shared data.

When multiple threads in different processes attempt to access shared memory, the same
situation arises.

Shared-address, private-storage memory objects, which are used for 'instance data' in
dynamic link libraries, usually need no serialization unless they are being accessed by
more than one thread within the same process. Instance memory is used for per-process
data within a dynamic link library.

Although shared memory is conceptually simple, it has several weaknesses. The protocol
and layout of the shared memory must be understood by all threads accessing that memory. Also, since there are multiple threads accessing the memory, semaphores or
flags usually are needed to control concurrent access to the shared region.


Semaphores


When multiple threads concurrently execute shared code that accesses shared data or
serially reusable shared resources, those threads need mutually exclusive access to the
shared resources. Semaphores are special protected variables with a defined set of
operations that allow threads to synchronize their execution. OS/2 provides two basic
types of semaphores: mutual exclusion semaphores and event synchronization
semaphores.

A critical section of code is a portion of code in which a thread accesses shared,
modifiable data. Only one thread at a time can be allowed to access the modifiable data,
and that thread must exclude other threads from executing the critical section of code
simultaneously. Threads not in the critical section continue to run.

So that threads waiting to get into the critical section are not blocked for a long time,
critical sections should be as short and fast as possible, and threads should not block
within critical sections if possible.

Critical sections as described here are not to be confused with the DosEnterCritSec and
DosExitCritSec API calls (see: m019897.html and m019990.html ) that enable and disable
thread switching within a process. These API calls provide a coarse granularity of
synchronization that is valid only for threads within a process. Disabling thread
switching can also have bad side effects on other threads within the process.

Semaphores provide a more reliable and robust mechanism for managing critical sections!

A mutual exclusion semaphore is used to serialize access of threads to shared modifiable
data or resources. When a thread wants to enter the critical section, it requests
ownership of the semaphore. If the semaphore is unowned, then there is no thread in the
critical section; the requesting thread is given ownership of the semaphore and proceeds
to execute the critical section of code.

If, while this thread is in the critical section, another thread attempts to enter that
critical section, the second thread's request for semaphore ownership blocks until the
first thread exits the critical section and releases ownership of the semaphore.

When the thread within the critical section exits, the highest-priority thread that
has blocked requesting ownership of the semaphore is awakened and is given ownership
of the semaphore.

Event synchronization semaphores are used when one or more threads need to wait for a
single event to occur. Event semaphores have no concept of ownership. They have two
possible states: set or clear.

When a thread needs to wait for an event to occur, it performs a wait operation on an
event semaphore. If the event semaphore is in the set state, the thread blocks until the
event occurs and the semaphore is cleared. If more than one thread is waiting on the
event when the semaphore is cleared, all the threads waiting on the event are notified
that the event has occurred, and the threads are made runnable.

Another type of event synchronization is the muxwait operation; it is used when a thread
needs to wait on multiple semaphores simultaneously.

Like shared memory, semaphores also have several drawbacks for IPCs. Processes sharing
the resources must understand the semaphore semantics and the shared-memory format.
Queues and pipes are higher-level IPC abstractions that utilize semaphores and shared
memory in a way that is transparent to requestors.






Mon 16 Feb 2004 18:09 Mozilla/4.61 [en] (OS/2; U)




Programmed by Dmitri Maximovich, Dmitry I. Platonoff, Eugen Kuleshov.
25.09.99 (c) 1999, RU/2. All rights reserved.
Rewritten by Dmitry Ban. All rights ignored.