Advantages of process cooperation
- Information sharing
- Computation speed-up
- Modularity
- Convenience
Disadvantages of process cooperation
- Data corruption, deadlocks, increased complexity
- Requires processes to synchronize their processing
Event Flags
Shared Memory
If threads are in different processes, we need to request a segment of shared memory from the kernel; the processes then communicate by reading and writing data in this shared segment.
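A minimal sketch of this kernel-provided shared segment, using Python's `multiprocessing.shared_memory` module (the segment name and the 16-byte size are arbitrary choices for illustration):

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to an existing segment by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def read_via_shared_memory():
    # Request a segment of shared memory from the kernel.
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    data = bytes(shm.buf[:5])   # read what the other process wrote
    shm.close()
    shm.unlink()                # release the segment back to the kernel
    return data
```

Note that once both processes hold the segment, the kernel is no longer involved in each read or write, which is why synchronization is left to the processes themselves.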
If threads are in the same process, there are several types of shared memory:
1) Shared global variables: global variables visible in the source code of multiple threads.
2) Shared private data: private variables whose addresses are given to other threads.
3) Static variables: all static variables within a shared function become shared data. All functions called from a shared function are also shared functions.
With shared memory, synchronization must be achieved by the processes/threads themselves to avoid data corruption.
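To illustrate case 1) and the need for thread-side synchronization, here is a sketch in which several threads increment a shared global counter, guarded by a lock (the counts and thread number are arbitrary):

```python
import threading

counter = 0                 # shared global variable (case 1 above)
lock = threading.Lock()     # synchronization done by the threads themselves

def add_many(n):
    global counter
    for _ in range(n):
        with lock:          # protect the read-modify-write sequence
            counter += 1

def run():
    threads = [threading.Thread(target=add_many, args=(100_000,))
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter
```

Without the lock, the interleaved `counter += 1` operations could lose updates, which is exactly the data corruption the notes warn about.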
Message Passing
In message passing, there is no shared memory between processes. Through a kernel function (system call), the kernel copies data from the source process into kernel space, and then copies it from kernel space into the destination process. There are basically two operations, send(pid, message) and receive(pid, message). Synchronization and data protection are handled by the kernel.
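A sketch of the send/receive pair using a `multiprocessing.Pipe`, where the kernel copies each message between the two processes (the uppercase echo is just an arbitrary way to show the child processed the message):

```python
from multiprocessing import Process, Pipe

def child(conn):
    msg = conn.recv()        # receive(...): kernel copies the message in
    conn.send(msg.upper())   # send(...): reply is copied back via the kernel
    conn.close()

def round_trip():
    parent_end, child_end = Pipe()
    p = Process(target=child, args=(child_end,))
    p.start()
    parent_end.send("hello")  # no shared memory: the bytes are copied, not shared
    reply = parent_end.recv()
    p.join()
    return reply
```

Note the contrast with shared memory: every message crosses the kernel, so the kernel can enforce synchronization and protection, at the cost of extra copying.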
Blocking & Non-Blocking
- Blocking Send — sender blocked until message received by mailbox or process
- Nonblocking Send — sender resumes operation immediately after sending
- Blocking Receive — receiver blocks until a message is available
- Nonblocking Receive — receiver returns immediately with either a valid or null message.
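The blocking/nonblocking receive distinction can be sketched with Python's `queue` module, where `None` plays the role of the "null message" (the helper name is my own):

```python
import queue

def nonblocking_receive(q):
    # Returns immediately: either a valid message or None (a "null" message).
    try:
        return q.get_nowait()
    except queue.Empty:
        return None

def blocking_receive(q):
    # Blocks until a message is available.
    return q.get(block=True)
```

A blocking send would correspond to `q.put(msg, block=True)` on a full bounded queue, and a nonblocking send to `q.put_nowait(msg)`.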
Buffering
All message-passing systems require a framework to temporarily buffer messages. These queues are implemented in one of three ways:
- Zero Capacity — no messages may be queued within the link; the sender must block until the receiver retrieves the message.
- Bounded Capacity — the link has a finite number of message buffers. If no buffer is available (the buffer is full), the sender must block until one is freed up.
- Unbounded Capacity — the link has unlimited buffer space, so the sender never needs to block. It can be approximated by a linked list, though it still has a practical limit (when the heap section is full).
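Bounded capacity can be sketched with `queue.Queue(maxsize=...)`; here the nonblocking send variant fails where a blocking sender would wait (the helper name and the capacity are arbitrary):

```python
import queue

def fill_bounded(capacity, items):
    q = queue.Queue(maxsize=capacity)  # bounded buffer: finite message slots
    sent = 0
    for item in items:
        try:
            q.put_nowait(item)  # nonblocking send: raises instead of blocking
            sent += 1
        except queue.Full:
            break               # a blocking send (q.put) would wait here
                                # until the receiver frees a slot
    return sent, q.qsize()
```

`queue.Queue()` with no `maxsize` behaves like the unbounded case: `put` never blocks until memory runs out.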