Distributed Shared Memory (DSM) - Seminar Paper
Abstract
Distributed shared memory (DSM) systems represent a successful hybrid of two classes of parallel computers: shared-memory multiprocessors and distributed computer systems. They provide the shared-memory abstraction on systems whose memories are physically distributed, and consequently combine the advantages of both approaches. For this reason, distributed shared memory is recognized as one of the most attractive approaches for building large-scale, high-performance multiprocessor systems, and the growing importance of the subject calls for a thorough understanding of it. In short, DSM is the abstraction that supports the notion of shared memory on top of a physically non-shared (distributed) architecture.
Problems in this area logically resemble the cache coherence problems of shared-memory multiprocessors. The closest relation can be drawn between the algorithmic issues of DSM coherence maintenance strategies and those of cache coherence maintenance in shared-memory multiprocessors.

Introduction 
The distributed shared-memory programming paradigm has been receiving rising attention. Recent developments have resulted in viable distributed shared-memory languages that are gaining vendor support, and several early compilers have been developed. This programming model has the potential to achieve a balance between ease of programming and performance. As in the shared-memory model, programmers need not explicitly specify data accesses. Meanwhile, programmers can exploit data locality using a model that enables the placement of data close to the threads that process them, reducing remote memory accesses. In this compilation, we present the fundamental concepts associated with this programming model. These include execution models, synchronization, and memory consistency.
With DSM, local as well as remote memories can be accessed in a uniform manner, with the location of the shared region transparent to the application program. This report discusses our experience in implementing a DSM system in a paged virtual memory operating system environment. Close co-operation between the virtual memory management system and the mechanisms that make DSM possible is essential to ensure good performance of the DSM system, and to make the use of shared memory viable in a distributed environment.
Traditionally, in a uniprocessor operating system, the resource manager's (system's) view encompassed the processor, information, memory, and devices, while the extended machine's (user's) view concerned the virtual abstractions built over them, with efficiency, flexibility, and robustness as the goals. With recent technological advances such as low-cost, high-performance PCs and high-speed network interconnects, several new classes of applications are emerging, such as concurrent/parallel systems, distributed systems, and collaborative systems. Distributed systems supporting these new applications have revolutionized both the user-level and system-level perspectives.
The user's view is now transparency: a single-computer view of multiple computer systems. 
The system's view is now distribution: a decentralized/autonomous, cooperative/collaborative collection of machines. 

Traditionally, shared memory and message passing have been the two paradigms used for interprocess communication and synchronization in multiprocess computations. The primitive forms of IPC, i.e. lock files, signals, and pipes, are restrictive. Semaphores and shared memory are more appealing than message passing: in the latter, communication between processes takes the form of messages explicitly exchanged between them, whereas in the former, communication is effected through variables shared between the processes concerned. Moreover, with message passing, data can be accessed only in the order in which it was sent.
DSM is a major area of interest as it offers a better price/performance ratio, improved speed, and reliability, but problems arise from communication delays and consistency maintenance. The issues that need to be addressed are complex and impinge on kernel design. This project report is an account of our work in implementing a DSM system that preserves the interface of System V (non-distributed) shared memory.

SYSTEM V SHARED MEMORY 
In this chapter, we explain a few of the System V shared memory system calls relevant to our implementation of DSM. The basic idea of shared memory systems is illustrated in Fig 2.1. The mapping of shared memory must be accomplished through system calls, since the address spaces of distinct processes are otherwise disjoint. 
int shmget (key_t key, int size, int shmflg) 
This system call is used to create a shared memory region. ‘key’ is used to generate a unique shared memory identifier. Processes that wish to share a memory region use the same key. ‘size’ specifies the size of the region in bytes. ‘shmflg’ specifies the region creation conditions. A few of them are mentioned below. 
IPC_CREAT: creates a shared region if one with the given 'key' does not already exist. 
IPC_EXCL: used together with IPC_CREAT, causes the call to fail with an error if a region with the given key already exists. 
An important point to note is that no memory region is actually allocated by this call; only an entry is made in the system data structure (shmid_kernel), and its index in the table of descriptors (shm_seg[]) is returned. 

void * shmat (int shmid, void * shmaddr, int shmflg); 

shmat maps the region into the process address space. It is here that page tables are actually allocated; the pages themselves are allocated only when page faults occur. 'shmid' is the unique identifier returned by shmget. The 'shmaddr' argument gives the process flexibility in choosing the address; if it is zero, the address is assigned by the system. The shared memory region is assigned space between the stack and the heap. 'shmflg' specifies the alignment of the shared region. A few values are mentioned below. 
SHM_SHARE_MMU: first available aligned address. 
SHM_RND: attach the nearest page address. 
int shmdt (void * shmaddr); 
shmdt unmaps the region from the virtual address space of the process, undoing whatever shmat had earlier accomplished.
int shmctl (int shmid, int cmd, struct shmid_ds * buf); 
'cmd' specifies the operation to be performed: for example, obtaining the status of a memory region, setting permissions, or removing a shared memory region. 
IPC_STAT: returns the current values of the shmid_kernel entry. 
IPC_SET: modifies a number of members of the shmid_kernel entry. 
Figure 2.2 shows the various data structures when two processes are sharing memory. Every process occupies one entry in the process table, i.e. 'task[]'. The 'task_struct' data structure describes the characteristics of a process, whereas 'mm_struct' holds the data needed for memory management. Each 'vm_area_struct' holds the attributes of one virtual memory area of the process, and these are connected in the form of an AVL tree. Every shared memory segment occupies one entry in the 'shm_seg[]' table. The shmid_kernel data structure has a pointer to the vm_area_struct of the first process that has mapped the shared segment into its virtual memory space (process A); from this vm_area_struct there is a pointer to the next process that has attached the segment, and so on. The shared page is mapped into the virtual address space of both processes by entries for this page in the page tables of both. Chapter 6 discusses these data structures in greater detail. 
