

1. Introduction

1.1 Specifications

The Meso-NH Atmospheric Simulation System is a joint effort of Météo-France and the Centre National de la Recherche Scientifique (CNRS) (see details in "The Meso-NH Atmospheric Simulation System: Scientific Documentation"). It has been developed in Fortran 90 and uses Unix procedures. In a first stage, it was developed to run on single-processor computers.

The purpose of the present documentation is to describe an effort at CERFACS, CNRM and LA to develop tools and techniques for implementing the Meso-NH code on parallel computers.

The main specifications of the Meso-NH parallelization are:

To meet these goals, an interface library has been developed. It contains all the routines necessary to parallelize the Meso-NH model and is based on the standard MPI (Message Passing Interface) library. Current development focuses on the parallelization of multiple nested models, which requires additional interface routines to perform data exchanges between parent and child models, both decomposed over processors. The present interface library has been developed from the beginning in this spirit.
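
As a minimal illustration of this approach, the sketch below shows how an interface routine can hide the underlying MPI calls from the model code. The module and routine names (MODE_PARALLEL_SKETCH, INIT_PARALLEL, END_PARALLEL) are hypothetical and do not belong to the actual library, which is described in Section 3.

    ! Minimal sketch of an MPI wrapper in the spirit of the interface
    ! library. The names are hypothetical, not the actual Meso-NH routines.
    MODULE MODE_PARALLEL_SKETCH
    CONTAINS
      SUBROUTINE INIT_PARALLEL(KNPROC, KMYPROC)
        IMPLICIT NONE
        INCLUDE 'mpif.h'
        INTEGER, INTENT(OUT) :: KNPROC   ! total number of processors
        INTEGER, INTENT(OUT) :: KMYPROC  ! rank of the local processor
        INTEGER :: IERR
        CALL MPI_INIT(IERR)
        CALL MPI_COMM_SIZE(MPI_COMM_WORLD, KNPROC, IERR)
        CALL MPI_COMM_RANK(MPI_COMM_WORLD, KMYPROC, IERR)
      END SUBROUTINE INIT_PARALLEL

      SUBROUTINE END_PARALLEL()
        IMPLICIT NONE
        INTEGER :: IERR
        CALL MPI_FINALIZE(IERR)
      END SUBROUTINE END_PARALLEL
    END MODULE MODE_PARALLEL_SKETCH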

1.2 Definitions

To treat the lateral boundary conditions in the Meso-NH model, the model arrays are currently over-dimensioned by one grid point (see Fig. 1). The physical domain corresponds to the inner area where the physical calculations are valid. The extended domain, where the arrays are defined, is the whole domain including the outer points. The width of this additional area, JPHEXT, is parameterized. At present, only JPHEXT=1 is implemented. A sketch of this over-dimensioning is given after Fig. 1.

Figure 1: Horizontal grid structure
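
The following sketch illustrates the over-dimensioning on a sequential example; the physical sizes used here are illustrative, not Meso-NH defaults. Only the inner area is touched by the physical calculations:

    ! Sketch of the physical vs. extended domain (sequential case).
    ! NIMAX/NJMAX are the physical sizes; the values are examples.
    PROGRAM EXTENDED_DOMAIN_SKETCH
    IMPLICIT NONE
    INTEGER, PARAMETER :: JPHEXT = 1          ! width of the outer area
    INTEGER, PARAMETER :: NIMAX = 10, NJMAX = 8
    REAL, DIMENSION(NIMAX+2*JPHEXT, NJMAX+2*JPHEXT) :: ZFIELD
    INTEGER :: JI, JJ
    ZFIELD(:,:) = 0.0
    ! Physical calculations are valid on the inner area only:
    DO JJ = 1+JPHEXT, NJMAX+JPHEXT
      DO JI = 1+JPHEXT, NIMAX+JPHEXT
        ZFIELD(JI,JJ) = 1.0
      END DO
    END DO
    PRINT *, 'extended sizes:', NIMAX+2*JPHEXT, NJMAX+2*JPHEXT
    END PROGRAM EXTENDED_DOMAIN_SKETCH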

The domain decomposition used for parallelization is two-dimensional, i.e. the physical domain is split into horizontal subdomains in the x and y directions (see Fig. 2). The arrays of each subdomain are allocated on a different processor. Each processor is responsible for all calculations on its physical subdomain. The set of physical subdomains completely covers the physical domain, and the physical subdomains do not overlap each other.
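
The sketch below illustrates one possible block partitioning rule, assuming a 0-based processor rank and illustrative routine and argument names; the actual Meso-NH decomposition algorithm may differ:

    ! Sketch of a 2D block decomposition of the physical domain.
    ! The routine name and the partitioning rule are illustrative.
    SUBROUTINE DECOMPOSE_2D(KIMAX_ll, KJMAX_ll, KNPROC_X, KNPROC_Y, &
                            KMYPROC, KXOR, KXEND, KYOR, KYEND)
    IMPLICIT NONE
    INTEGER, INTENT(IN)  :: KIMAX_ll, KJMAX_ll   ! global physical sizes
    INTEGER, INTENT(IN)  :: KNPROC_X, KNPROC_Y   ! processor grid
    INTEGER, INTENT(IN)  :: KMYPROC              ! processor rank (0-based)
    INTEGER, INTENT(OUT) :: KXOR, KXEND, KYOR, KYEND ! global indices of
                                                     ! the physical subdomain
    INTEGER :: IPX, IPY   ! processor coordinates in the grid
    IPX = MOD(KMYPROC, KNPROC_X)
    IPY = KMYPROC / KNPROC_X
    ! Distribute points as evenly as possible: the first
    ! MOD(size, nproc) processors get one extra point.
    KXOR  = IPX*(KIMAX_ll/KNPROC_X) + MIN(IPX, MOD(KIMAX_ll,KNPROC_X)) + 1
    KXEND = KXOR + KIMAX_ll/KNPROC_X - 1
    IF (IPX < MOD(KIMAX_ll,KNPROC_X)) KXEND = KXEND + 1
    KYOR  = IPY*(KJMAX_ll/KNPROC_Y) + MIN(IPY, MOD(KJMAX_ll,KNPROC_Y)) + 1
    KYEND = KYOR + KJMAX_ll/KNPROC_Y - 1
    IF (IPY < MOD(KJMAX_ll,KNPROC_Y)) KYEND = KYEND + 1
    END SUBROUTINE DECOMPOSE_2D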

Finite differences require data along the border of adjacent physical subdomains, so the distributed arrays are defined over the extended subdomains. The data located in the overlap area (named the halo) are computed and communicated by the processors corresponding to the adjacent subdomains. The width of the overlap area, JPHALO, can differ from JPHEXT and can be easily adjusted. The extended subdomain is composed of the physical subdomain plus the halo or outer points. A sketch of a halo exchange is given after Fig. 2.

Figure 2: Domain decomposition
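
The sketch below illustrates a halo update in the x direction with MPI_SENDRECV, assuming JPHALO=1 and illustrative routine and argument names; the actual exchange routines of the library are described in Section 3:

    ! Sketch of a halo update in the x direction (JPHALO=1 assumed).
    ! KWEST/KEAST are the ranks of the neighbouring subdomains
    ! (MPI_PROC_NULL at a domain boundary); the names are illustrative.
    SUBROUTINE UPDATE_HALO_X(PFIELD, KIU, KJU, KWEST, KEAST)
    IMPLICIT NONE
    INCLUDE 'mpif.h'
    INTEGER, INTENT(IN) :: KIU, KJU          ! extended subdomain sizes
    REAL, DIMENSION(KIU,KJU), INTENT(INOUT) :: PFIELD
    INTEGER, INTENT(IN) :: KWEST, KEAST      ! neighbour ranks
    INTEGER :: ISTATUS(MPI_STATUS_SIZE), IERR
    ! Send the easternmost physical column, receive the west halo:
    CALL MPI_SENDRECV(PFIELD(KIU-1,:), KJU, MPI_REAL, KEAST, 1, &
                      PFIELD(1,:),     KJU, MPI_REAL, KWEST, 1, &
                      MPI_COMM_WORLD, ISTATUS, IERR)
    ! Send the westernmost physical column, receive the east halo:
    CALL MPI_SENDRECV(PFIELD(2,:),   KJU, MPI_REAL, KWEST, 2, &
                      PFIELD(KIU,:), KJU, MPI_REAL, KEAST, 2, &
                      MPI_COMM_WORLD, ISTATUS, IERR)
    END SUBROUTINE UPDATE_HALO_X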

A variable (or an array) is local when it refers to the subdomain: its value differs on each processor. A variable is global when it refers to the whole domain: its name is suffixed by _ll and its value is the same on all processors. For example, the size of the physical domain in the x direction is called NIMAX_ll, while the size of the physical subdomain is NIMAX, with a different value on each processor.

Due to the data distribution, the use of the SIZE intrinsic function to retrieve such information should be avoided. Special functions have been developed to provide the size and position of the physical or extended subdomain (see below).
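
As a usage sketch, the fragment below retrieves the local bounds of the physical subdomain from the library before looping, instead of calling SIZE. The module name MODE_ll, the routine name GET_INDICE_ll and its argument order follow the _ll convention but are assumptions here; they should be checked against Section 3 and the Annexe.

    ! Usage sketch: query the library for the local physical bounds
    ! instead of using SIZE. GET_INDICE_ll and its argument order are
    ! assumed; see Section 3 and the Annexe for the actual interface.
    SUBROUTINE COMPUTE_ON_SUBDOMAIN(PFIELD)
    USE MODE_ll
    IMPLICIT NONE
    REAL, DIMENSION(:,:), INTENT(INOUT) :: PFIELD
    INTEGER :: IIB, IJB, IIE, IJE   ! first/last physical indices
    INTEGER :: JI, JJ
    CALL GET_INDICE_ll(IIB, IJB, IIE, IJE)
    DO JJ = IJB, IJE
      DO JI = IIB, IIE
        PFIELD(JI,JJ) = PFIELD(JI,JJ) + 1.0   ! physical points only
      END DO
    END DO
    END SUBROUTINE COMPUTE_ON_SUBDOMAIN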




The present documentation describes the parallelization of the Meso-NH code for one model. The flow structure of the parallelized version of Meso-NH is described in Section 2. The interface library is described in Section 3, and the contents of each routine are detailed in the Annexe; examples using these routines are also given there. Specific routines have been written to manage the I/O data flow (Section 4). A general idea of the data structure is given in Section 5.

