Welcome to MITgcm's user manual
Authors
Alistair Adcroft, Jean-Michel Campin, Ed Doddridge, Stephanie Dutkiewicz, Constantinos Evangelinos, David Ferreira, Mick Follows, Gael Forget, Baylor Fox-Kemper, Patrick Heimbach, Chris Hill, Ed Hill, Helen Hill, Oliver Jahn, Jody Klymak, Martin Losch, John Marshall, Guillaume Maze, Matt Mazloff, Dimitris Menemenlis, Andrea Molod, and Jeff Scott
Overview
This document provides the reader with the information necessary to carry out numerical experiments using MITgcm. It gives a comprehensive description of the continuous equations on which the model is based, the numerical algorithms the model employs and a description of the associated program code. Along with the hydrodynamical kernel, physical and biogeochemical parameterizations of key atmospheric and oceanic processes are available. A number of examples illustrating the use of the model in both process and general circulation studies of the atmosphere and ocean are also presented.
Introduction
MITgcm has a number of novel aspects:

- it can be used to study both atmospheric and oceanic phenomena; one hydrodynamical kernel is used to drive forward both atmospheric and oceanic models - see Figure 1.1
- it has a non-hydrostatic capability and so can be used to study both small-scale and large-scale processes - see Figure 1.2
- finite volume techniques are employed yielding an intuitive discretization and support for the treatment of irregular geometries using orthogonal curvilinear grids and shaved cells - see Figure 1.3
- tangent linear and adjoint counterparts are automatically maintained along with the forward model, permitting sensitivity and optimization studies.
- the model is developed to perform efficiently on a wide variety of computational platforms.
Key publications reporting on and charting the development of the model are Hill and Marshall (1995), Marshall et al. (1997a), Marshall et al. (1997b), Adcroft and Marshall (1997), Marshall et al. (1998), Adcroft and Marshall (1999), Hill et al. (1999), Marotzke et al. (1999), Adcroft and Campin (2004a), Adcroft et al. (2004b), Marshall et al. (2004) (an overview on the model formulation can also be found in Adcroft et al. (2004c)):
Hill, C. and J. Marshall, (1995) Application of a Parallel Navier-Stokes Model to Ocean Circulation in Parallel Computational Fluid Dynamics, In Proceedings of Parallel Computational Fluid Dynamics: Implementations and Results Using Parallel Computers, 545-552. Elsevier Science B.V.: New York [HM95]
Marshall, J., C. Hill, L. Perelman, and A. Adcroft, (1997a) Hydrostatic, quasi-hydrostatic, and non-hydrostatic ocean modeling, J. Geophysical Res., 102(C3), 5733-5752 [MHPA97]
Marshall, J., A. Adcroft, C. Hill, L. Perelman, and C. Heisey, (1997b) A finite-volume, incompressible Navier-Stokes model for studies of the ocean on parallel computers, J. Geophysical Res., 102(C3), 5753-5766 [MAH+97]
Adcroft, A.J., Hill, C.N. and J. Marshall, (1997) Representation of topography by shaved cells in a height coordinate ocean model, Mon. Wea. Rev., 125, 2293-2315 [AHM97]
Marshall, J., Jones, H. and C. Hill, (1998) Efficient ocean modeling using non-hydrostatic algorithms, Journal of Marine Systems, 18, 115-134 [MJH98]
Adcroft, A., Hill, C. and J. Marshall, (1999) A new treatment of the Coriolis terms in C-grid models at both high and low resolutions, Mon. Wea. Rev., 127, 1928-1936 [AHM99]
Hill, C., Adcroft, A., Jamous, D., and J. Marshall, (1999) A Strategy for Terascale Climate Modeling, In Proceedings of the Eighth ECMWF Workshop on the Use of Parallel Processors in Meteorology, 406-425. World Scientific Publishing Co: UK [HAJM99]
Marotzke, J., Giering, R., Zhang, K.Q., Stammer, D., Hill, C., and T. Lee, (1999) Construction of the adjoint MIT ocean general circulation model and application to Atlantic heat transport variability, J. Geophysical Res., 104(C12), 29,529-29,547 [MGZ+99]
A. Adcroft and J.-M. Campin, (2004a) Rescaled height coordinates for accurate representation of free-surface flows in ocean circulation models, Ocean Modelling, 7, 269-284 [AC04]
A. Adcroft, J.-M. Campin, C. Hill, and J. Marshall, (2004b) Implementation of an atmosphere-ocean general circulation model on the expanded spherical cube, Mon. Wea. Rev., 132, 2845-2863 [ACHM04]
J. Marshall, A. Adcroft, J.-M. Campin, C. Hill, and A. White, (2004) Atmosphere-ocean modeling exploiting fluid isomorphisms, Mon. Wea. Rev., 132, 2882-2894 [MAC+04]
A. Adcroft, C. Hill, J.-M. Campin, J. Marshall, and P. Heimbach, (2004c) Overview of the formulation and numerics of the MITgcm, In Proceedings of the ECMWF seminar series on Numerical Methods, Recent developments in numerical methods for atmosphere and ocean modelling, 139-149. URL: http://mitgcm.org/pdfs/ECMWF2004Adcroft.pdf [AHJMC+04]
We begin by briefly showing some of the results of the model in action to give a feel for the wide range of problems that can be addressed using it.
Illustrations of the model in action
MITgcm has been designed and used to model a wide range of phenomena, from convection on the scale of meters in the ocean to the global pattern of atmospheric winds - see Figure 1.2. To give a flavor of the kinds of problems the model has been used to study, we briefly describe some of them here. A more detailed description of the underlying formulation, numerical algorithm and implementation that lie behind these calculations is given later. Indeed, many of the illustrative examples shown below can be easily reproduced: simply download the model (the minimum you need is a PC running Linux, together with a FORTRAN77 compiler) and follow the examples described in detail in the documentation.
Global atmosphere: 'Held-Suarez' benchmark
A novel feature of MITgcm is its ability to simulate, using one basic algorithm, both atmospheric and oceanographic flows at both small and large scales.
Figure 1.4 shows an instantaneous plot of the 500 mb temperature field obtained using the atmospheric isomorph of MITgcm run at 2.8° resolution on the cubed sphere. We see cold air over the pole (blue) and warm air along an equatorial band (red). Fully developed baroclinic eddies spawned in the northern hemisphere storm track are evident. There are no mountains or land-sea contrast in this calculation, but you can easily put them in. The model is driven by relaxation to a radiative-convective equilibrium profile, following the description set out in Held and Suarez (1994) [HS94], designed to test atmospheric hydrodynamical cores.
As described in Adcroft et al. (2004) [ACHM04], a 'cubed sphere' is used to discretize the globe, permitting a uniform gridding and obviating the need to Fourier filter. The 'vector-invariant' form of MITgcm supports any orthogonal curvilinear grid, of which the cubed sphere is just one of many choices.
Figure 1.5 shows the 5-year mean, zonally averaged zonal wind from a 20-level configuration of the model. It compares favorably with more conventional spatial discretization approaches. The two plots show the field calculated using the cube-sphere grid and the flow calculated using a regular, spherical polar latitude-longitude grid. Both grids are supported within the model.
Ocean gyres
Baroclinic instability is a ubiquitous process in the ocean, as well as the atmosphere. Ocean eddies play an important role in modifying the hydrographic structure and current systems of the oceans. Coarse resolution models of the oceans cannot resolve the eddy field and yield rather broad, diffusive patterns of ocean currents. But if the resolution of our models is increased until the baroclinic instability process is resolved, numerical solutions of a different and much more realistic kind can be obtained.
Figure 1.6 shows the surface temperature and velocity field obtained from MITgcm run at \(\frac{1}{6}^{\circ}\) horizontal resolution on a lat-lon grid in which the pole has been rotated by 90° onto the equator (to avoid the convergence of meridians in northern latitudes). 21 levels are used in the vertical, with a 'lopped cell' representation of topography. The development and propagation of anomalously warm and cold eddies can be clearly seen in the Gulf Stream region. The transport of warm water northward by the mean flow of the Gulf Stream is also clearly visible.
Global ocean circulation
Figure 1.7 shows the pattern of ocean currents at the surface of a 4° global ocean model run with 15 vertical levels. Lopped cells are used to represent topography on a regular lat-lon grid extending from 70°N to 70°S. The model is driven using monthly-mean winds with mixed boundary conditions on temperature and salinity at the surface. The transfer properties of ocean eddies, convection and mixing are parameterized in this model.
Figure 1.8 shows the meridional overturning circulation of the global ocean in Sverdrups.
Convection and mixing over topography
Dense plumes generated by localized cooling on the continental shelf of the ocean may be influenced by rotation when the deformation radius is smaller than the width of the cooling region. Rather than gravity plumes, the mechanism for moving dense fluid down the shelf is then through geostrophic eddies. The simulation shown in Figure 1.9 (blue is cold, dense fluid; red is warmer, lighter fluid) employs the non-hydrostatic capability of MITgcm to trigger convection by surface cooling. The cold, dense water falls down the slope but is deflected along the slope by rotation. It is found that entrainment in the vertical plane is reduced when rotational control is strong, and replaced by lateral entrainment due to the baroclinic instability of the along-slope current.
Boundary forced internal waves
The unique ability of MITgcm to treat non-hydrostatic dynamics in the presence of complex geometry makes it an ideal tool to study internal wave dynamics and mixing in oceanic canyons and ridges driven by large amplitude barotropic tidal currents imposed through open boundary conditions.
Figure 1.10 shows the influence of cross-slope topographic variations on internal wave breaking - the cross-slope velocity is in color, the density contoured. The internal waves are excited by application of open boundary conditions on the left. They propagate to the sloping boundary (represented using MITgcm's finite volume spatial discretization) where they break under non-hydrostatic dynamics.
Parameter sensitivity using the adjoint of MITgcm
Tangent linear and adjoint counterparts of the forward model are supported using an 'automatic adjoint compiler'. These can be used in parameter sensitivity and data assimilation studies.
As one example of application of the MITgcm adjoint, Figure 1.11 maps the gradient \(\frac{\partial J}{\partial\mathcal{H}}\) where \(J\) is the magnitude of the overturning streamfunction shown in Figure 1.8 at 60°N and \(\mathcal{H}(\lambda,\varphi)\) is the mean, local air-sea heat flux over a 100 year period. We see that \(J\) is sensitive to heat fluxes over the Labrador Sea, one of the important sources of deep water for the thermohaline circulation. This calculation also yields sensitivities to all other model parameters.
Global state estimation of the ocean
An important application of MITgcm is in state estimation of the global ocean circulation. An appropriately defined 'cost function', which measures the departure of the model from observations (both remotely sensed and in-situ) over an interval of time, is minimized by adjusting 'control parameters' such as air-sea fluxes, the wind field, the initial conditions, etc. Figure 1.12 and Figure 1.13 show the large scale planetary circulation and a Hovmöller plot of equatorial sea-surface height. Both are obtained from assimilation bringing the model into consistency with altimetric and in-situ observations over the period 1992-1997.
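The least-squares principle behind such a state estimate can be illustrated with a toy scalar problem. All names and numbers below are hypothetical, and the "model" is a trivial linear map; the real system adjusts high-dimensional control fields and obtains the gradient of the cost function from the adjoint model:

```python
import numpy as np

# Toy illustration (not MITgcm code): estimate a single control
# parameter c -- say, a constant surface heat-flux correction -- by
# minimizing a least-squares cost function J that measures the
# model-data misfit.  The gradient dJ/dc plays the role that the
# adjoint model plays in the real system.

obs = np.array([1.0, 2.0, 3.0])      # hypothetical observations
sigma = 0.5                          # observational error

def model(c):
    # hypothetical linear forward model: state shifted by control c
    return np.array([0.5, 1.5, 2.5]) + c

def cost(c):
    misfit = model(c) - obs
    return 0.5 * np.sum((misfit / sigma) ** 2)

def grad(c):
    # analytic gradient of J for this linear model
    return np.sum(model(c) - obs) / sigma**2

c = 0.0                              # first guess
for _ in range(100):                 # steepest descent on J
    c -= 0.1 * grad(c)

print(round(c, 3))
```

The descent drives the misfit to zero, recovering the control value (here 0.5) that brings the model into consistency with the data.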
Ocean biogeochemical cycles
MITgcm is being used to study global biogeochemical cycles in the ocean. For example, one can study the effects of interannual changes in meteorological forcing and upper ocean circulation on the fluxes of carbon dioxide and oxygen between the ocean and atmosphere. Figure 1.14 shows the annual air-sea flux of oxygen and its relation to density outcrops in the southern oceans from a single year of a global, interannually varying simulation. The simulation is run at 1°×1° resolution telescoping to \(\frac{1}{3}^{\circ}\times\frac{1}{3}^{\circ}\) in the tropics (not shown).
Simulations of laboratory experiments
Figure 1.16 shows MITgcm being used to simulate a laboratory experiment (Figure 1.15) inquiring into the dynamics of the Antarctic Circumpolar Current (ACC). An initially homogeneous tank of water (1 m in diameter) is driven from its free surface by a rotating heated disk. The combined action of mechanical and thermal forcing creates a lens of fluid which becomes baroclinically unstable. The stratification and depth of penetration of the lens are arrested by its instability, in a process analogous to that which sets the stratification of the ACC.
Continuous equations in 'r' coordinates
To render atmosphere and ocean models from one dynamical core we exploit 'isomorphisms' between equation sets that govern the evolution of the respective fluids - see Figure 1.17. One system of hydrodynamical equations is written down and encoded. The model variables have different interpretations depending on whether the atmosphere or ocean is being studied. Thus, for example, the vertical coordinate '\(r\)' is interpreted as pressure, \(p\), if we are modeling the atmosphere (right hand side of Figure 1.17) and height, \(z\), if we are modeling the ocean (left hand side of Figure 1.17).
The state of the fluid at any time is characterized by the distribution of velocity \(\vec{\mathbf{v}}\), active tracers \(\theta\) and \(S\), a 'geopotential' \(\phi\) and density \(\rho =\rho (\theta ,S,p)\) which may depend on \(\theta\), \(S\), and \(p\). The equations that govern the evolution of these fields, obtained by applying the laws of classical mechanics and thermodynamics to a Boussinesq, Navier-Stokes fluid, are written in terms of a generic vertical coordinate, \(r\), so that the appropriate kinematic boundary conditions can be applied isomorphically - see Figure 1.18.
Here:
with \(\mathbf{\nabla }_{h}\) operating in the horizontal and \(\widehat{k} \frac{\partial }{\partial r}\) operating in the vertical, where \(\widehat{k}\) is a unit vector in the vertical
The \(\mathcal{F}\)'s and \(\mathcal{Q}\)'s are provided by 'physics' and forcing packages for atmosphere and ocean. These are described in later chapters.
Kinematic Boundary conditions
Vertical
at fixed and moving \(r\) surfaces we set (see Figure 1.18):
Here
where \(R_{o}(x,y)\) is the '\(r\)-value' (height or pressure, depending on whether we are in the atmosphere or ocean) of the 'moving surface' in the resting fluid and \(\eta\) is the departure from \(R_{o}(x,y)\) in the presence of motion.
Atmosphere
In the atmosphere (see Figure 1.18), we interpret:
where
In the above the ideal gas law, \(p=\rho RT\), has been expressed in terms of the Exner function \(\Pi (p)\) given by (1.16) (see also Section 1.4.1)
where \(p_{c}\) is a reference pressure and \(\kappa =R/c_{p}\) with \(R\) the gas constant and \(c_{p}\) the specific heat of air at constant pressure.
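As a numerical illustration, the Exner function and the associated potential temperature can be evaluated directly. The sketch below is not MITgcm code; it assumes the standard definitions \(\Pi(p)=(p/p_c)^{\kappa}\) and \(\theta=T/\Pi(p)\), with illustrative dry-air values for \(R\) and \(c_p\):

```python
# Sketch (not MITgcm code): the Exner function and potential
# temperature, assuming the standard definitions
#   Pi(p) = (p / p_c) ** kappa,   theta = T / Pi(p),
# with illustrative dry-air constants.

R = 287.04        # gas constant for dry air (J/kg/K)
c_p = 1004.64     # specific heat at constant pressure (J/kg/K)
kappa = R / c_p   # ~2/7
p_c = 1.0e5       # reference pressure (Pa)

def exner(p):
    return (p / p_c) ** kappa

# at the reference pressure, Pi = 1 and theta equals T;
# aloft, theta exceeds the in-situ temperature
T, p = 250.0, 5.0e4            # air at 500 mb and 250 K
theta = T / exner(p)
print(round(theta, 1))         # roughly 305 K
```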
At the top of the atmosphere (which is 'fixed' in our \(r\) coordinate):
In a resting atmosphere the elevation of the mountains at the bottom is given by
i.e. the (hydrostatic) pressure at the top of the mountains in a resting atmosphere.
The boundary conditions at top and bottom are given by:
Then the (hydrostatic form of) equations (1.1)-(1.6) yields a consistent set of atmospheric equations which, for convenience, are written out in \(p\)-coordinates in Section 1.4.1 - see eqs. (1.59)-(1.63).
Ocean
In the ocean we interpret:
where \(\rho _{c}\) is a fixed reference density of water and \(g\) is the acceleration due to gravity.
In the above:
At the bottom of the ocean: \(R_{fixed}(x,y)=H(x,y)\).
The surface of the ocean is given by: \(R_{moving}=\eta\)
The position of the resting free surface of the ocean is given by \(R_{o}=Z_{o}=0\).
Boundary conditions are:
where \(\eta\) is the elevation of the free surface.
Then equations (1.1)-(1.6) yield a consistent set of oceanic equations which, for convenience, are written out in \(z\)-coordinates in Section 1.5.1 - see eqs. (1.98) to (1.103).
Hydrostatic, Quasi-hydrostatic, Quasi-nonhydrostatic and Non-hydrostatic forms
Let us separate \(\phi\) into surface, hydrostatic and non-hydrostatic terms:
and write (1.1) in the form:
Here \(\epsilon _{nh}\) is a non-hydrostatic parameter.
The \(\left( \vec{\mathbf{G}}_{\vec{v}},G_{\dot{r}}\right)\) in (1.26) and (1.28) represent advective, metric and Coriolis terms in the momentum equations. In spherical coordinates they take the form [1] - see Marshall et al. (1997a) [MHPA97] for a full discussion:
In the above, '\(r\)' is the distance from the center of the earth and '\(\varphi\)' is latitude (see Figure 1.20).
Grad and div operators in spherical coordinates are defined in Coordinate systems.
Shallow atmosphere approximation
Most models are based on the 'hydrostatic primitive equations' (HPEs), in which the vertical momentum equation is reduced to a statement of hydrostatic balance and the 'traditional approximation' is made, treating the Coriolis force approximately and invoking the shallow atmosphere approximation. MITgcm need not make the 'traditional approximation'. To be able to support consistent non-hydrostatic forms the shallow atmosphere approximation can be relaxed - when dividing through by \(r\) in, for example, (1.29), we do not replace \(r\) by \(a\), the radius of the earth.
Hydrostatic and quasi-hydrostatic forms
These are discussed at length in Marshall et al. (1997a) [MHPA97].
In the 'hydrostatic primitive equations' (HPE) all the underlined terms in Eqs. (1.29) \(\rightarrow\) (1.31) are neglected and '\(r\)' is replaced by '\(a\)', the mean radius of the earth. Once the pressure is found at one level - e.g. by inverting a 2d elliptic equation for \(\phi _{s}\) at \(r=R_{moving}\) - the pressure can be computed at all other levels by integration of the hydrostatic relation, eq (1.27).
In the 'quasi-hydrostatic' equations (QH) strict balance between gravity and vertical pressure gradients is not imposed. The \(2\Omega u\cos\varphi\) Coriolis terms are not neglected and are balanced by a non-hydrostatic contribution to the pressure field: only the terms underlined twice in Eqs. (1.29) \(\rightarrow\) (1.31) are set to zero and, simultaneously, the shallow atmosphere approximation is relaxed. In QH all the metric terms are retained and the full variation of the radial position of a particle monitored. The QH vertical momentum equation (1.28) becomes:
making a small correction to the hydrostatic pressure.
QH has good energetic credentials - they are the same as for HPE. Importantly, however, it has the same angular momentum principle as the full non-hydrostatic model (NH) - see Marshall et al. (1997a) [MHPA97]. As in HPE, only a 2d elliptic problem need be solved.
Non-hydrostatic and quasi-nonhydrostatic forms
MITgcm presently supports a full non-hydrostatic ocean isomorph, but only a quasi-nonhydrostatic atmospheric isomorph.
Non-hydrostatic Ocean
In the non-hydrostatic ocean model all terms in Eqs. (1.29) \(\rightarrow\) (1.31) are retained. A three dimensional elliptic equation must be solved subject to Neumann boundary conditions (see below). It is important to note that use of the full NH does not admit any new 'fast' waves into the system - the incompressible condition (1.3) has already filtered out acoustic modes. It does, however, ensure that the gravity waves are treated accurately, with an exact dispersion relation. The NH set has a complete angular momentum principle and consistent energetics - see White and Bromley (1995) [WB95]; Marshall et al. (1997a) [MHPA97].
Summary of equation sets supported by model
Atmosphere
Hydrostatic, quasi-hydrostatic and quasi-nonhydrostatic forms of the compressible non-Boussinesq equations in \(p\)-coordinates are supported.
The hydrostatic set is written out in \(p\)-coordinates in Hydrostatic Primitive Equations for the Atmosphere in Pressure Coordinates - see eqs. (1.59) to (1.63).
A quasi-nonhydrostatic form is also supported.
Ocean
Hydrostatic and quasi-hydrostatic forms of the incompressible Boussinesq equations in \(z\)-coordinates are supported.
Non-hydrostatic forms of the incompressible Boussinesq equations in \(z\)-coordinates are supported - see eqs. (1.98) to (1.103).
[1] In the hydrostatic primitive equations (HPE) all underlined terms in (1.29), (1.30) and (1.31) are omitted; the singly-underlined terms are included in the quasi-hydrostatic model (QH). The fully non-hydrostatic model (NH) includes all terms.
Solution strategy
The method of solution employed in the HPE, QH and NH models is summarized in Figure 1.19. Under all dynamics, a 2d elliptic equation is first solved to find the surface pressure, and the hydrostatic pressure at any level is computed from the weight of fluid above. Under HPE and QH dynamics, the horizontal momentum equations are then stepped forward and \(\dot{r}\) found from continuity. Under NH dynamics a 3d elliptic equation must be solved for the non-hydrostatic pressure before stepping forward the horizontal momentum equations; \(\dot{r}\) is found by stepping forward the vertical momentum equation.
There is no penalty in implementing QH over HPE except, of course, some complication that goes with the inclusion of \(\cos \varphi\) Coriolis terms and the relaxation of the shallow atmosphere approximation. But this leads to negligible increase in computation. In NH, in contrast, one additional elliptic equation - a three-dimensional one - must be inverted for \(p_{nh}\). However, the 'overhead' of the NH model is essentially negligible in the hydrostatic limit (see detailed discussion in Marshall et al. (1997a) [MHPA97]), resulting in a non-hydrostatic algorithm that, in the hydrostatic limit, is as computationally economic as the HPEs.
Finding the pressure field
Unlike the prognostic variables \(u\), \(v\), \(w\), \(\theta\) and \(S\), the pressure field must be obtained diagnostically. We proceed, as before, by dividing the total (pressure/geo) potential into three parts, a surface part, \(\phi _{s}(x,y)\), a hydrostatic part \(\phi _{hyd}(x,y,r)\) and a non-hydrostatic part \(\phi _{nh}(x,y,r)\), as in (1.25), and writing the momentum equation as in (1.26).
Hydrostatic pressure
Hydrostatic pressure is obtained by integrating (1.27) vertically from \(r=R_{o}\) where \(\phi _{hyd}(r=R_{o})=0\), to yield:
and so
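A minimal numerical sketch of this construction is given below. It is not MITgcm code: the layer thicknesses and buoyancy profile are illustrative, and we assume the hydrostatic relation (1.27) takes the form \(\partial\phi_{hyd}/\partial r=-b\), so that \(\phi_{hyd}(r)=\int_{r}^{R_{o}} b\, dr'\):

```python
import numpy as np

# Sketch (not MITgcm code): hydrostatic potential phi_hyd obtained by
# integrating the buoyancy b downward from the surface r = R_o, where
# phi_hyd(R_o) = 0.  The layer thicknesses dr and the buoyancy
# profile b are illustrative.

dr = np.array([10.0, 10.0, 20.0, 20.0])     # layer thicknesses
b = np.array([-0.01, -0.02, -0.03, -0.04])  # buoyancy at layer centers

# phi_hyd at the bottom of each layer: the running integral of b
# from that level up to R_o, accumulated layer by layer downward
phi_hyd = np.cumsum(b * dr)

print(phi_hyd)
```

In the model the same running sum is performed level by level, so each level's hydrostatic pressure reflects the weight of fluid above it.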
The model can be easily modified to accommodate a loading term (e.g. atmospheric pressure pushing down on the ocean's surface) by setting:
Surface pressure
The surface pressure equation can be obtained by integrating continuity, (1.3), vertically from \(r=R_{fixed}\) to \(r=R_{moving}\)
Thus:
where \(\eta =R_{moving}-R_{o}\) is the free-surface \(r\)-anomaly in units of \(r\). The above can be rearranged to yield, using Leibniz's theorem:
where we have incorporated a source term.
Whether \(\phi\) is pressure (ocean model, \(p/\rho _{c}\)) or geopotential (atmospheric model), in (1.26), the horizontal gradient term can be written
where \(b_{s}\) is the buoyancy at the surface.
In the hydrostatic limit (\(\epsilon _{nh}=0\)), equations (1.26), (1.35) and (1.36) can be solved by inverting a 2d elliptic equation for \(\phi _{s}\) as described in Chapter 2. Both 'free surface' and 'rigid lid' approaches are available.
Non-hydrostatic pressure
Taking the horizontal divergence of (1.26) and adding \(\frac{\partial }{\partial r}\) of (1.28), invoking the continuity equation (1.3), we deduce that:
For a given rhs this 3d elliptic equation must be inverted for \(\phi _{nh}\) subject to appropriate choice of boundary conditions. This method is usually called The Pressure Method [Harlow and Welch (1965) [HW65]; Williams (1969) [Wil69]; Potter (1973) [Pot73]]. In the hydrostatic primitive equations case (HPE), the 3d problem does not need to be solved.
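The essence of the Pressure Method - inverting a Poisson equation for the pressure correction subject to Neumann conditions - can be sketched in two dimensions. The grid, right-hand side and plain Jacobi iteration below are purely illustrative; production solvers use preconditioned conjugate-gradient methods:

```python
import numpy as np

# Sketch of the Pressure Method inversion (not MITgcm code): solve
# the 2-D Poisson equation  del^2(phi) = rhs  with homogeneous
# Neumann boundary conditions by Jacobi iteration on a cell-centered
# grid.  Grid size, rhs and iteration count are illustrative.

n = 32
h = 1.0 / n
x = (np.arange(n) + 0.5) * h
X, Y = np.meshgrid(x, x, indexing="ij")
rhs = np.cos(np.pi * X) * np.cos(np.pi * Y)   # zero mean: solvable

phi = np.zeros((n, n))
for _ in range(2000):
    # replicate edge cells into ghosts -> d(phi)/dn = 0 on boundary
    p = np.pad(phi, 1, mode="edge")
    phi = (p[:-2, 1:-1] + p[2:, 1:-1] +
           p[1:-1, :-2] + p[1:-1, 2:] - h * h * rhs) / 4.0
    phi -= phi.mean()                          # fix the free constant

# residual of the discrete Laplacian against rhs
p = np.pad(phi, 1, mode="edge")
lap = (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
       - 4.0 * phi) / (h * h)
print(np.abs(lap - rhs).max())
```

Note that with pure Neumann conditions the solution is defined only up to a constant (hence the mean subtraction), and the rhs must integrate to zero over the domain, mirroring the solvability condition of the continuous problem.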
Boundary Conditions
We apply the condition of no normal flow through all solid boundaries - the coasts (in the ocean) and the bottom:
where \(\widehat{n}\) is a vector of unit length normal to the boundary. The kinematic condition (1.38) is also applied to the vertical velocity at \(r=R_{moving}\). No-slip \(\left( v_{T}=0\right)\) or slip \(\left( \partial v_{T}/\partial n=0\right)\) conditions are employed on the tangential component of velocity, \(v_{T}\), at all solid boundaries, depending on the form chosen for the dissipative terms in the momentum equations - see below.
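In a finite-volume code the two tangential conditions translate into simple ghost-cell rules. The following sketch is not MITgcm's actual implementation; it only illustrates the idea on a staggered grid with a wall below the first interior row:

```python
import numpy as np

# Sketch (not MITgcm code): ghost-cell treatment of the tangential
# velocity v_T at a solid wall.  With the wall just below the first
# interior point, the ghost value is chosen so that either v_T = 0
# (no-slip) or d(v_T)/dn = 0 (slip) holds at the boundary.

v = np.array([1.0, 2.0, 3.0])   # interior tangential velocity

ghost_noslip = -v[0]   # wall average (v[0] + ghost)/2 = 0
ghost_slip = v[0]      # normal difference (v[0] - ghost) = 0

print(ghost_noslip, ghost_slip)
```

The choice of ghost value is what couples the boundary condition to the dissipative terms: the Laplacian friction stencil evaluated at the first interior point "sees" the wall only through this ghost cell.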
Eq. (1.38) implies, making use of (1.26), that:
where
presenting inhomogeneous Neumann boundary conditions to the elliptic problem (1.37). As shown, for example, by Williams (1969) [Wil69], one can exploit classical 3D potential theory and, by introducing an appropriately chosen \(\delta\)-function sheet of 'source-charge', replace the inhomogeneous boundary condition on pressure by a homogeneous one. The source term \(rhs\) in (1.37) is the divergence of the vector \(\vec{\mathbf{F}}\). By simultaneously setting \(\widehat{n}.\vec{\mathbf{F}}=0\) and \(\widehat{n}.\nabla \phi _{nh}=0\) on the boundary, the following self-consistent but simpler homogenized elliptic problem is obtained:
where \(\widetilde{\vec{\mathbf{F}}}\) is a modified \(\vec{\mathbf{F}}\) such that \(\widetilde{\vec{\mathbf{F}}}.\widehat{n}=0\). As is implied by (1.39) the modified boundary condition becomes:
If the flow is 'close' to hydrostatic balance then the 3d inversion converges rapidly because \(\phi _{nh}\) is then only a small correction to the hydrostatic pressure field (see the discussion in Marshall et al. (1997a,b) [MHPA97] [MAH+97]).
The solution \(\phi _{nh}\ \)to (1.37) and (1.39) does not vanish at \(r=R_{moving}\), and so refines the pressure there.
Forcing/dissipation
Forcing
The forcing terms \(\mathcal{F}\) on the rhs of the equations are provided by 'physics packages' and forcing packages. These are described later on.
Dissipation
Momentum
Many forms of momentum dissipation are available in the model. Laplacian and biharmonic frictions are commonly used:
where \(A_{h}\) and \(A_{v}\ \)are (constant) horizontal and vertical viscosity coefficients and \(A_{4}\ \)is the horizontal coefficient for biharmonic friction. These coefficients are the same for all velocity components.
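On a uniform grid these operators reduce to familiar stencils. The sketch below is illustrative only (made-up coefficients, periodic boundaries, and none of the model's curvilinear C-grid machinery); it shows the horizontal part \(A_{h}\nabla^{2}u-A_{4}\nabla^{4}u\):

```python
import numpy as np

# Sketch (not MITgcm code): horizontal Laplacian and biharmonic
# friction, D = A_h * del2(u) - A4 * del4(u), on a uniform grid
# with periodic wrap-around.  Coefficients and the test field are
# illustrative.

A_h = 100.0      # horizontal Laplacian viscosity (m^2/s)
A4 = 1.0e8       # horizontal biharmonic viscosity (m^4/s)
dx = 1000.0      # grid spacing (m)

def del2(u):
    # 5-point Laplacian with periodic boundaries
    return (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
            np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2

def dissipation(u):
    # biharmonic operator applied as the Laplacian of the Laplacian
    return A_h * del2(u) - A4 * del2(del2(u))

# both operators annihilate a uniform flow: no spurious forcing
u = np.full((8, 8), 0.3)
print(np.abs(dissipation(u)).max())   # 0.0
```

The sign convention matters: the biharmonic term enters with a minus sign so that, like the Laplacian term, it damps rather than amplifies grid-scale noise.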
Tracers
The mixing terms for the temperature and salinity equations have a similar form to that of momentum, except that the diffusion tensor can be non-diagonal and have varying coefficients.
where \(\underline{\underline{K}}\) is the diffusion tensor and \(K_{4}\) the horizontal coefficient for biharmonic diffusion. In the simplest case, where the subgrid-scale fluxes of heat and salt are parameterized with constant horizontal and vertical diffusion coefficients, \(\underline{\underline{K}}\) reduces to a diagonal matrix with constant coefficients:
where \(K_{h}\) and \(K_{v}\) are the horizontal and vertical diffusion coefficients. These coefficients are the same for all tracers (temperature, salinity, ...).
Vector invariant form
For some purposes it is advantageous to write momentum advection in eq (1.1) and (1.2) in the (so-called) 'vector invariant' form:
This permits alternative numerical treatments of the nonlinear terms based on their representation as a vorticity flux. Because gradients of coordinate vectors no longer appear on the rhs of (1.44), explicit representation of the metric terms in (1.29), (1.30) and (1.31) can be avoided: information about the geometry is contained in the areas and lengths of the volumes used to discretize the model.
Appendix ATMOSPHERE
Hydrostatic Primitive Equations for the Atmosphere in Pressure Coordinates
The hydrostatic primitive equations (HPEs) in \(p\)-coordinates are:
where \(\vec{\mathbf{v}}_{h}=(u,v,0)\) is the 'horizontal' (on pressure surfaces) component of velocity, \(\frac{D}{Dt}=\frac{\partial}{\partial t}+\vec{\mathbf{v}}_{h}\cdot \mathbf{\nabla }_{p}+\omega \frac{\partial }{\partial p}\) is the total derivative, \(f=2\Omega \sin \varphi\) is the Coriolis parameter, \(\phi =gz\) is the geopotential, \(\alpha =1/\rho\) is the specific volume, and \(\omega =\frac{Dp}{Dt}\) is the vertical velocity in the \(p\)-coordinate. Equation (1.49) is the first law of thermodynamics where internal energy \(e=c_{v}T\), \(T\) is temperature, \(Q\) is the rate of heating per unit mass and \(p\frac{D\alpha }{Dt}\) is the work done by the fluid in compressing.
It is convenient to cast the heat equation in terms of potential temperature \(\theta\) so that it looks more like a generic conservation law. Differentiating (1.48) we get:
which, when added to the heat equation (1.49) and using \(c_{p}=c_{v}+R\), gives:
Potential temperature is defined:
where \(p_{c}\) is a reference pressure and \(\kappa =R/c_{p}\). For convenience we will make use of the Exner function \(\Pi (p)\) which is defined by:
The following relations will be useful and are easily expressed in terms of the Exner function:
where \(b=\frac{\partial \ \Pi }{\partial p}\theta\) is the buoyancy.
The heat equation is obtained by noting that
and on substituting into (1.50) gives:
which is in conservative form.
For convenience in the model we prefer to step forward (1.53) rather than (1.49).
Boundary conditions
The upper and lower boundary conditions are:
In \(p\)-coordinates, the upper boundary acts like a solid boundary (\(\omega=0\)); in \(z\)-coordinates, the lower boundary is analogous to a free surface (\(\phi\) is imposed and \(\omega \neq 0\)).
Splitting the geopotential
For the purposes of initialization and reducing round-off errors, the model deals with perturbations from reference (or 'standard') profiles. For example, the hydrostatic geopotential associated with the resting atmosphere is not dynamically relevant and can therefore be subtracted from the equations. The equations written in terms of perturbations are obtained by substituting the following definitions into the previous model equations:
The reference state (indicated by subscript 'o') corresponds to a horizontally homogeneous atmosphere at rest (\(\theta _{o},\alpha _{o},\phi_{o}\)) with surface pressure \(p_{o}(x,y)\) that satisfies \(\phi_{o}(p_{o})=g~Z_{topo}\), defined:
The final form of the HPEs in \(p\)-coordinates is then:
Appendix OCEAN
Equations of Motion for the Ocean
We review here the method by which the standard (Boussinesq, incompressible) HPEs for the ocean written in \(z\)-coordinates are obtained. The non-Boussinesq equations for oceanic motion are:
These equations permit acoustic modes, inertia-gravity waves, non-hydrostatic motions, a geostrophic (Rossby) mode and a thermohaline mode. As written, they cannot be integrated forward consistently - if we step \(\rho\) forward in (1.66), the answer will not be consistent with that obtained by stepping (1.68) and (1.69) and then using (1.67) to yield \(\rho\). It is therefore necessary to manipulate the system as follows. Differentiating the EOS (equation of state) gives:
Note that \(\frac{\partial \rho }{\partial p}=\frac{1}{c_{s}^{2}}\) is the reciprocal of the sound speed (\(c_{s}\)) squared. Substituting into (1.66) gives:
where we have used an approximation sign to indicate that we have assumed adiabatic motion, dropping the \(\frac{D\theta }{Dt}\) and \(\frac{DS}{Dt}\) terms. Replacing (1.66) with (1.71) yields a system that can be explicitly integrated forward:
Compressible zcoordinate equationsĀ¶
Here we linearize the acoustic modes by replacing \(\rho\) with \(\rho _{o}(z)\) wherever it appears in a product (ie. nonlinear term)  this is the āBoussinesq assumptionā. The only term that then retains the full variation in \(\rho\) is the gravitational acceleration:
These equations still retain acoustic modes. But, because the “compressible” terms are linearized, the pressure equation (1.80) can be integrated implicitly with ease (the time-dependent term appears as a Helmholtz term in the non-hydrostatic pressure equation). These are the truly compressible Boussinesq equations. Note that the EOS must have the same pressure dependency as the linearized pressure term, i.e. \(\left. \frac{\partial \rho }{\partial p}\right|_{\theta ,S}=\frac{1}{c_{s}^{2}}\), for consistency.
“Anelastic” \(z\)-coordinate equations
The anelastic approximation filters the acoustic mode by removing the time-dependency in the continuity (now pressure) equation (1.80). This could be done simply by noting that \(\frac{Dp}{Dt}\approx -g\rho _{o} \frac{Dz}{Dt}=-g\rho _{o}w\), but this leads to an inconsistency between continuity and EOS. A better solution is to change the dependency on pressure in the EOS by splitting the pressure into a reference function of height and a perturbation:
Remembering that the term \(\frac{Dp}{Dt}\) in continuity comes from differentiating the EOS, the continuity equation then becomes:
If the time- and space-scales of the motions of interest are longer than those of acoustic modes, then \(\frac{Dp^{\prime }}{Dt}<<(\frac{Dp_{o}}{Dt}, \mathbf{\nabla }\cdot \vec{\mathbf{v}}_{h})\) in the continuity equation and \(\left. \frac{\partial \rho }{\partial p}\right|_{\theta ,S}\frac{ Dp^{\prime }}{Dt}<<\left. \frac{\partial \rho }{\partial p}\right|_{\theta ,S}\frac{Dp_{o}}{Dt}\) in the EOS (1.70). Thus we set \(\epsilon_{s}=0\), removing the dependency on \(p^{\prime }\) in the continuity equation and EOS. Expanding \(\frac{Dp_{o}(z)}{Dt}=-g\rho _{o}w\) then leads to the anelastic continuity equation:
A slightly different route leads to the quasi-Boussinesq continuity equation where we use the scaling \(\frac{\partial \rho ^{\prime }}{\partial t}+ \mathbf{\nabla }_{3}\cdot \rho ^{\prime }\vec{\mathbf{v}}<<\mathbf{\nabla } _{3}\cdot \rho _{o}\vec{\mathbf{v}}\) yielding:
Equations (1.84) and (1.85) are in fact the same equation if:
Again, note that if \(\rho _{o}\) is evaluated from prescribed \(\theta _{o}\) and \(S_{o}\) profiles, then the EOS dependency on \(p_{o}\) and the term \(\frac{g}{c_{s}^{2}}\) in continuity should be referred to those same profiles. The full set of “quasi-Boussinesq” or “anelastic” equations for the ocean are then:
Incompressible \(z\)-coordinate equations
Here, the objective is to drop the depth dependence of \(\rho _{o}\) and so, technically, to also remove the dependence of \(\rho\) on \(p_{o}\). This would yield the “truly” incompressible Boussinesq equations:
where \(\rho _{c}\) is a constant reference density of water.
Compressible non-divergent equations
The above “incompressible” equations are incompressible in both the flow and the density. In many oceanic applications, however, it is important to retain compressibility effects in the density. To do this we must split the density thus:
We then assert that variations with depth of \(\rho _{o}\) are unimportant while the compressible effects in \(\rho ^{\prime }\) are:
This then yields what we can call the semi-compressible Boussinesq equations:
Note that the hydrostatic pressure of the resting fluid, including that associated with \(\rho _{c}\), is subtracted out since it has no effect on the dynamics.
Though necessary, the assumptions that go into these equations are messy since we essentially assume a different EOS for the reference density and the perturbation density. Nevertheless, it is the hydrostatic (\(\epsilon_{nh}=0\)) form of these equations that is used throughout the ocean modeling community and referred to as the primitive equations (HPEs).
Appendix OPERATORS
Coordinate systems
Spherical coordinates
In spherical coordinates, the velocity components in the zonal, meridional and vertical direction respectively, are given by:
(see Figure 1.20). Here \(\varphi\) is the latitude, \(\lambda\) the longitude, \(r\) the radial distance of the particle from the center of the earth, \(\Omega\) is the angular speed of rotation of the Earth, and \(D/Dt\) is the total derivative.
In spherical coordinates, the “grad” (\(\nabla\)) and “div” (\(\nabla\cdot\)) operators are defined by:
Discretization and Algorithm
This chapter lays out the numerical schemes that are employed in the core MITgcm algorithm. Whenever possible, links are made to the actual program code in the MITgcm implementation. The chapter begins with a discussion of the temporal discretization used in MITgcm. This discussion is followed by sections that describe the spatial discretization. The schemes employed for the momentum terms are described first, followed by the schemes that apply to passive and dynamically active tracers.
Notation
Because of the particularity of the vertical direction in the stratified-fluid context, in this chapter vector notation is mostly used for the horizontal components: the horizontal part of a vector is simply written \(\vec{\bf v}\) (instead of \({\bf v_h}\) or \(\vec{\mathbf{v}}_{h}\) in chapter 1) and a 3D vector is simply written \(\vec{v}\) (instead of \(\vec{\mathbf{v}}\) in chapter 1).
The notations we use to describe the discrete formulation of the model are summarized as follows.
General notation:
Basic operators:
Timestepping
The equations of motion integrated by the model involve four prognostic equations for flow, \(u\) and \(v\), temperature, \(\theta\), and salt/moisture, \(S\), and three diagnostic equations for vertical flow, \(w\), density/buoyancy, \(\rho\)/\(b\), and pressure/geopotential, \(\phi_{hyd}\). In addition, the surface pressure or height may be described by either a prognostic or diagnostic equation and, if non-hydrostatic terms are included, a diagnostic equation for non-hydrostatic pressure is also solved. The combination of prognostic and diagnostic equations requires a model algorithm that can march forward prognostic variables while satisfying constraints imposed by diagnostic equations.
Since the model comes in several flavors and formulations, it would be confusing to present the algorithm exactly as written in the code, with all its switches and optional terms. Instead, we present the algorithm for each of the basic formulations, which are:
 the semi-implicit pressure method for hydrostatic equations with a rigid-lid, variables co-located in time and with Adams-Bashforth time-stepping;
 as 1 but with an implicit linear free-surface;
 as 1 or 2 but with variables staggered in time;
 as 1 or 2 but with non-hydrostatic terms included;
 as 2 or 3 but with nonlinear free-surface.
In all the above configurations it is also possible to substitute the Adams-Bashforth with an alternative time-stepping scheme for terms evaluated explicitly in time. Since the overarching algorithm is independent of the particular time-stepping scheme chosen we will describe first the overarching algorithm, known as the pressure method, with a rigid-lid model in Section 2.3. This algorithm is essentially unchanged, apart from some coefficients, when the rigid lid assumption is replaced with a linearized implicit free-surface, described in Section 2.4. These two flavors of the pressure method encompass all formulations of the model as it exists today. The integration of explicit in time terms is outlined in Section 2.5 and put into the context of the overall algorithm in Section 2.7 and Section 2.8. Inclusion of non-hydrostatic terms requires applying the pressure method in three dimensions instead of two and this algorithm modification is described in Section 2.9. Finally, the free-surface equation may be treated more exactly, including nonlinear terms, and this is described in Section 2.10.2.
Pressure method with rigid-lid
The horizontal momentum and continuity equations for the ocean ((1.98) and (1.100)), or for the atmosphere ((1.45) and (1.47)), can be summarized by:
where we are adopting the oceanic notation for brevity. All terms in the momentum equations, except for the surface pressure gradient, are encapsulated in the \(G\) vector. The continuity equation, when integrated over the fluid depth, \(H\), and with the rigid-lid/no normal flow boundary conditions applied, becomes:
Here, \(H\widehat{u} = \int_H u dz\) is the depth integral of \(u\), similarly for \(H\widehat{v}\). The rigid-lid approximation sets \(w=0\) at the lid so that it does not move but allows a pressure to be exerted on the fluid by the lid. The horizontal momentum equations and vertically integrated continuity equation are discretized in time and space as follows:
As written here, terms on the LHS all involve time level \(n+1\) and are referred to as implicit; the implicit backward time-stepping scheme is being used. All other terms on the RHS are explicit in time. The thermodynamic quantities are integrated forward in time in parallel with the flow and will be discussed later. For the purposes of describing the pressure method it suffices to say that the hydrostatic pressure gradient is explicit and so can be included in the vector \(G\).
Substituting the two momentum equations into the depth integrated continuity equation eliminates \(u^{n+1}\) and \(v^{n+1}\) yielding an elliptic equation for \(\eta^{n+1}\). Equations (2.2), (2.3) and (2.4) can then be rearranged as follows:
Equations (2.5) to (2.9), solved sequentially, represent the pressure method algorithm used in the model. The essence of the pressure method lies in the fact that any explicit prediction for the flow would lead to a divergent flow field, so a pressure field must be found that keeps the flow non-divergent over each step of the integration. The particular location in time of the pressure field is somewhat ambiguous; in Figure 2.1 it is depicted as co-located with the future flow field (time level \(n+1\)) but it could equally have been drawn as staggered in time with the flow.
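To make the sequence concrete, here is a minimal one-dimensional, single-layer sketch of the prediction, elliptic solve and correction steps (illustrative NumPy code, not MITgcm's Fortran; the prediction \(u^*\) and all parameter values are made up):

```python
import numpy as np

# Pressure-method sketch: 1D periodic domain, one layer, flat bottom.
# (1) explicit prediction u*, (2) elliptic solve for the surface
# pressure head eta, (3) correction step making the flow non-divergent.
n, L = 64, 1.0e6                         # grid size, domain length [m] (made up)
g, dt = 9.81, 1200.0                     # gravity, time step [s] (made up)
x = np.arange(n) * (L / n)
k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)   # spectral wavenumbers

u_star = 0.1 * np.sin(2.0 * np.pi * x / L) + 0.05   # made-up prediction u*

# Elliptic equation: g*dt * d2(eta)/dx2 = d(u*)/dx  (depth H cancels here)
rhs_hat = (1j * k) * np.fft.fft(u_star) / (g * dt)
eta_hat = np.zeros_like(rhs_hat)
nz = k != 0
eta_hat[nz] = -rhs_hat[nz] / k[nz] ** 2
eta = np.real(np.fft.ifft(eta_hat))

# Correction step: u^{n+1} = u* - g*dt * d(eta)/dx
u_new = u_star - g * dt * np.real(np.fft.ifft(1j * k * np.fft.fft(eta)))
div = np.real(np.fft.ifft(1j * k * np.fft.fft(u_new)))   # ~0 everywhere
```

After the correction the divergence vanishes to round-off while the mean (barotropic) flow is untouched, which is exactly the division of labor the text describes.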
The correspondence to the code is as follows:
 the prognostic phase, equations (2.5) and (2.6), stepping forward \(u^n\) and \(v^n\) to \(u^{*}\) and \(v^{*}\) is coded in timestep.F
 the vertical integration, \(H \widehat{u^*}\) and \(H \widehat{v^*}\), divergence and inversion of the elliptic operator in equation (2.7) is coded in solve_for_pressure.F
 finally, the new flow field at time level \(n+1\) given by equations (2.8) and (2.9) is calculated in correction_step.F
The calling tree for these routines is as follows:
Pressure method calling tree
DYNAMICS
SOLVE_FOR_PRESSURE
  CALC_DIV_GHAT              \(H\widehat{u^*},H\widehat{v^*}\)  (2.7)
MOMENTUM_CORRECTION_STEP
In general, the horizontal momentum time-stepping can contain some terms that are treated implicitly in time, such as the vertical viscosity when using the backward time-stepping scheme (implicitViscosity =.TRUE.). The method used to solve these implicit terms is described in Section 2.6; it modifies equations (2.2) and (2.3) to give:
Pressure method with implicit linear free-surface
The rigid-lid approximation filters out external gravity waves and, in doing so, modifies the dispersion relation of barotropic Rossby waves. The discrete form of the elliptic equation has some zero eigenvalues which makes it a potentially tricky or inefficient problem to solve.
The rigid-lid approximation can be easily replaced by a linearization of the free-surface equation which can be written:
which differs from the depth-integrated continuity equation with rigid-lid ((2.1)) by the time-dependent term and the fresh-water source term.
Equation (2.4) in the rigid-lid pressure method is then replaced by the time discretization of (2.10) which is:
where the use of flow at time level \(n+1\) makes the method implicit and backward in time. This is the preferred scheme since it still filters the fast unresolved wave motions by damping them. A centered scheme, such as Crank-Nicolson (see Section 2.10.1), would alias the energy of the fast modes onto slower modes of motion.
As for the rigid-lid pressure method, equations (2.2), (2.3) and (2.11) can be rearranged as follows:
Equations (2.12) to (2.17), solved sequentially, represent the pressure method algorithm with a backward implicit, linearized free surface. The method is still formally a pressure method because in the limit of large \(\Delta t\) the rigid-lid method is recovered. However, the implicit treatment of the free-surface allows the flow to be divergent and the surface pressure/elevation to respond on a finite timescale (as opposed to instantly). To recover the rigid-lid formulation, we use a switch-like variable, \(\epsilon_{fs}\) (freesurfFac), which selects between the free-surface and rigid-lid; \(\epsilon_{fs}=1\) allows the free-surface to evolve; \(\epsilon_{fs}=0\) imposes the rigid-lid. The evolution in time and location of variables is exactly as it was for the rigid-lid model so that Figure 2.1 is still applicable. Similarly, the calling sequence, given here, is as for the pressure method.
Explicit timestepping: Adams-Bashforth
In describing the pressure method above we deferred describing the time discretization of the explicit terms. We have historically used the quasi-second order Adams-Bashforth method for all explicit terms in both the momentum and tracer equations. This is still the default mode of operation but it is now possible to use alternate schemes for tracers (see Section 2.16). In the previous sections, we summarized an explicit scheme as:
where \(\tau\) could be any prognostic variable (\(u\), \(v\), \(\theta\) or \(S\)) and \(\tau^*\) is an explicit estimate of \(\tau^{n+1}\) that would be exact if not for implicit-in-time terms. The parentheses around \(n+1/2\) indicate that the term is explicit and extrapolated forward in time, for which we use the quasi-second order Adams-Bashforth method:
This is a linear extrapolation, forward in time, to \(t=(n+1/2+{\epsilon_{AB}})\Delta t\). An extrapolation to the midpoint in time, \(t=(n+1/2)\Delta t\), corresponding to \(\epsilon_{AB}=0\), would be second order accurate but is weakly unstable for oscillatory terms. A small but finite value for \(\epsilon_{AB}\) stabilizes the method. Strictly speaking, damping terms such as diffusion and dissipation, and fixed terms (forcing), do not need to be inside the Adams-Bashforth extrapolation. However, in the current code, it is simpler to include these terms and this can be justified if the flow and forcing evolve smoothly. Problems can, and do, arise when forcing or motions are high frequency and this corresponds to a reduced stability compared to a simple forward time-stepping of such terms. The model offers the possibility to leave terms outside the Adams-Bashforth extrapolation, by turning off the logical flag forcing_In_AB (parameter file data, namelist PARM01, default value = TRUE) and then setting tracForcingOutAB (default=0), momForcingOutAB (default=0), and momDissip_In_AB (parameter file data, namelist PARM01, default value = TRUE), respectively for the tracer forcing terms, the momentum forcing terms, and the dissipation terms.
A stability analysis for an oscillation equation should be given at this point.
A stability analysis for a relaxation equation should be given at this point.
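Pending those formal analyses, the two cases can at least be checked numerically: substituting \(\tau^n \propto \mu^n\) into the Adams-Bashforth update for the oscillation equation \(d\tau/dt = i\omega\tau\) or the relaxation equation \(d\tau/dt = -\lambda\tau\) gives a quadratic amplification polynomial whose largest root magnitude decides stability. A small NumPy sketch (function names are ours, not MITgcm's):

```python
import numpy as np

def ab2_roots(coeff_n, coeff_nm1):
    """Roots of the amplification polynomial  mu^2 - coeff_n*mu - coeff_nm1 = 0."""
    return np.roots([1.0, -coeff_n, -coeff_nm1])

def osc_growth(alpha, eps):
    """Max |mu| for d(tau)/dt = i*omega*tau with alpha = omega*dt,
    stepped with quasi-second order Adams-Bashforth (eps = epsilon_AB)."""
    b, c = 1.5 + eps, 0.5 + eps
    return np.max(np.abs(ab2_roots(1.0 + 1j * alpha * b, -1j * alpha * c)))

def relax_growth(r, eps):
    """Max |mu| for d(tau)/dt = -lambda*tau with r = lambda*dt."""
    b, c = 1.5 + eps, 0.5 + eps
    return np.max(np.abs(ab2_roots(1.0 - r * b, r * c)))

assert osc_growth(0.2, 0.0) > 1.0    # eps_AB = 0: weakly unstable oscillations
assert osc_growth(0.2, 0.1) < 1.0    # small finite eps_AB stabilizes the mode
assert relax_growth(0.5, 0.0) < 1.0  # relaxation: stable for small lambda*dt
assert relax_growth(1.2, 0.0) > 1.0  # ... unstable for larger lambda*dt
```

The checks reproduce the statements in the text: the midpoint extrapolation (\(\epsilon_{AB}=0\)) is weakly unstable for oscillatory terms and a small finite \(\epsilon_{AB}\) stabilizes it, while damping terms have their own finite stability limit.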
Implicit timestepping: backward method
Vertical diffusion and viscosity can be treated implicitly in time using the backward method, which is an intrinsic scheme. Recently, the option to treat vertical advection implicitly has been added, but not yet tested; therefore, the description hereafter is limited to diffusion and viscosity. For tracers, the time-discretized equation is:
where \(G_\tau^{(n+1/2)}\) represents the remaining explicit terms, extrapolated using the Adams-Bashforth method as described above. Equation (2.20) can be split into:
where \({\cal L}_\tau^{-1}\) is the inverse of the operator
Equation (2.21) looks exactly like (2.18) while (2.22) involves an operator or matrix inversion. By rearranging (2.20) in this way we have cast the method as an explicit prediction step and an implicit step, allowing the latter to be inserted into the overall algorithm with minimal interference.
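As an illustration of the implicit step (2.22), the following sketch inverts \((1 - \Delta t\,\kappa_v\,\partial_{zz})\) for a single tracer column on a uniform grid with no-flux boundaries (hypothetical NumPy code, not the MITgcm solver, which works on the model's finite-volume grid; all values are made up):

```python
import numpy as np

def implicit_vertical_diffusion(tau, kappa, dz, dt):
    """Backward (implicit) solve of (1 - dt*kappa*d2/dz2) tau_new = tau
    on a uniform column with no-flux top and bottom boundaries."""
    n = tau.size
    r = dt * kappa / dz**2
    A = np.zeros((n, n))
    for i in range(n):
        lo = r if i > 0 else 0.0          # no flux through the boundaries
        hi = r if i < n - 1 else 0.0
        A[i, i] = 1.0 + lo + hi
        if i > 0:
            A[i, i - 1] = -r
        if i < n - 1:
            A[i, i + 1] = -r
    return np.linalg.solve(A, tau)

tau = np.zeros(20)
tau[10] = 1.0                             # tracer spike mid-column
out = implicit_vertical_diffusion(tau, kappa=1e-2, dz=10.0, dt=1e6)  # huge dt
```

Even with an absurdly large time step the backward step stays bounded (no new extrema) and, with no-flux boundaries, conserves the column total, which is why the scheme is called unconditionally stable.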
The calling sequence for stepping forward a tracer variable such as temperature with implicit diffusion is as follows:
Adams-Bashforth calling tree

THERMODYNAMICS
  TEMP_INTEGRATE
    either
    or
    EXTERNAL_FORCING         \(G_\theta^{(n+1/2)} = G_\theta^{(n+1/2)} + {\cal Q}\)
In order to fit within the pressure method, the implicit viscosity must not alter the barotropic flow. In other words, it can only redistribute momentum in the vertical. The upshot of this is that although vertical viscosity may be backward implicit and unconditionally stable, no-slip boundary conditions may not be made implicit and are thus cast as an explicit drag term.
Synchronous timestepping: variables co-located in time
The Adams-Bashforth extrapolation of explicit tendencies fits neatly into the pressure method algorithm when all state variables are co-located in time. The algorithm can be represented by the sequential solution of the following equations:
Figure 2.3 illustrates the location of variables in time and the evolution of the algorithm with time. The Adams-Bashforth extrapolation of the tracer tendencies is illustrated by the dashed arrow; the prediction at \(n+1\) is indicated by the solid arc. Inversion of the implicit terms, \({\cal L}^{-1}_{\theta,S}\), then yields the new tracer fields at \(n+1\). All these operations are carried out in subroutine THERMODYNAMICS and subsidiaries, which correspond to equations (2.23) to (2.26). Similarly illustrated is the Adams-Bashforth extrapolation of accelerations, stepping forward and solving of implicit viscosity and surface pressure gradient terms, corresponding to equations (2.28) to (2.34). These operations are carried out in subroutines DYNAMICS, SOLVE_FOR_PRESSURE and MOMENTUM_CORRECTION_STEP. This, then, represents an entire algorithm for stepping forward the model one time step. The corresponding calling tree for the overall synchronous algorithm using Adams-Bashforth timestepping is given below. The place where the model geometry (hFac factors) is updated is added here but is only relevant for the nonlinear free-surface algorithm. For completeness, the external forcing, ocean and atmospheric physics have been added, although they are mainly optional.
Synchronous Adams-Bashforth calling tree

EXTERNAL_FIELDS_LOAD
DO_ATMOSPHERIC_PHYS
DO_OCEANIC_PHYS
THERMODYNAMICS
  CALC_GT
    GAD_CALC_RHS             \(G_\theta^n = G_\theta( u, \theta^n )\)  (2.23)
    EXTERNAL_FORCING         \(G_\theta^n = G_\theta^n + {\cal Q}\)
DYNAMICS
  TIMESTEP                   \(\vec{\bf v}^*\)  (2.29), (2.30)
SOLVE_FOR_PRESSURE
MOMENTUM_CORRECTION_STEP
TRACERS_CORRECTION_STEP
  CONVECTIVE_ADJUSTMENT
Staggered baroclinic timestepping
For well-stratified problems, internal gravity waves may be the limiting process for determining a stable time step. In this circumstance, it is more efficient to stagger in time the thermodynamic variables with the flow variables. Figure 2.4 illustrates the staggering and algorithm. The key difference between this and Figure 2.3 is that the thermodynamic variables are solved after the dynamics, using the recently updated flow field. This essentially allows the gravity wave terms to leapfrog in time, giving second order accuracy and more stability.
The essential change in the staggered algorithm is that the thermodynamics solver is delayed by half a time step, allowing the use of the most recent velocities to compute the advection terms. Once the thermodynamic fields are updated, the hydrostatic pressure is computed to step forward the dynamics. Note that the pressure gradient must also be taken out of the Adams-Bashforth extrapolation. Also, retaining the integer time-levels, \(n\) and \(n+1\), does not give a user the sense of where variables are located in time. Instead, we rewrite the entire algorithm, (2.23) to (2.34), annotating the position in time of variables appropriately:
The corresponding calling tree is given below. The staggered algorithm is activated with the runtime flag staggerTimeStep =.TRUE. in parameter file data, namelist PARM01.
Staggered Adams-Bashforth calling tree

EXTERNAL_FIELDS_LOAD
DO_ATMOSPHERIC_PHYS
DO_OCEANIC_PHYS
DYNAMICS
  TIMESTEP                   \(\vec{\bf v}^*\)  (2.37), (2.38)
SOLVE_FOR_PRESSURE
MOMENTUM_CORRECTION_STEP
THERMODYNAMICS
  CALC_GT
    GAD_CALC_RHS             \(G_\theta^n = G_\theta( u, \theta^n )\)  (2.43)
    EXTERNAL_FORCING         \(G_\theta^n = G_\theta^n + {\cal Q}\)
TRACERS_CORRECTION_STEP
  CONVECTIVE_ADJUSTMENT
The only difficulty with this approach is apparent in equation (2.43) and illustrated by the dotted arrow connecting \(u,v^{n+1/2}\) with \(G_\theta^{n}\). The flow used to advect tracers around is not naturally located in time. This could be avoided by applying the Adams-Bashforth extrapolation to the tracer field itself and advecting that around, but this approach is not yet available. We’re not aware of any detrimental effect of this feature. The difficulty lies mainly in interpretation of what time-level variables and terms correspond to.
Non-hydrostatic formulation
The non-hydrostatic formulation re-introduces the full vertical momentum equation and requires the solution of a 3D elliptic equation for the non-hydrostatic pressure perturbation. We still integrate vertically for the hydrostatic pressure and solve a 2D elliptic equation for the surface pressure/elevation, since this reduces the amount of work needed to solve for the non-hydrostatic pressure.
The momentum equations are discretized in time as follows:
which must satisfy the discrete-in-time depth-integrated continuity equation (2.11) and the local continuity equation
As before, the explicit predictions for momentum are consolidated as:
but this time we introduce an intermediate step by splitting the tendency of the flow as follows:
Substituting into the depth integrated continuity (equation (2.11)) gives
which is approximated by equation (2.15) on the basis that i) \(\phi_{nh}^{n+1}\) is not yet known and ii) \(\nabla \widehat{\phi}_{nh} << g \nabla \eta\). If (2.15) is solved accurately then the implication is that \(\widehat{\phi}_{nh} \approx 0\) so that the nonhydrostatic pressure field does not drive barotropic motion.
The flow must satisfy non-divergence (equation (2.50)) locally, as well as depth-integrated, and this constraint is used to form a 3D elliptic equation for \(\phi_{nh}^{n+1}\):
The entire algorithm can be summarized as the sequential solution of the following equations:
where the last equation is solved by vertically integrating for \(w^{n+1}\).
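This last step, integrating the continuity equation vertically for \(w^{n+1}\), can be sketched as follows (illustrative NumPy code with a made-up, vertically uniform horizontal flow; MITgcm performs the analogous finite-volume sum over model cells):

```python
import numpy as np

nx, nz = 16, 10
dx, dz = 1.0e3, 50.0                      # made-up grid spacings [m]
x = np.arange(nx) * dx
# made-up horizontally divergent flow, uniform in the vertical (nz x nx)
u = np.tile(0.1 * np.sin(2.0 * np.pi * x / (nx * dx)), (nz, 1))

dudx = (np.roll(u, -1, axis=1) - u) / dx  # horizontal divergence (periodic)
w = np.zeros((nz + 1, nx))                # w on interfaces; w = 0 at the bottom
for kk in range(nz - 1, -1, -1):          # integrate continuity upward
    w[kk] = w[kk + 1] - dz * dudx[kk]

# by construction each cell now satisfies du/dx + dw/dz = 0
```

Starting from \(w=0\) at the solid bottom, each cell's \(w\) at the upper interface is whatever is needed to cancel the horizontal divergence below it, so local non-divergence holds exactly.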
Variants on the Free SurfaceĀ¶
We now describe the various formulations of the free-surface that include nonlinear forms, implicit in time using Crank-Nicolson, explicit and [one day] split-explicit. First, we’ll reiterate the underlying algorithm but this time using the notation consistent with the more general vertical coordinate \(r\). The elliptic equation for the free-surface coordinate (units of \(r\)), corresponding to (2.11), and assuming no non-hydrostatic effects (\(\epsilon_{nh} = 0\)), is:
where
Once \({\eta}^{n+1}\) has been found, substituting into (2.2), (2.3) yields \(\vec{\bf v}^{n+1}\) if the model is hydrostatic (\(\epsilon_{nh}=0\)):
This is known as the correction step. However, when the model is nonhydrostatic (\(\epsilon_{nh}=1\)) we need an additional step and an additional equation for \(\phi'_{nh}\). This is obtained by substituting (2.47), (2.48) and (2.49) into continuity:
where
Note that \(\eta^{n+1}\) is also used to update the second RHS term \(\partial_r \dot{r}^*\) since the vertical velocity at the surface (\(\dot{r}_{surf}\)) is evaluated as \((\eta^{n+1} - \eta^n) / \Delta t\).
Finally, the horizontal velocities at the new time level are found by:
and the vertical velocity is found by integrating the continuity equation vertically. Note that, for the convenience of the restart procedure, the vertical integration of the continuity equation has been moved to the beginning of the time step (instead of at the end), without any consequence on the solution.
S/R CORRECTION_STEP
Regarding the implementation of the surface pressure solver, all computations are done within the routine SOLVE_FOR_PRESSURE and its dependent calls. The standard method to solve the 2D elliptic problem (2.64) uses the conjugate gradient method (routine CG2D); the solver matrix and conjugate gradient operator are functions only of the discretized domain and are therefore evaluated separately, before the time iteration loop, within INI_CG2D. The computation of the RHS \(\eta^*\) is partly done in CALC_DIV_GHAT and in SOLVE_FOR_PRESSURE.
The same method is applied for the non-hydrostatic part, using a conjugate gradient 3D solver (CG3D) that is initialized in INI_CG3D. The RHS terms of the 2D and 3D problems are computed together at the same point in the code.
Crank-Nicolson barotropic time stepping
The fully implicit time stepping described previously is unconditionally stable but damps the fast gravity waves, resulting in a loss of potential energy. The modification presented now allows one to combine an implicit part (\(\gamma,\beta\)) and an explicit part (\(1-\gamma,1-\beta\)) for the surface pressure gradient (\(\gamma\)) and for the barotropic flow divergence (\(\beta\)). For instance, \(\gamma=\beta=1\) is the previous fully implicit scheme; \(\gamma=\beta=1/2\) is the non-damping (energy conserving), unconditionally stable, Crank-Nicolson scheme; \((\gamma,\beta)=(1,0)\) or \(=(0,1)\) corresponds to the forward-backward scheme that conserves energy but is only stable for small time steps. In the code, \(\gamma,\beta\) are defined as the parameters implicSurfPress and implicDiv2DFlow respectively. They are read from the main parameter file data (namelist PARM01) and are set by default to 1,1.
Equations (2.12)–(2.17) are modified as follows:
We set
In the hydrostatic case (\(\epsilon_{nh}=0\)), this allows us to find \({\eta}^{n+1}\), thus:
and then to compute (CORRECTION_STEP):
Notes:
 The RHS term of equation (2.68) corresponds to the contribution of the fresh water flux (P−E) to the free-surface variations (\(\epsilon_{fw}=1\), useRealFreshWaterFlux =.TRUE. in parameter file data). In order to remain consistent with the tracer equation, especially in the nonlinear free-surface formulation, this term is also affected by the Crank-Nicolson time stepping. The RHS reads: \(\epsilon_{fw} ( \beta (P-E)^{n+1/2} + (1-\beta) (P-E)^{n-1/2} )\)
 The stability criteria with Crank-Nicolson time stepping for the pure linear gravity wave problem in cartesian coordinates are:
 \(\gamma + \beta < 1\) : unstable
 \(\gamma \geq 1/2\) and \(\beta \geq 1/2\) : stable
 \(\gamma + \beta \geq 1\) : stable if \(c_{max}^2 (\gamma - 1/2)(\beta - 1/2) + 1 \geq 0\) with \(c_{max} = 2 \Delta t \sqrt{gH} \sqrt{ \frac{1}{\Delta x^2} + \frac{1}{\Delta y^2} }\)
 A similar mixed forward/backward time-stepping is also available for the non-hydrostatic algorithm, with a fraction \(\gamma_{nh}\) (\(0 < \gamma_{nh} \leq 1\)) of the non-hydrostatic pressure gradient being evaluated at time step \(n+1\) (backward in time) and the remaining part (\(1 - \gamma_{nh}\)) being evaluated at time step \(n\) (forward in time). The run-time parameter implicitNHPress corresponding to the implicit fraction \(\gamma_{nh}\) of the non-hydrostatic pressure is set by default to the implicit fraction \(\gamma\) of surface pressure (implicSurfPress), but can also be specified independently (in main parameter file data, namelist PARM01).
Nonlinear free-surface
Several options that concern the free surface formulation have been added to the model.
Pressure/geopotential and free surface
For the atmosphere, since \(\phi = \phi_{topo} - \int^p_{p_s} \alpha dp\), subtracting the reference state defined in Section 1.4.1.2:
we get:
For the ocean, the reference state is simpler, since \(\rho_c\) does not depend on \(z\) (\(b_o=g\)) and the surface reference position is uniformly \(z=0\) (\(R_o=0\)); the same subtraction leads to a similar relation. For both fluids, using the isomorphic notations, we can write:
and rewrite as:
or:
In Section 1.3.6, following eq. (2.69), the pressure/geopotential \(\phi'\) has been separated into surface (\(\phi_s\)) and hydrostatic anomaly (\(\phi'_{hyd}\)) parts. In this section, the split between \(\phi_s\) and \(\phi'_{hyd}\) is made according to equation (2.70). This slightly different definition reflects the actual implementation in the code and is valid for both linear and nonlinear free-surface formulations, in both \(r\)-coordinates and \(r^*\)-coordinates.
Because the linear free-surface approximation ignores the tracer content of the fluid parcel between \(R_o\) and \(r_{surf}=R_o+\eta\), for consistency reasons this part is also neglected in \(\phi'_{hyd}\):
Note that in this case, the two definitions of \(\phi_s\) and \(\phi'_{hyd}\) from equations (2.69) and (2.70) converge toward the same (approximated) expressions: \(\phi_s = \int^{r_{surf}}_{R_o} b_o dr\) and \(\phi'_{hyd}=\int^{R_o}_r b' dr\). On the contrary, the unapproximated formulation (see Section 2.10.2.2) retains the full expression: \(\phi'_{hyd} = \int^{r_{surf}}_r (b - b_o) dr\). This is obtained by selecting nonlinFreeSurf =4 in parameter file data.
Regarding the surface potential:

\(b_s \simeq b_o(R_o)\) is an excellent approximation (better than the usual numerical truncation, since generally \(\eta\) is smaller than the vertical grid increment).

For the ocean, \(\phi_s = g \eta\) and \(b_s = g\) is uniform.

For the atmosphere, however, because of topographic effects, the reference surface pressure \(R_o=p_o\) has large spatial variations that are responsible for significant \(b_s\) variations (from 0.8 to 1.2 \([m^3/kg]\)). For this reason, when uniformLin_PhiSurf =.FALSE. (parameter file data, namelist PARM01) a non-uniform linear coefficient \(b_s\) is used and computed (INI_LINEAR_PHISURF) according to the reference surface pressure \(p_o\): \(b_s = b_o(R_o) = c_p \kappa (p_o / P^o_{SL})^{(\kappa - 1)} \theta_{ref}(p_o)\), with \(P^o_{SL}\) the mean sea-level pressure.
Free surface effect on column total thickness (Nonlinear free-surface)
The total thickness of the fluid column is \(r_{surf} - R_{fixed} = \eta + R_o - R_{fixed}\). In most applications, the free surface displacements are small compared to the total thickness, \(\eta \ll H_o = R_o - R_{fixed}\). In the previous sections and in older versions of the model, the linearized free-surface approximation was made, assuming \(r_{surf} - R_{fixed} \simeq H_o\) when computing horizontal transports, either in the continuity equation or in tracer and momentum advection terms. This approximation is dropped when using the nonlinear free-surface formulation, and the total thickness, including the time-varying part \(\eta\), is considered when computing horizontal transports. Implications for the barotropic part are presented hereafter. In Section 2.10.2.3 consequences for tracer conservation are briefly discussed (more details can be found in Campin et al. (2004) [CAHM04]); the general time-stepping is presented in Section 2.10.2.4, with some limitations regarding the vertical resolution in Section 2.10.2.5.
In the nonlinear formulation, the continuous form of the model equations remains unchanged, except for the 2D continuity equation (2.11) which is now integrated from \(R_{fixed}(x,y)\) up to \(r_{surf}=R_o+\eta\) :
Since \(\eta\) has a direct effect on the horizontal velocity (through \(\nabla_h \Phi_{surf}\)), this adds a nonlinear term to the free surface equation. Several options for the time discretization of this nonlinear part can be considered, as detailed below.
If the column thickness is evaluated at time step \(n\), and with implicit treatment of the surface potential gradient, equations (2.64) and (2.65) become:
where
This method requires us to update the solver matrix at each time step.
Alternatively, the nonlinear contribution can be evaluated fully explicitly:
This formulation allows one to keep the initial solver matrix unchanged throughout the integration, since the non-linear free surface only affects the RHS.
Finally, another option is a "linearized" formulation where the total column thickness appears only in the integral term of the RHS (2.65) but not directly in equation (2.64).
These different options (see Table 2.1) have been tested and show little difference. However, we recommend using the most precise method (nonlinFreeSurf =4) since the computational cost involved in the solver matrix update is negligible.
Parameter      | Value | Description
---------------|-------|------------------------------------------------------------
nonlinFreeSurf |  -1   | linear free-surface, restart from a pickup file produced with #undef EXACT_CONSERV code
               |   0   | linear free-surface (= default)
               |   4   | full non-linear free-surface
               |   3   | same as 4 but neglecting \(\int_{R_o}^{R_o+\eta} b' dr\) in \(\Phi'_{hyd}\)
               |   2   | same as 3 but do not update cg2d solver matrix
               |   1   | same as 2 but treat momentum as in linear free-surface
select_rStar   |   0   | do not use \(r^*\) vertical coordinate (= default)
               |   2   | use \(r^*\) vertical coordinate
               |   1   | same as 2 but without the contribution of the slope of the coordinate in \(\nabla \Phi\)
Tracer conservation with non-linear free-surface
To ensure global tracer conservation (i.e., the total amount) as well as local conservation, the change in the surface level thickness must be consistent with the way the continuity equation is integrated, both in the barotropic part (to find \(\eta\)) and baroclinic part (to find \(w = \dot{r}\)).
To illustrate this, consider the shallow water model, with a source of fresh water (P):
where \(h\) is the total thickness of the water column. To conserve the tracer \(\theta\) we have to discretize:
Using the implicit (nonlinear) free surface described above (Section 2.4) we have:
The discretized form of the tracer equation must adopt the same "form" in the computation of tracer fluxes, that is, the same value of \(h\), as used in the continuity equation:
The use of a three-time-level time-stepping scheme such as Adams-Bashforth makes the conservation slightly tricky. The current implementation with the Adams-Bashforth time-stepping provides exact local conservation and prevents any drift in the global tracer content (Campin et al. (2004) [CAHM04]). Compared to the linear free-surface method, an additional step is required: the variation of the water column thickness (from \(h^n\) to \(h^{n+1}\)) is not incorporated directly into the tracer equation. Instead, the model uses the \(G_\theta\) terms (first step) as in the linear free-surface formulation (with the "surface correction" turned "on", see tracer section):
Then, in a second step, the thickness variation (expansion/reduction) is taken into account:
Note that with a simple forward time step (no Adams-Bashforth), these two formulations are equivalent, since \((h^{n+1} - h^{n})/ \Delta t = P - \nabla \cdot (h^n \, \vec{\bf v}^{n+1} ) = P + \dot{r}_{surf}^{n+1}\).
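As a minimal sketch of this two-step update (illustrative Python, not the model's Fortran; the 1-D periodic grid, forward step, and all variable names are assumptions made for the demonstration), the rescaling by \(h^n/h^{n+1}\) conserves the total tracer content exactly:

```python
import numpy as np

# 1-D periodic shallow-water column with tracer theta; forward step, so
# this is equivalent to the direct flux form, as noted above.
n, dx, dt = 16, 1.0, 0.01
rng = np.random.default_rng(0)
h = 1.0 + 0.1 * rng.random(n)        # column thickness h^n
theta = rng.random(n)                # tracer theta^n
u = rng.standard_normal(n)           # u^{n+1/2} at faces (face i is left of cell i)
P = 0.0                              # no fresh-water source in this sketch

# face thickness (centered average) and volume flux h*u
h_face = 0.5 * (h + np.roll(h, 1))
F = h_face * u

# continuity: h^{n+1} = h^n + dt*(P - div(h u))
div = (np.roll(F, -1) - F) / dx
h_new = h + dt * (P - div)

# step 1: tendency G_theta with the "surface correction": advective flux
# divergence of h*u*theta minus theta times div(h u), divided by h^n
Ft = F * 0.5 * (theta + np.roll(theta, 1))      # centered tracer flux
G = -((np.roll(Ft, -1) - Ft) / dx - theta * div) / h

# step 2: rescale the tendency by the thickness change h^n/h^{n+1}
theta_new = theta + dt * (h / h_new) * G
```

Summing \(h^{n+1}\theta^{n+1}\) over the periodic domain recovers \(\sum h^n\theta^n\) to machine precision, which is the local-plus-global conservation property described above.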
Time-stepping implementation of the non-linear free-surface
The grid cell thickness was held constant with the linear free-surface; with the non-linear free-surface, it now varies in time, at least at the surface level. This implies some modifications of the general algorithm described earlier in Section 2.7 and Section 2.8.
A simplified version of the staggered-in-time, non-linear free-surface algorithm is detailed hereafter; it can be compared to the equivalent linear free-surface case (eq. (2.36) to (2.46)) and can easily be transposed to the synchronous time-stepping case. Among the simplifications, the salinity equation, the implicit operator and the detailed elliptic equation are omitted. Surface forcing is explicitly written as fluxes of temperature, fresh water and momentum, \(Q^{n+1/2}, P^{n+1/2}, F_{\bf v}^n\) respectively. \(h^n\) and \(dh^n\) are the column and grid box thickness in r-coordinate.
(2.71)
\[\phi^{n}_{hyd} = \int b(\theta^{n},S^{n},r) dr\]

(2.72)
\[\vec{\bf G}_{\vec{\bf v}}^{n-1/2} = \vec{\bf G}_{\vec{\bf v}} (dh^{n-1},\vec{\bf v}^{n-1/2}) \hspace{2mm};\hspace{2mm} \vec{\bf G}_{\vec{\bf v}}^{(n)} = \frac{3}{2} \vec{\bf G}_{\vec{\bf v}}^{n-1/2} - \frac{1}{2} \vec{\bf G}_{\vec{\bf v}}^{n-3/2}\]

(2.73)
\[\vec{\bf v}^{*} = \vec{\bf v}^{n-1/2} + \Delta t \frac{dh^{n-1}}{dh^{n}} \left( \vec{\bf G}_{\vec{\bf v}}^{(n)} + F_{\vec{\bf v}}^{n}/dh^{n-1} \right) - \Delta t \nabla \phi_{hyd}^{n}\]

\[\longrightarrow \mathrm{update\ model\ geometry:}\ {\bf hFac}(dh^n)\]

(2.74)
\[\begin{split}\begin{aligned} \eta^{n+1/2} & = \eta^{n-1/2} + \Delta t P^{n+1/2} - \Delta t \nabla \cdot \int \vec{\bf v}^{n+1/2} dh^{n} \\ & = \eta^{n-1/2} + \Delta t P^{n+1/2} - \Delta t \nabla \cdot \int \!\!\! \left( \vec{\bf v}^* - g \Delta t \nabla \eta^{n+1/2} \right) dh^{n}\end{aligned}\end{split}\]

(2.75)
\[\vec{\bf v}^{n+1/2} = \vec{\bf v}^{*} - g \Delta t \nabla \eta^{n+1/2}\]

(2.76)
\[h^{n+1} = h^{n} + \Delta t P^{n+1/2} - \Delta t \nabla \cdot \int \vec{\bf v}^{n+1/2} dh^{n}\]

(2.77)
\[G_{\theta}^{n} = G_{\theta} ( dh^{n}, u^{n+1/2}, \theta^{n} ) \hspace{2mm};\hspace{2mm} G_{\theta}^{(n+1/2)} = \frac{3}{2} G_{\theta}^{n} - \frac{1}{2} G_{\theta}^{n-1}\]

(2.78)
\[\theta^{n+1} = \theta^{n} + \Delta t \frac{dh^n}{dh^{n+1}} \left( G_{\theta}^{(n+1/2)} + ( P^{n+1/2} (\theta_{\mathrm{rain}}-\theta^n) + Q^{n+1/2})/dh^n \right)\]
Two steps have been added to the linear free-surface algorithm (eq. (2.36) to (2.46)). Firstly, the model "geometry" (here the hFacC,W,S) is updated just before entering SOLVE_FOR_PRESSURE, using the current \(dh^{n}\) field. Secondly, the vertically integrated continuity equation (2.76) has been added (exactConserv =.TRUE., in parameter file data, namelist PARM01) just before computing the vertical velocity, in subroutine INTEGR_CONTINUITY. Although this equation might appear redundant with (2.74), the integrated column thickness \(h^{n+1}\) will be different from \(\eta^{n+1/2} + H\) in the following cases:
- when Crank-Nicolson time-stepping is used (see Section 2.10.1).
- when filters are applied to the flow field, after (2.75), and alter the divergence of the flow.
- when the solver does not iterate until convergence; for example, because too large a residual target was set (cg2dTargetResidual, parameter file data, namelist PARM02).
In this staggered time-stepping algorithm, the momentum tendencies are computed using \(dh^{n-1}\) geometry factors (2.72) and then rescaled in subroutine TIMESTEP, (2.73), similarly to tracer tendencies (see Section 2.10.2.3). The tracers are stepped forward later, using the recently updated flow field \({\bf v}^{n+1/2}\) and the corresponding model geometry \(dh^{n}\) to compute the tendencies (2.77); then the tendencies are rescaled by \(dh^n/dh^{n+1}\) to derive the new tracer values \((\theta,S)^{n+1}\) ((2.78), in subroutines CALC_GT, CALC_GS).
Note that the freshwater input is added in a consistent way in the continuity equation and in the tracer equation, taking into account the freshwater temperature \(\theta_{\mathrm{rain}}\).
Regarding the restart procedure, two 2-D fields, \(h^{n-1}\) and \((h^n-h^{n-1})/\Delta t\), are stored in a "pickup" file in addition to the standard state variables and tendencies (\(\eta^{n-1/2}\), \({\bf v}^{n-1/2}\), \(\theta^n\), \(S^n\), \({\bf G}_{\bf v}^{n-3/2}\), \(G_{\theta,S}^{n-1}\)). The model restarts by reading this pickup file, then updates the model geometry according to \(h^{n-1}\), and computes \(h^n\) and the vertical velocity before starting the main calling sequence (eq. (2.71) to (2.78), FORWARD_STEP).
Non-linear free-surface and vertical resolution
When the amplitude of the free-surface variations becomes as large as the vertical resolution near the surface, the surface layer thickness can decrease to nearly zero or can even vanish completely. This latter possibility has not been implemented, and a minimum relative thickness is imposed (hFacInf, parameter file data, namelist PARM01) to prevent numerical instabilities caused by a very thin surface level.
A better alternative to the vanishing level problem relies on a different vertical coordinate, \(r^*\): the time variation of the total column thickness becomes part of the \(r^*\) coordinate motion, as in a \(\sigma_{z},\sigma_{p}\) model, but the fixed part related to topography is treated as in a height or pressure coordinate model. A complete description is given in Adcroft and Campin (2004) [AC04].
The time-stepping implementation of the \(r^*\) coordinate is identical to the non-linear free-surface in the \(r\) coordinate, and differences appear only in the spatial discretization.
Spatial discretization of the dynamical equations
Spatial discretization is carried out using the finite volume method. This amounts to a grid-point method (namely second-order centered finite difference) in the fluid interior, but allows boundaries to intersect a regular grid, allowing a more accurate representation of the position of the boundary. We treat the horizontal and vertical directions as separable and discretize them differently.
The finite volume method: finite volumes versus finite difference
The finite volume method is used to discretize the equations in space. The expression "finite volume" actually has two meanings; one is the method of embedded or intersecting boundaries (shaved or lopped cells in our terminology) and the other is non-linear interpolation methods that can deal with non-smooth solutions such as shocks (i.e. flux limiters for advection). Both make use of the integral form of the conservation laws, to which the weak solution is a solution on each finite volume (sub-domain). The weak solution can be constructed out of piecewise constant elements or be differentiable. The differentiable equations can not be satisfied by piecewise constant functions.
As an example, the 1-D constant-coefficient advection-diffusion equation:
can be discretized by integrating over finite subdomains, i.e. the lengths \(\Delta x_i\):
is exact if \(\theta(x)\) is piecewise constant over the interval \(\Delta x_i\) or more generally if \(\theta_i\) is defined as the average over the interval \(\Delta x_i\).
The flux, \(F_{i1/2}\), must be approximated:
and this is where truncation errors can enter the solution. The method for obtaining \(\overline{\theta}\) is unspecified and a wide range of possibilities exist, including centered and upwind interpolation, polynomial fits based on the volume average definitions of quantities, and non-linear interpolation such as flux limiters.
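The finite-volume flux computation above can be sketched as follows (illustrative Python on a uniform periodic grid; the names and grid setup are assumptions, not model code). Cell averages \(\theta_i\) are updated from face fluxes built with centered interpolation for \(u\overline{\theta}\) and a two-point gradient for the diffusive part:

```python
import numpy as np

# 1-D constant-coefficient advection-diffusion, finite-volume form:
# d(theta_i)/dt = -(F_{i+1/2} - F_{i-1/2})/dx, with F = u*theta_bar - kappa*grad
n, dx, dt = 64, 1.0, 0.1
u, kappa = 0.5, 0.1                        # constant coefficients
x = dx * (np.arange(n) + 0.5)
theta = np.sin(2 * np.pi * x / (n * dx))   # smooth initial cell averages

def step(theta):
    # face i sits between cells i-1 and i (periodic wrap via roll)
    theta_bar = 0.5 * (theta + np.roll(theta, 1))   # centered interpolation
    grad = (theta - np.roll(theta, 1)) / dx         # d(theta)/dx at faces
    F = u * theta_bar - kappa * grad                # total face flux F_{i-1/2}
    return theta - dt * (np.roll(F, -1) - F) / dx   # flux divergence update

theta1 = step(theta)
```

Because the update is in flux form, \(\sum_i \theta_i \Delta x_i\) is conserved exactly, and with these centered choices the scheme reproduces the standard second-order finite-difference discretization in the interior.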
Choosing simple centered second-order interpolation and differencing recovers the same ODEs resulting from finite differencing for the interior of a fluid. Differences arise at boundaries where a boundary is not positioned on a regular or smoothly varying grid. This method is used to represent the topography using lopped cells, see Adcroft et al. (1997) [AHM97]. Subtle differences also appear in more than one dimension away from boundaries. This happens because each direction is discretized independently in the finite difference method, while integrating over finite volumes implicitly treats all directions simultaneously.
C grid staggering of variables
The basic algorithm employed for stepping forward the momentum equations is based on retaining non-divergence of the flow at all times. This is most naturally done if the components of flow are staggered in space in the form of an Arakawa C grid (Arakawa and Lamb, 1977 [AL77]).
Figure 2.5 shows the components of flow (\(u\),\(v\),\(w\)) staggered in space such that the zonal component falls on the interface between continuity cells in the zonal direction. Similarly for the meridional and vertical directions. The continuity cell is synonymous with tracer cells (they are one and the same).
Grid initialization and data
Initialization of grid data is controlled by subroutine INI_GRID, which in turn calls INI_VERTICAL_GRID to initialize the vertical grid, and then either of INI_CARTESIAN_GRID, INI_SPHERICAL_POLAR_GRID or INI_CURVILINEAR_GRID to initialize the horizontal grid for Cartesian, spherical-polar or curvilinear coordinates respectively.
The reciprocals of all grid quantities are precalculated and this is done in subroutine INI_MASKS_ETC which is called later by subroutine INITIALISE_FIXED.
All grid descriptors are global arrays, stored in common blocks in GRID.h, and generally declared as _RS.
Horizontal grid
The model domain is decomposed into tiles and within each tile a quasi-regular grid is used. A tile is the basic unit of domain decomposition for parallelization, but may be used whether parallelized or not; see section [sec:domain_decomposition] for more details. Although the tiles may be patched together in an unstructured manner (i.e. an irregular or non-tessellating pattern), the interior of tiles is a structured grid of quadrilateral cells. The horizontal coordinate system is orthogonal curvilinear, meaning we can not necessarily treat the two horizontal directions as separable. Instead, each cell in the horizontal grid is described by the length of its sides and its area.
The grid information is quite general and describes any of the available coordinate systems: Cartesian, spherical-polar or curvilinear. All that is necessary to distinguish between the coordinate systems is to initialize the grid data (descriptors) appropriately.
In the following, we refer to the orientation of quantities on the computational grid using geographic terminology such as points of the compass. This is purely for convenience but should not be confused with the actual geographic orientation of model quantities.
Figure 2.6 (a) shows the tracer cell (synonymous with the continuity cell). The length of the southern edge, \(\Delta x_g\), western edge, \(\Delta y_g\), and surface area, \(A_c\), presented in the vertical are stored in arrays dxG, dyG and rA. The "g" suffix indicates that the lengths are along the defining grid boundaries. The "c" suffix associates the quantity with the cell centers. The quantities are staggered in space and the indexing is such that dxG(i,j) is positioned to the south of rA(i,j) and dyG(i,j) positioned to the west.
Figure 2.6 (b) shows the vorticity cell. The length of the southern edge, \(\Delta x_c\), western edge, \(\Delta y_c\), and surface area, \(A_\zeta\), presented in the vertical are stored in arrays dxC, dyC and rAz. The "c" suffix indicates that the lengths are measured between the cell centers, and the "\(\zeta\)" suffix associates points with the vorticity points. The quantities are staggered in space and the indexing is such that dxC(i,j) is positioned to the north of rAz(i,j) and dyC(i,j) positioned to the east.
Figure 2.6 (c) shows the "u" or western (w) cell. The length of the southern edge, \(\Delta x_v\), eastern edge, \(\Delta y_f\), and surface area, \(A_w\), presented in the vertical are stored in arrays dxV, dyF and rAw. The "v" suffix indicates that the length is measured between the v-points, the "f" suffix indicates that the length is measured between the (tracer) cell faces, and the "w" suffix associates points with the u-points (w stands for west). The quantities are staggered in space and the indexing is such that dxV(i,j) is positioned to the south of rAw(i,j) and dyF(i,j) positioned to the east.
Figure 2.6 (d) shows the "v" or southern (s) cell. The length of the northern edge, \(\Delta x_f\), western edge, \(\Delta y_u\), and surface area, \(A_s\), presented in the vertical are stored in arrays dxF, dyU and rAs. The "u" suffix indicates that the length is measured between the u-points, the "f" suffix indicates that the length is measured between the (tracer) cell faces, and the "s" suffix associates points with the v-points (s stands for south). The quantities are staggered in space and the indexing is such that dxF(i,j) is positioned to the north of rAs(i,j) and dyU(i,j) positioned to the west.
Reciprocals of horizontal grid descriptors
Lengths and areas appear in the denominator of expressions as much as in the numerator. For efficiency and portability, we precalculate the reciprocal of the horizontal grid quantities so that inline divisions can be avoided.
For each grid descriptor (array) there is a reciprocal named using the prefix recip_. This doubles the amount of storage in GRID.h, but they are all only 2-D descriptors.
Cartesian coordinates
Cartesian coordinates are selected when the logical flag usingCartesianGrid in namelist PARM04 is set to true. The grid spacing can be set to uniform via scalars dXspacing and dYspacing in namelist PARM04, or to variable resolution by the vectors DELX and DELY. Units are normally meters. Non-dimensional coordinates can be used by interpreting the gravitational constant as the Rayleigh number.
Spherical-polar coordinates
Spherical coordinates are selected when the logical flag usingSphericalPolarGrid in namelist PARM04 is set to true. The grid spacing can be set to uniform via scalars dXspacing and dYspacing in namelist PARM04, or to variable resolution by the vectors DELX and DELY. Units of these namelist variables are always degrees. The horizontal grid descriptors calculated from these namelist variables have units of meters.
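The degrees-to-meters conversion can be illustrated with standard spherical geometry (a sketch under assumed values, not the INI_SPHERICAL_POLAR_GRID code itself; the radius and spacings are ad hoc): edge lengths shrink with the cosine of latitude, and cell areas follow from the difference of sines of the bounding latitudes.

```python
import numpy as np

a = 6.371e6                       # planetary radius [m], an assumed value
dx_deg, dy_deg = 4.0, 4.0         # analogues of dXspacing, dYspacing [degrees]
lat_g = np.arange(-80.0, 80.0 + dy_deg, dy_deg)   # cell-corner latitudes

dlam = np.deg2rad(dx_deg)
phi = np.deg2rad(lat_g)

# southern edge length ~ a*cos(phi)*dlam, western edge ~ a*dphi (in meters)
dxG = a * np.cos(phi[:-1]) * dlam
dyG = a * np.deg2rad(dy_deg) * np.ones(lat_g.size - 1)

# exact spherical cell area between corner latitudes
rA = a**2 * dlam * (np.sin(phi[1:]) - np.sin(phi[:-1]))
```

Note the areas telescope: summing rA over the whole latitude band recovers \(a^2 \Delta\lambda (\sin 80^\circ - \sin(-80^\circ))\) exactly.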
Curvilinear coordinates
Curvilinear coordinates are selected when the logical flag usingCurvilinearGrid in namelist PARM04 is set to true. The grid spacing can not be set via the namelist. Instead, the grid descriptors are read from data files, one for each descriptor. As for other grids, the horizontal grid descriptors have units of meters.
Vertical grid
As for the horizontal grid, we use the suffixes "c" and "f" to indicate centers and faces. Figure 2.7 (a) shows the default vertical grid used by the model. \(\Delta r_f\) is the difference in \(r\) (vertical coordinate) between the faces (i.e. \(\Delta r_f \equiv - \delta_k r\), where the minus sign appears due to the convention that the surface layer has index \(k=1\)).
The vertical grid is calculated in subroutine INI_VERTICAL_GRID and specified via the vector delR in namelist PARM04. The units of "r" are either meters or Pascals, depending on the isomorphism being used, which in turn is dependent only on the choice of equation of state.
There are alternative namelist vectors delZ and delP which dictate whether z or p coordinates are to be used, but we intend to phase these out since they are redundant.
The reciprocals \(\Delta r_f^{1}\) and \(\Delta r_c^{1}\) are precalculated (also in subroutine INI_VERTICAL_GRID). All vertical grid descriptors are stored in common blocks in GRID.h.
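A minimal sketch of this construction (illustrative Python, not INI_VERTICAL_GRID; the thickness values are ad hoc and a z-like coordinate with the surface at \(r=0\) and \(k=1\) at the top is assumed):

```python
import numpy as np

delR = np.array([10.0, 10.0, 15.0, 20.0, 20.0])   # layer thicknesses Delta r_f

rF = np.concatenate([[0.0], -np.cumsum(delR)])    # Nr+1 interface positions
rC = 0.5 * (rF[:-1] + rF[1:])                     # cell centers midway between faces

drF = -np.diff(rF)            # recovers delR (minus sign: k increases downward)
drC = -np.diff(rC)            # center-to-center spacing Delta r_c
recip_drF = 1.0 / drF         # precalculated reciprocals, as described above
```

With this cell-centered construction, \(\Delta r_c\) between adjacent centers is simply the average of the two neighboring layer thicknesses.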
The above grid Figure 2.7 (a) is known as the cell-centered approach because the tracer points are at cell centers; the cell centers are midway between the cell interfaces. This discretization is selected when the thickness of the levels is provided (delR, parameter file data, namelist PARM04). An alternative, the vertex or interface-centered approach, is shown in Figure 2.7 (b). Here, the interior interfaces are positioned midway between the tracer nodes (no longer cell centers). This approach is formally more accurate for evaluation of hydrostatic pressure and vertical advection, but historically the cell-centered approach has been used. An alternative form of subroutine INI_VERTICAL_GRID is used to select the interface-centered approach. This form requires specifying \(Nr+1\) vertical distances delRc (parameter file data, namelist PARM04, e.g. ideal_2D_oce/input/data) corresponding to surface-to-center, \(Nr-1\) center-to-center, and center-to-bottom distances.
Topography: partially filled cells
Adcroft et al. (1997) [AHM97] presented two alternatives to the step-wise finite difference representation of topography. The method is known to the engineering community as the intersecting boundary method. It involves allowing the boundary to intersect a grid of cells, thereby modifying the shape of those cells intersected. We suggested allowing the topography to take on a piecewise linear representation (shaved cells) or a simpler piecewise constant representation (partial step). Both show dramatic improvements in solution compared to the traditional full-step representation, the piecewise linear being the best. However, the storage requirements are excessive, so the simpler piecewise constant or partial-step method is all that is currently supported.
Figure 2.8 shows a schematic of the x-r plane indicating how the thickness of a level is determined at tracer and u points. The physical thickness of a tracer cell is given by \(h_c(i,j,k) \Delta r_f(k)\) and the physical thickness of the open side is given by \(h_w(i,j,k) \Delta r_f(k)\). Three 3-D descriptors \(h_c\), \(h_w\) and \(h_s\) are used to describe the geometry: hFacC, hFacW and hFacS respectively. These are calculated in subroutine INI_MASKS_ETC along with their reciprocals recip_hFacC, recip_hFacW and recip_hFacS.
The non-dimensional fractions (or hFacs as we call them) are calculated from the model depth array and then processed to avoid tiny volumes. The rule is that if a fraction is less than hFacMin then it is rounded to the nearer of \(0\) or hFacMin, or if the physical thickness is less than hFacMinDr then it is similarly rounded. The larger of the two criteria is used when there is a conflict. By setting hFacMinDr equal to or larger than the thinnest nominal layer, \(\min{(\Delta z_f)}\), but setting hFacMin to some small fraction, the model will only lop thick layers but retain stability based on the thinnest unlopped thickness: \(\min{(\Delta z_f,hFacMinDr)}\).
S/R INI_MASKS_ETC
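The rounding rule above can be sketched for a single partial cell as follows (an illustrative reading of the rule, not the INI_MASKS_ETC code; the function name and defaults are ad hoc):

```python
def round_hfac(frac, drF, hFacMin=0.1, hFacMinDr=5.0):
    """Apply the hFacMin / hFacMinDr rounding to a raw cell fraction.

    frac: raw open fraction of the level; drF: nominal level thickness.
    """
    # the two criteria give two minimum fractions; the larger (stricter)
    # one wins when they conflict, capped at a full cell
    h_min = max(hFacMin, min(hFacMinDr / drF, 1.0))
    if frac < h_min:
        # round to the nearer of 0 and the minimum allowed fraction
        return 0.0 if frac < 0.5 * h_min else h_min
    return frac
```

For a 50 m level with the defaults above, a raw fraction of 0.02 is rounded down to 0 while 0.08 is rounded up to 0.1; for a 10 m level the thickness criterion (5 m) dominates, so even a fraction of 0.15 is rounded away.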
Continuity and horizontal pressure gradient term
The core algorithm is based on the "C grid" discretization of the continuity equation which can be summarized as:
where the continuity equation has been most naturally discretized by staggering the three components of velocity as shown in Figure 2.5. The grid lengths \(\Delta x_c\) and \(\Delta y_c\) are the lengths between tracer points (cell centers). The grid lengths \(\Delta x_g\), \(\Delta y_g\) are the grid lengths between cell corners. \(\Delta r_f\) and \(\Delta r_c\) are the distances (in units of \(r\)) between level interfaces (w-level) and level centers (tracer level). The surface area presented in the vertical is denoted \({\cal A}_c\). The factors \(h_w\) and \(h_s\) are non-dimensional fractions (between 0 and 1) that represent the fraction of the cell depth that is "open" for fluid flow.
The last equation, the discrete continuity equation, can be summed in the vertical to yield the freesurface equation:
The source term \(P-E\) on the rhs of continuity accounts for the local addition of volume due to excess precipitation and run-off over evaporation, and only enters the top level of the ocean model.
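The vertical summation can be made concrete with a minimal C-grid sketch (illustrative Python on a uniform, periodic, full-cell grid; the sizes and names are assumptions, not model code): the vertical velocity is diagnosed by integrating the horizontal divergence upward from a flat bottom where \(w=0\).

```python
import numpy as np

nx, ny, nr = 8, 8, 4
dx = dy = 1.0
drF = np.full(nr, 1.0)                  # level thicknesses (full cells, hFac=1)
rng = np.random.default_rng(1)
u = rng.standard_normal((nr, ny, nx))   # u at western faces of cells
v = rng.standard_normal((nr, ny, nx))   # v at southern faces of cells

# horizontal divergence per level (periodic wrap for brevity)
hdiv = ((np.roll(u, -1, axis=2) - u) / dx
        + (np.roll(v, -1, axis=1) - v) / dy)

# continuity: the jump in w across a cell is -hdiv*drF;
# integrate from the bottom interface (w=0) up to the surface
w = np.zeros((nr + 1, ny, nx))          # w at the nr+1 interfaces, w[nr]=0
for k in range(nr - 1, -1, -1):
    w[k] = w[k + 1] - hdiv[k] * drF[k]
```

The surface value w[0] equals minus the vertically integrated divergence, which is exactly the rhs of the free-surface equation obtained by summing the discrete continuity equation in the vertical.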
Hydrostatic balance
The vertical momentum equation has the hydrostatic or quasihydrostatic balance on the right hand side. This discretization guarantees that the conversion of potential to kinetic energy as derived from the buoyancy equation exactly matches the form derived from the pressure gradient terms when forming the kinetic energy equation.
In the ocean, using z-coordinates, the hydrostatic balance terms are discretized:
In the atmosphere, using p-coordinates, hydrostatic balance is discretized:
where \(\Delta \Pi\) is the difference in Exner function between the pressure points. The nonhydrostatic equations are not available in the atmosphere.
The difference in approach between ocean and atmosphere occurs because of the direct use of the ideal gas equation in forming the potential energy conversion term \(\alpha \omega\). Because of the different representation of hydrostatic balance between ocean and atmosphere there is no elegant way to represent both systems using an arbitrary coordinate.
The integration for hydrostatic pressure is made in the positive \(r\) direction (increasing k-index). For the ocean, this is from the free-surface down, and for the atmosphere this is from the ground up.
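For the ocean case, the downward accumulation can be sketched as a simple mid-point sum (illustrative Python under assumed values; a mid-point rule and the ad hoc density profile are assumptions, not the finite-volume form used in CALC_PHI_HYD):

```python
import numpy as np

nr = 5
drF = np.full(nr, 10.0)                  # layer thicknesses [m]
g, rho0 = 9.81, 1035.0
rho_anom = np.linspace(0.0, 2.0, nr)     # density anomaly, increasing with depth
b = -g * rho_anom / rho0                 # buoyancy anomaly b' (z-coordinates)

# integrate d(phi)/dz = b downward from phi'=0 at the surface:
# half a cell to the first center, then center-to-center mid-point sums
phi = np.zeros(nr)                       # phi'_hyd at cell centers
phi[0] = -b[0] * 0.5 * drF[0]
for k in range(1, nr):
    phi[k] = phi[k - 1] - 0.5 * (b[k - 1] * drF[k - 1] + b[k] * drF[k])
```

With a negative buoyancy anomaly (denser than reference), the pressure anomaly \(\phi'\) grows monotonically with the k-index, i.e. downward, as expected.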
The calculations are made in the subroutine CALC_PHI_HYD. Inside this routine, one or other of the atmospheric/oceanic forms is selected based on the string variable buoyancyRelation.
Flux-form momentum equations
The original finite volume model was based on the Eulerian flux form momentum equations. This is the default, though the vector-invariant form is optionally available (and recommended in some cases).
The "G"s (our colloquial name for all the terms on the rhs!) are broken into the various advective, Coriolis, horizontal dissipation, vertical dissipation and metric forces:
In the hydrostatic limit, \(G_w=0\) and \(\epsilon_{nh}=0\), reducing the vertical momentum to hydrostatic balance.
These terms are calculated in routines called from subroutine MOM_FLUXFORM and collected into the global arrays gU, gV, and gW.
Advection of momentum
The advective operator is second order accurate in space:
and because of the flux form does not contribute to the global budget of linear momentum. The quantities \(U\), \(V\) and \(W\) are volume fluxes defined:
The advection of momentum takes the same form as the advection of tracers but by a translated advective flow. Consequently, the conservation of second moments, derived for tracers later, applies to \(u^2\) and \(v^2\) and \(w^2\) so that advection of momentum correctly conserves kinetic energy.
Coriolis terms
The "pure C grid" Coriolis terms (i.e. in absence of the CD scheme) are discretized:
where the Coriolis parameters \(f\) and \(f'\) are defined:
where \(\varphi\) is geographic latitude when using spherical geometry; otherwise the \(\beta\)-plane definition is used:
This discretization globally conserves kinetic energy. It should be noted that despite the use of this discretization in former publications, all calculations to date have used the following different discretization:
where the subscripts on \(f\) and \(f'\) indicate evaluation of the Coriolis parameters at the appropriate points in space. The above discretization does not conserve anything, especially energy, but for historical reasons is the default for the code. A flag controls this discretization: set the run-time integer selectCoriScheme to two (=2) (which otherwise defaults to zero) to select the energy-conserving form (2.95), (2.96), and (2.97) above.
Curvature metric terms
The most commonly used coordinate system on the sphere is the geographic system \((\lambda,\varphi)\). The curvilinear nature of these coordinates on the sphere leads to some "metric" terms in the component momentum equations. Under the thin-atmosphere and hydrostatic approximations these terms are discretized:
where \(a\) is the radius of the planet (sphericity is assumed) or the radial distance of the particle (i.e. a function of height). It is easy to see that this discretization satisfies all the properties of the discrete Coriolis terms since the metric factor \(\frac{u}{a} \tan{\varphi}\) can be viewed as a modification of the vertical Coriolis parameter: \(f \rightarrow f+\frac{u}{a} \tan{\varphi}\).
However, as for the Coriolis terms, a non-energy-conserving form has exclusively been used to date:
where \(\tan{\varphi}\) is evaluated at the \(u\) and \(v\) points respectively.
Non-hydrostatic metric terms
For the non-hydrostatic equations, dropping the thin-atmosphere approximation reintroduces metric terms involving \(w\) which are required to conserve angular momentum:
Because we are always consistent, even if consistently wrong, we have, in the past, used a different discretization in the model which is:
Lateral dissipation
Historically, we have represented the SGS Reynolds stresses simply as down-gradient momentum fluxes, ignoring constraints on the stress tensor such as symmetry.
The lateral viscous stresses are discretized:
where the non-dimensional factors \(c_{lm\Delta^n}(\varphi), \{l,m,n\} \in \{1,2\}\) define the "cosine" scaling with latitude which can be applied in various ad hoc ways. For instance, \(c_{11\Delta} = c_{21\Delta} = (\cos{\varphi})^{3/2}\), \(c_{12\Delta}=c_{22\Delta}=1\) would represent the anisotropic cosine scaling typically used on the "lat-lon" grid for Laplacian viscosity.
It should be noted that despite the ad hoc nature of the scaling, some scaling must be done, since on a lat-lon grid the converging meridians make it very unlikely that a stable viscosity parameter exists across the entire model domain.
The Laplacian viscosity coefficient, \(A_h\) (viscAh), has units of \(m^2 s^{-1}\). The biharmonic viscosity coefficient, \(A_4\) (viscA4), has units of \(m^4 s^{-1}\).
Two types of lateral boundary condition exist for the lateral viscous terms: no-slip and free-slip.
The free-slip condition is most convenient to code since it is equivalent to zero-stress on boundaries. Simple masking of the stress components sets them to zero. The fractional open stress is properly handled using the lopped cells.
The no-slip condition defines the normal gradient of a tangential flow such that the flow is zero on the boundary. Rather than modify the stresses by using complicated functions of the masks and "ghost" points (see Adcroft and Marshall (1998) [AM98]), we add the boundary stresses as an additional source term in cells next to solid boundaries. This has the advantage of being able to cope with "thin walls", and also makes the interior stress calculation (code) independent of the boundary conditions. The "body" force takes the form:
In fact, the above discretization is not quite complete, because it assumes that the bathymetry at velocity points is deeper than at neighboring vorticity points, e.g. \(1-h_w < 1-h_\zeta\).
Vertical dissipation
Vertical viscosity terms are discretized with only partial adherence to the variable grid lengths introduced by the finite volume formulation. This reduces the formal accuracy of these terms to just first order, but only next to boundaries; exactly where other terms, such as linear and quadratic bottom drag, appear.
represents the general discrete form of the vertical dissipation terms.
In the interior the vertical stresses are discretized:
It should be noted that in the non-hydrostatic form, the stress tensor is even less consistent than for the hydrostatic form (see Wajsowicz (1993) [Waj93]). It is well known how to do this properly (see Griffies and Hallberg (2000) [GH00]) and it is on the list of to-dos.
As for the lateral viscous terms, the free-slip condition is equivalent to simply setting the stress to zero on boundaries. The no-slip condition is implemented as an additional term acting on top of the interior and free-slip stresses. Bottom drag represents additional friction, in addition to that imposed by the no-slip condition at the bottom. The drag is cast as a stress expressed as a linear or quadratic function of the mean flow in the layer above the topography:
where these terms are only evaluated immediately above topography. \(r_b\) (bottomDragLinear) has units of \(m\,s^{-1}\) and a typical value of the order of 0.0002 \(m\,s^{-1}\). \(C_d\) (bottomDragQuadratic) is dimensionless, with typical values in the range 0.001-0.003.
S/R MOM_U_BOTTOMDRAG, MOM_V_BOTTOMDRAG
\(\tau_{13}^{bottomdrag} / \Delta r_f , \tau_{23}^{bottomdrag} / \Delta r_f\) : vF ( local to MOM_FLUXFORM.F )
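A sketch of the combined linear-plus-quadratic drag tendency follows (illustrative Python with the typical parameter values quoted above; using the local kinetic energy for the flow speed and dividing the stress by the layer thickness are stated assumptions, not the exact MOM_U_BOTTOMDRAG formulation):

```python
import numpy as np

bottomDragLinear = 2.0e-4        # r_b [m/s], typical value from the text
bottomDragQuadratic = 2.0e-3     # C_d [-], within the quoted 0.001-0.003 range

def bottom_drag_tendency(u, ke, drF):
    """Drag tendency [m/s^2] on velocity u in the cell above topography.

    ke is the local kinetic energy (|v|^2/2), used to form the speed for
    the quadratic term; dividing the stress by drF yields a tendency.
    """
    speed = np.sqrt(2.0 * ke)                 # |v| recovered from KE
    stress = -(bottomDragLinear + bottomDragQuadratic * speed) * u
    return stress / drF

# example: u = 0.1 m/s, v = 0.05 m/s flow over a 10 m thick bottom cell
gU = bottom_drag_tendency(u=0.1, ke=0.5 * (0.1**2 + 0.05**2), drF=10.0)
```

The tendency always opposes the flow (negative here, since u is positive), and the quadratic contribution grows with the flow speed while the linear part does not.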
Derivation of discrete energy conservation
These discrete equations conserve kinetic plus potential energy using the following definitions:
Mom Diagnostics

<-Name->|Levs|<-parsing code->|<--  Units   -->|<- Tile (max=80c)
VISCAHZ | 15 |SZ      MR      |m^2/s           |Harmonic Visc Coefficient (m2/s) (Zeta Pt)
VISCA4Z | 15 |SZ      MR      |m^4/s           |Biharmonic Visc Coefficient (m4/s) (Zeta Pt)
VISCAHD | 15 |SM      MR      |m^2/s           |Harmonic Viscosity Coefficient (m2/s) (Div Pt)
VISCA4D | 15 |SM      MR      |m^4/s           |Biharmonic Viscosity Coefficient (m4/s) (Div Pt)
VAHZMAX | 15 |SZ      MR      |m^2/s           |CFL-MAX Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZMAX | 15 |SZ      MR      |m^4/s           |CFL-MAX Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDMAX | 15 |SM      MR      |m^2/s           |CFL-MAX Harm Visc Coefficient (m2/s) (Div Pt)
VA4DMAX | 15 |SM      MR      |m^4/s           |CFL-MAX Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZMIN | 15 |SZ      MR      |m^2/s           |RE-MIN Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZMIN | 15 |SZ      MR      |m^4/s           |RE-MIN Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDMIN | 15 |SM      MR      |m^2/s           |RE-MIN Harm Visc Coefficient (m2/s) (Div Pt)
VA4DMIN | 15 |SM      MR      |m^4/s           |RE-MIN Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZLTH | 15 |SZ      MR      |m^2/s           |Leith Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZLTH | 15 |SZ      MR      |m^4/s           |Leith Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDLTH | 15 |SM      MR      |m^2/s           |Leith Harm Visc Coefficient (m2/s) (Div Pt)
VA4DLTH | 15 |SM      MR      |m^4/s           |Leith Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZLTHD| 15 |SZ      MR      |m^2/s           |LeithD Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZLTHD| 15 |SZ      MR      |m^4/s           |LeithD Biharm Visc Coefficient (m4/s) (Zeta Pt)
VAHDLTHD| 15 |SM      MR      |m^2/s           |LeithD Harm Visc Coefficient (m2/s) (Div Pt)
VA4DLTHD| 15 |SM      MR      |m^4/s           |LeithD Biharm Visc Coefficient (m4/s) (Div Pt)
VAHZSMAG| 15 |SZ      MR      |m^2/s           |Smagorinsky Harm Visc Coefficient (m2/s) (Zeta Pt)
VA4ZSMAG| 15 |SZ      MR      |m^4/s           |Smagorinsky Biharm Visc Coeff. (m4/s) (Zeta Pt)
VAHDSMAG| 15 |SM      MR      |m^2/s           |Smagorinsky Harm Visc Coefficient (m2/s) (Div Pt)
VA4DSMAG| 15 |SM      MR      |m^4/s           |Smagorinsky Biharm Visc Coeff. (m4/s) (Div Pt)
momKE   | 15 |SM      MR      |m^2/s^2         |Kinetic Energy (in momentum Eq.)
momHDiv | 15 |SM      MR      |s^-1            |Horizontal Divergence (in momentum Eq.)
momVort3| 15 |SZ      MR      |s^-1            |3rd component (vertical) of Vorticity
Strain  | 15 |SZ      MR      |s^-1            |Horizontal Strain of Horizontal Velocities
Tension | 15 |SM      MR      |s^-1            |Horizontal Tension of Horizontal Velocities
UBotDrag| 15 |UU   129MR      |m/s^2           |U momentum tendency from Bottom Drag
VBotDrag| 15 |VV   128MR      |m/s^2           |V momentum tendency from Bottom Drag
USidDrag 15 UU 131MR m/s^2 U momentum tendency from Side Drag
VSidDrag 15 VV 130MR m/s^2 V momentum tendency from Side Drag
Um_Diss  15 UU 133MR m/s^2 U momentum tendency from Dissipation
Vm_Diss  15 VV 132MR m/s^2 V momentum tendency from Dissipation
Um_Advec 15 UU 135MR m/s^2 U momentum tendency from Advection terms
Vm_Advec 15 VV 134MR m/s^2 V momentum tendency from Advection terms
Um_Cori  15 UU 137MR m/s^2 U momentum tendency from Coriolis term
Vm_Cori  15 VV 136MR m/s^2 V momentum tendency from Coriolis term
Um_Ext  15 UU 137MR m/s^2 U momentum tendency from external forcing
Vm_Ext  15 VV 138MR m/s^2 V momentum tendency from external forcing
Um_AdvZ3 15 UU 141MR m/s^2 U momentum tendency from Vorticity Advection
Vm_AdvZ3 15 VV 140MR m/s^2 V momentum tendency from Vorticity Advection
Um_AdvRe 15 UU 143MR m/s^2 U momentum tendency from vertical Advection (Explicit part)
Vm_AdvRe 15 VV 142MR m/s^2 V momentum tendency from vertical Advection (Explicit part)
ADVx_Um  15 UM 145MR m^4/s^2 Zonal Advective Flux of U momentum
ADVy_Um  15 VZ 144MR m^4/s^2 Meridional Advective Flux of U momentum
ADVrE_Um 15 WU LR m^4/s^2 Vertical Advective Flux of U momentum (Explicit part)
ADVx_Vm  15 UZ 148MR m^4/s^2 Zonal Advective Flux of V momentum
ADVy_Vm  15 VM 147MR m^4/s^2 Meridional Advective Flux of V momentum
ADVrE_Vm 15 WV LR m^4/s^2 Vertical Advective Flux of V momentum (Explicit part)
VISCx_Um 15 UM 151MR m^4/s^2 Zonal Viscous Flux of U momentum
VISCy_Um 15 VZ 150MR m^4/s^2 Meridional Viscous Flux of U momentum
VISrE_Um 15 WU LR m^4/s^2 Vertical Viscous Flux of U momentum (Explicit part)
VISrI_Um 15 WU LR m^4/s^2 Vertical Viscous Flux of U momentum (Implicit part)
VISCx_Vm 15 UZ 155MR m^4/s^2 Zonal Viscous Flux of V momentum
VISCy_Vm 15 VM 154MR m^4/s^2 Meridional Viscous Flux of V momentum
VISrE_Vm 15 WV LR m^4/s^2 Vertical Viscous Flux of V momentum (Explicit part)
VISrI_Vm 15 WV LR m^4/s^2 Vertical Viscous Flux of V momentum (Implicit part)
Vector invariant momentum equations

The finite volume method lends itself to describing the continuity and tracer equations in curvilinear coordinate systems. However, in curvilinear coordinates many new metric terms appear in the momentum equations (written in Lagrangian or flux-form), making generalization far from elegant. Fortunately, an alternative form of the equations, the vector invariant equations, are exactly that: invariant under coordinate transformations, so that they can be applied uniformly in any orthogonal curvilinear coordinate system such as spherical coordinates, boundary following coordinates, or the conformal spherical cube system.
The nonhydrostatic vector invariant equations read:
which describe motions in any orthogonal curvilinear coordinate system. Here, \(B\) is the Bernoulli function and \(\vec{\zeta}=\nabla \wedge \vec{v}\) is the vorticity vector. We can take advantage of the elegance of these equations when discretizing them and use the discrete definitions of the grad, curl and divergence operators to satisfy constraints. We can also consider the analogy to forming derived equations, such as the vorticity equation, and examine how the discretization can be adjusted to give suitable vorticity advection among other things.
The underlying algorithm is the same as for the flux form equations. All that has changed is the contents of the \(G\)'s. For the time being, only the hydrostatic terms have been coded, but we will indicate the points where non-hydrostatic contributions will enter:
Relative vorticity

The vertical component of relative vorticity is explicitly calculated and used in the discretization. The particular form is crucial for numerical stability; alternative definitions break the conservation properties of the discrete equations.
Relative vorticity is defined:
where \({\cal A}_\zeta\) is the area of the vorticity cell presented in the vertical and \(\Gamma\) is the circulation about that cell.
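The circulation-over-area definition can be sketched on a uniform grid. This is a minimal illustration, not MITgcm code: it assumes constant spacings dx, dy (MITgcm weights each leg of the circulation by the actual grid lengths and divides by the true cell area), and only interior corner points are returned.

```python
import numpy as np

def rel_vort3(u, v, dx, dy):
    """Vertical relative vorticity at interior vorticity (corner) points.

    zeta3 = (circulation around the vorticity cell) / (cell area); on a
    uniform grid this reduces to
        zeta3[j,i] = (v[j,i] - v[j,i-1])/dx - (u[j,i] - u[j-1,i])/dy.
    Sketch only; MITgcm uses the actual grid lengths and cell areas.
    """
    dvdx = (v[1:, 1:] - v[1:, :-1]) / dx
    dudy = (u[1:, 1:] - u[:-1, 1:]) / dy
    return dvdx - dudy
```

As a sanity check, solid-body rotation (u = -y, v = x) gives the expected uniform vorticity of 2.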
Kinetic energy
The kinetic energy, denoted \(KE\), is defined:
S/R MOM_CALC_KE
Coriolis terms

The potential enstrophy conserving form of the linear Coriolis terms is written:
Here, the Coriolis parameter \(f\) is defined at vorticity (corner) points.
The potential enstrophy conserving form of the nonlinear Coriolis terms is written:
The Coriolis terms can also be evaluated together and expressed in terms of absolute vorticity \(f+\zeta_3\). The potential enstrophy conserving form using the absolute vorticity is written:
The distinction between using absolute vorticity or relative vorticity is useful when constructing higher order advection schemes; monotone advection of relative vorticity behaves differently to monotone advection of absolute vorticity. Currently the choice of relative/absolute vorticity, centered/upwind/high order advection is available only through commented subroutine calls.
Shear terms

The shear terms (\(\zeta_2w\) and \(\zeta_1w\)) are discretized to guarantee that no spurious generation of kinetic energy is possible; the horizontal gradient of the Bernoulli function has to be consistent with the vertical advection of shear:
Gradient of Bernoulli function
Horizontal divergence

The horizontal divergence, a complementary quantity to relative vorticity, is used in parameterizing the Reynolds stresses and is discretized:
S/R MOM_CALC_HDIV
Horizontal dissipation
The following discretization of horizontal dissipation conserves potential vorticity (thickness weighted relative vorticity) and divergence and dissipates energy, enstrophy and divergence squared:
where
S/R MOM_VI_HDISSIP
Vertical dissipation

Currently, this is exactly the same code as for the flux-form equations.
represents the general discrete form of the vertical dissipation terms.
In the interior the vertical stresses are discretized:
Tracer equations

The basic discretization used for the tracer equations is the second order piecewise constant finite volume form of the forced advection-diffusion equations. There are many alternatives to the second order method for advection, and alternative parameterizations for the sub-grid scale processes. The Gent-McWilliams eddy parameterization, KPP mixing scheme and PV flux parameterization are all dealt with in separate sections. The basic discretization of the advection-diffusion part of the tracer equations and the various advection schemes will be described here.
Time-stepping of tracers: AB II

The default advection scheme is the centered second order method, which requires a second order or quasi-second order time-stepping scheme to be stable. Historically this has been the quasi-second order Adams-Bashforth method (AB II), applied to all terms. For an arbitrary tracer, \(\tau\), the forced advection-diffusion equation reads:
where \(G_{adv}^\tau\), \(G_{diff}^\tau\) and \(G_{forc}^\tau\) are the tendencies due to advection, diffusion and forcing, respectively, namely:
and the forcing can be some arbitrary function of state, time and space.
The term, \(\tau \nabla \cdot {\bf v}\), is required to retain local conservation in conjunction with the linear implicit free-surface. It only affects the surface layer since the flow is non-divergent everywhere else. This term is therefore referred to as the surface correction term. Global conservation is not possible using the flux-form (as here) and a linearized free-surface (Griffies and Hallberg (2000) [GH00], Campin et al. (2004) [CAHM04]).
The continuity equation can be recovered by setting \(G_{diff}=G_{forc}=0\) and \(\tau=1\).
The driver routines that call the routines to calculate tendencies are CALC_GT and CALC_GS for temperature and salt (moisture), respectively. These in turn call a generic advection-diffusion routine, GAD_CALC_RHS, that is called with the flow field and relevant tracer as arguments and returns the collective tendency due to advection and diffusion. Forcing is added subsequently in CALC_GT or CALC_GS to the same tendency array.
S/R GAD_CALC_RHS
The space and time discretizations are treated separately (method of lines). Tendencies are calculated at time levels \(n\) and \(n-1\) and extrapolated to \(n+1/2\) using the Adams-Bashforth method:
where \(G^{(n)} = G_{adv}^\tau + G_{diff}^\tau + G_{src}^\tau\) at time step \(n\). The tendency at \(n-1\) is not recalculated but rather the tendency at \(n\) is stored in a global array for later reuse.
S/R ADAMS_BASHFORTH2
The tracers are stepped forward in time using the extrapolated tendency:
S/R TIMESTEP_TRACER
Strictly speaking, the AB II scheme should be applied only to the advection terms. However, this scheme is only used in conjunction with the standard second, third and fourth order advection schemes. Selecting any other advection scheme disables Adams-Bashforth for tracers, so that explicit diffusion and forcing use the forward method.
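The AB II extrapolation and forward step can be condensed into a short sketch. This is illustrative, not MITgcm code; the function name is invented, and the small stabilizing offset (abEps in the namelist) is given an illustrative value of 0.1.

```python
def ab2_step(tau, g_now, g_prev, dt, eps=0.1):
    """One quasi-second-order Adams-Bashforth (AB II) tracer step.

    The tendency is extrapolated to time level n+1/2,
        G^(n+1/2) = (3/2 + eps) G^(n) - (1/2 + eps) G^(n-1),
    and the tracer is then stepped forward in time.  `eps` is the small
    stabilizing offset; 0.1 here is illustrative, not a recommendation.
    """
    g_half = (1.5 + eps) * g_now - (0.5 + eps) * g_prev
    return tau + dt * g_half
```

In practice the tendency at level \(n\) is stored for reuse as `g_prev` on the next step, exactly as the text describes for the global array.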
Advection schemes

Linear advection schemes

The advection schemes known as centered second order, centered fourth order, first order upwind and upwind biased third order are known as linear advection schemes because the coefficients for interpolating the advected tracer are linear functions of the flow only, not of the tracer field itself. We discuss these first since they are the most commonly used in the field and the most familiar.
Centered second order advection-diffusion
The basic discretization, centered second order, is the default. It is designed to be consistent with the continuity equation to facilitate conservation properties analogous to the continuum. However, centered second order advection is notoriously noisy and must be used in conjunction with some finite amount of diffusion to produce a sensible solution.
The advection operator is discretized:
where the area integrated fluxes are given by:
The quantities \(U\), \(V\) and \(W\) are volume fluxes, defined as:
For nondivergent flow, this discretization can be shown to conserve the tracer both locally and globally and to globally conserve tracer variance, \(\tau^2\). The proof is given in Adcroft (1995) [Adc95] and Adcroft et al. (1997) [AHM97] .
S/R GAD_C2_ADV_X
S/R GAD_C2_ADV_Y
S/R GAD_C2_ADV_R
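The centered second order zonal flux (the tracer averaged onto the cell face times the volume flux) can be sketched in one dimension. This is a schematic of the formula, not the GAD_C2_ADV_X code itself: it assumes a periodic 1-D array and that `U[i]` is the volume flux through the face between cells i-1 and i.

```python
import numpy as np

def c2_flux_x(U, tau):
    """Area-integrated centered second-order zonal flux, F = U * mean(tau).

    The tracer is interpolated to each cell face by simple averaging of
    the two adjacent cells, as in the centered second order scheme.
    Periodic wrap-around is assumed here for brevity.
    """
    tau_face = 0.5 * (tau + np.roll(tau, 1))  # tau averaged onto faces
    return U * tau_face
```

For non-divergent flow (constant U, periodic domain) the flux divergence sums to zero, consistent with the local and global conservation noted above.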
Third order upwind bias advection

Upwind biased third order advection offers a relatively good compromise between accuracy and smoothness. It is not a "positive" scheme, meaning false extrema are permitted, but their amplitude is significantly reduced over the centered second order method.
The third order upwind fluxes are discretized:
At boundaries, \(\delta_{\hat{n}} \tau\) is set to zero allowing \(\delta_{nn}\) to be evaluated. We are currently examining the accuracy of this boundary condition and its effect on the solution.
S/R GAD_U3_ADV_X
S/R GAD_U3_ADV_Y
S/R GAD_U3_ADV_R
Centered fourth order advection
Centered fourth order advection is formally the most accurate scheme we have implemented and can be used to great effect in high resolution simulations where dynamical scales are well resolved. However, the scheme is noisy, like the centered second order method, and so must be used with some finite amount of diffusion. Biharmonic is recommended since it is more scale selective and less likely to diffuse away the well resolved gradient the fourth order scheme worked so hard to create.
The centered fourth order fluxes are discretized:
As for the third order scheme, the best discretization near boundaries is under investigation but currently \(\delta_i \tau=0\) on a boundary.
S/R GAD_C4_ADV_X
S/R GAD_C4_ADV_Y
S/R GAD_C4_ADV_R
First order upwind advection

Although the upwind scheme is the underlying scheme for the robust or non-linear methods given in Section 2.17.2, we haven't actually implemented this method for general use. It would be very diffusive and it is unlikely that it could ever produce more useful results than the positive higher order schemes.
Upwind bias is introduced into many schemes using the abs function and it allows the first order upwind flux to be written:
If for some reason the above method is desired, the second order flux limiter scheme described in Section 2.17.2.1 reduces to the above scheme if the limiter is set to zero.
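The abs-function form of the upwind flux quoted above is compact enough to write out directly. This is a sketch of the formula, not MITgcm code; `tau_left`/`tau_right` are the tracer values on either side of the face.

```python
def upwind1_flux(u, tau_left, tau_right):
    """First-order upwind flux written with the abs function:

        F = (u/2)(tau_L + tau_R) - (|u|/2)(tau_R - tau_L)

    which selects tau_L when u > 0 and tau_R when u < 0, without any
    branching.  Sketch of the formula above, not MITgcm code.
    """
    return 0.5 * u * (tau_left + tau_right) - 0.5 * abs(u) * (tau_right - tau_left)
```

The same trick (splitting a flux into a centered part plus an |u|-weighted diffusive part) is how upwind bias enters many of the schemes below.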
Nonlinear advection schemes
Nonlinear advection schemes invoke nonlinear interpolation and are widely used in computational fluid dynamics (nonlinear does not refer to the nonlinearity of the advection operator). The flux limited advection schemes belong to the class of finite volume methods which neatly ties into the spatial discretization of the model.
When employing the flux limited schemes, first order upwind or direct-space-time method, the time-stepping is switched to forward in time.
Second order flux limiters

The second order flux limiter method can be cast in several ways, but is generally expressed in terms of other flux approximations. For example, in terms of a first order upwind flux and second order Lax-Wendroff flux, the limited flux is given as:
where \(\psi(r)\) is the limiter function,
is the upwind flux,
is the Lax-Wendroff flux and \(c = \frac{u \Delta t}{\Delta x}\) is the Courant (CFL) number.
The limiter function, \(\psi(r)\), takes the slope ratio
as its argument. There are many choices of limiter function but we only provide the Superbee limiter (Roe 1985 [Roe85]):
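The Superbee limiter and the limited flux it feeds into can be sketched as follows. This is an illustration of the formulas above, not MITgcm code; for brevity it assumes \(u > 0\) (so the upwind flux uses the left cell) and takes the slope ratio `r` as a precomputed argument.

```python
def superbee(r):
    """Superbee limiter: psi(r) = max(0, min(1, 2r), min(2, r))."""
    return max(0.0, min(1.0, 2.0 * r), min(2.0, r))

def limited_flux(u, c, tau_im1, tau_i, r):
    """Second-order flux-limited flux (u > 0 assumed here):

        F = F_upwind + (psi(r)/2) (1 - c) u (tau_i - tau_im1)

    i.e. the upwind flux plus a limited anti-diffusive correction
    toward the Lax-Wendroff flux, with c the Courant number.
    """
    f_up = u * tau_im1
    return f_up + 0.5 * superbee(r) * (1.0 - c) * u * (tau_i - tau_im1)
```

Note that psi = 0 (e.g. at an extremum, r < 0) recovers pure first order upwind, while psi = 1 recovers Lax-Wendroff, which is how the limiter trades accuracy for positivity.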
Third order direct space-time

The direct space-time method deals with space and time discretization together (other methods that treat space and time separately are known collectively as the "method of lines"). The Lax-Wendroff scheme falls into this category; it adds sufficient diffusion to a second order flux that the forward-in-time method is stable. The upwind biased third order DST scheme is:
where
The coefficients \(d_0\) and \(d_1\) approach \(1/3\) and \(1/6\) respectively as the Courant number, \(c\), vanishes. In this limit, the conventional third order upwind method is recovered. For finite Courant number, the deviations from the linear method are analogous to the diffusion added to centered second order advection in the LaxWendroff scheme.
The DST3 method described above must be used in a forward-in-time manner and is stable for \(0 \le c \le 1\). Although the scheme appears to be forward-in-time, it is in fact third order in time and the accuracy increases with the Courant number! For low Courant number, DST3 produces very similar results (indistinguishable in Figure 2.10) to the linear third order method, but for large Courant number, where the linear upwind third order method is unstable, the scheme is extremely accurate (Figure 2.11) with only minor overshoots.
S/R GAD_DST3_ADV_X
S/R GAD_DST3_ADV_Y
S/R GAD_DST3_ADV_R
Third order direct space-time with flux limiting
The overshoots in the DST3 method can be controlled with a flux limiter. The limited flux is written:
where
and the limiter is the Sweby limiter:
S/R GAD_DST3FL_ADV_X
S/R GAD_DST3FL_ADV_Y
S/R GAD_DST3FL_ADV_R
Multidimensional advection
In many of the aforementioned advection schemes the behavior in multiple dimensions is not necessarily as good as the one dimensional behavior. For instance, a shape preserving monotonic scheme in one dimension can have severe shape distortion in two dimensions if the two components of horizontal fluxes are treated independently. There is a large body of literature on the subject dealing with this problem and among the fixes are operator and flux splitting methods, corner flux methods, and more. We have adopted a variant on the standard splitting methods that allows the flux calculations to be implemented as if in one dimension:
In order to incorporate this method into the general model algorithm, we compute the effective tendency rather than update the tracer so that other terms such as diffusion are using the \(n\) timelevel and not the updated \(n+3/3\) quantities:
So that the overall time-stepping looks like:
S/R GAD_ADVECTION
A schematic of multi-dimensional time stepping for the cube sphere configuration is shown in Figure 2.9.
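The splitting idea, sweep in x, then advect the partially updated field in y, and return an effective tendency, can be sketched on a uniform periodic grid. This is a toy illustration of the method, not the GAD_ADVECTION code: it uses first order upwind fluxes (assuming positive velocities) and constant grid spacing for brevity.

```python
import numpy as np

def upwind_flux(vel, tau, axis):
    """First-order upwind face flux along `axis` (vel > 0, periodic)."""
    return vel * np.roll(tau, 1, axis=axis)

def split_advect_2d(tau, u, v, dx, dy, dt):
    """Directionally split 2-D advection on a uniform periodic grid.

    The tracer is updated in x first; the y sweep then acts on the
    x-updated field, as in the splitting described above.  The routine
    returns the effective tendency (tau_new - tau)/dt so a caller can
    combine it with other terms evaluated at time level n.
    """
    tau0 = tau.copy()
    Fx = upwind_flux(u, tau, axis=1)
    tau = tau - dt * (np.roll(Fx, -1, axis=1) - Fx) / dx
    Fy = upwind_flux(v, tau, axis=0)   # y sweep sees the x-updated field
    tau = tau - dt * (np.roll(Fy, -1, axis=0) - Fy) / dy
    return (tau - tau0) / dt
```

Because each sweep is written in flux form, the tendency sums to zero over the periodic domain, so the splitting does not break conservation.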
Comparison of advection schemes

Table 2.2 shows a summary of the different advection schemes available in MITgcm. "AB" stands for Adams-Bashforth and "DST" for direct space-time. The code corresponds to the number used to select the corresponding advection scheme in the parameter file (e.g., tempAdvScheme=3 in file data selects the 3rd order upwind advection scheme for temperature advection).
Advection Scheme | Code | Use AB? | Use multi-dim? | Stencil (1-D) | Comments
1st order upwind | 1 | no | yes* | 3 | linear \(\tau\), non-linear \(\vec{v}\)
centered 2nd order | 2 | yes | no | 3 | linear
3rd order upwind | 3 | yes | no | 5 | linear \(\tau\)
centered 4th order | 4 | yes | no | 5 | linear
2nd order DST (Lax-Wendroff) | 20 | no | yes* | 3 | linear \(\tau\), non-linear \(\vec{v}\)
3rd order DST | 30 | no | yes* | 5 | linear \(\tau\), non-linear \(\vec{v}\)
2nd order flux limiters | 77 | no | yes* | 5 | non-linear
3rd order DST flux limiter | 33 | no | yes* | 5 | non-linear
piecewise parabolic w/ "null" limiter | 40 | no | yes | 7 | non-linear
piecewise parabolic w/ "mono" limiter | 41 | no | yes | 7 | non-linear
piecewise parabolic w/ "weno" limiter | 42 | no | yes | 7 | non-linear
piecewise quartic w/ "null" limiter | 50 | no | yes | 9 | non-linear
piecewise quartic w/ "mono" limiter | 51 | no | yes | 9 | non-linear
piecewise quartic w/ "weno" limiter | 52 | no | yes | 9 | non-linear
7th order one-step method w/ monotonicity preserving limiter | 7 | no | yes | 9 | non-linear
second order-moment Prather | 80 | no | yes | 3 | non-linear
second order-moment Prather w/ limiter | 81 | no | yes | 3 | non-linear
yes* indicates that either the multi-dimensional advection algorithm or the standard approach can be utilized, controlled by the namelist parameter multiDimAdvection (in these cases, given that these schemes were designed to use multi-dimensional advection, using the standard approach is not recommended). The minimum size of the required tile overlap region (OLx, OLy) is (stencil size - 1)/2. The minimum overlap required by the model in general is 2, so for some of the above choices the advection scheme will not cost anything in terms of an additional overlap requirement, but especially given a small tile size, using scheme 7 for example would require costly additional overlap points (note a cube sphere grid with a "wet corner" requires doubling this overlap!). In the "Comments" column, \(\tau\) refers to tracer advection, \(\vec{v}\) to momentum advection.
Shown in Figure 2.10 and Figure 2.11 is a 1-D comparison of advection schemes. Here we advect both a smooth hill and a hill with a more abrupt shock. Figure 2.10 shows the result for a weak flow (low Courant number) whereas Figure 2.11 shows the result for a stronger flow (high Courant number).
Figure 2.12, Figure 2.13 and Figure 2.14 show solutions to a simple diagonal advection problem using a selection of schemes for low, moderate and high Courant numbers, respectively. The top row shows the linear schemes, integrated with the Adams-Bashforth method. These schemes are clearly unstable for the high Courant number and weakly unstable for the moderate Courant number. The presence of false extrema is very apparent for all Courant numbers. The middle row shows solutions obtained with the unlimited but multidimensional schemes. These solutions also exhibit false extrema, though the pattern now shows symmetry due to the multidimensional scheme. Also, the schemes are stable at high Courant number where the linear schemes weren't. The bottom row (left and middle) shows the limited schemes, and most obvious is the absence of false extrema. The accuracy and stability of the unlimited nonlinear schemes is retained at high Courant number, but at low Courant number the tendency is to lose amplitude in sharp peaks due to diffusion. The one dimensional tests shown in Figure 2.10 and Figure 2.11 show this phenomenon.
Finally, the bottom left and right panels use the same advection scheme but the right does not use the multidimensional method. At low Courant number this appears not to matter, but for moderate Courant number severe distortion of the feature is apparent. Moreover, the stability of the multidimensional scheme is determined by the maximum Courant number applied in each dimension, while the stability of the method of lines is determined by the sum. Hence, in the high Courant number plot, the scheme is unstable.
With many advection schemes implemented in the code, two questions arise: "Which scheme is best?" and "Why don't you just offer the best advection scheme?". Unfortunately, no one advection scheme is "the best" for all applications, and for new applications it is often a matter of trial to determine which is most suitable. Here are some guidelines, but these are not the rule:
- If you have a coarsely resolved model, using a positive or upwind biased scheme will introduce significant diffusion to the solution and using a centered higher order scheme will introduce more noise. In this case, simplest may be best.
- If you have a high resolution model, using a higher order scheme will give a more accurate solution, but scale-selective diffusion might need to be employed. The flux limited methods offer similar accuracy in this regime.
- If your solution has shocks or propagating fronts, then a flux limited scheme is almost essential.
- If your time-step is limited by advection, the multidimensional non-linear schemes have the most stability (up to Courant number 1).
- If you need to know how much diffusion/dissipation has occurred, you will have a lot of trouble figuring it out with a non-linear method.
- The presence of false extrema is non-physical and this alone is the strongest argument for using a positive scheme.
Shapiro Filter

The Shapiro filter (Shapiro 1970) [Sha70] is a high order horizontal filter that efficiently removes small scale grid noise without affecting the physical structures of a field. It is applied at the end of the time step on both velocity and tracer fields.

Three different space operators are considered here (S1, S2 and S4). They differ essentially in the sequence of derivatives in the X and Y directions, and consequently they show different damping response functions, especially in the diagonal directions X+Y and X-Y.

Space derivatives can be computed in real space, taking into account the grid spacing. Alternatively, a purely computational filter can be defined, using pure numerical differences and ignoring the grid spacing. This latter form is stable whatever the grid is, and is therefore especially useful for highly anisotropic grids such as spherical coordinate grids. A damping time-scale parameter \(\tau_{shap}\) defines the strength of the filter damping.

The three computational filter operators are:
In addition, the S2 operator can easily be extended to a physical space filter:
with the Laplacian operator \(\overline{\nabla}^2\) and a length scale parameter \(L_{shap}\). The stability of this S2g filter requires \(L_{shap} < \mathrm{Min}^{(Global)}(\Delta x,\Delta y)\).
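A 1-D computational-space analogue of these filters is easy to sketch: apply the scaled second difference \(n\) times and subtract a fraction \(\Delta t/\tau_{shap}\) of the result. This is an illustration of the idea only, assuming a periodic 1-D field; MITgcm's S1/S2/S4 operators differ in how the X and Y derivatives are sequenced.

```python
import numpy as np

def shapiro_1d(phi, n=2, dt_over_tau=1.0):
    """Computational-space Shapiro-type filter of order 2n (1-D, periodic).

    phi_filtered = [1 - (dt/tau_shap) (-delta^2/4)^n] phi, where delta^2
    is the plain second difference with no grid spacing.  A two-grid-point
    wave is damped at the rate dt/tau_shap while smooth fields are barely
    touched.  Sketch of the idea, not MITgcm's S1/S2/S4 operators.
    """
    correction = phi.copy()
    for _ in range(n):
        d2 = np.roll(correction, 1) - 2.0 * correction + np.roll(correction, -1)
        correction = -0.25 * d2
    return phi - dt_over_tau * correction
```

With dt_over_tau = 1 the pure two-grid-point wave is removed in a single application, while a constant field passes through unchanged, which is the scale selectivity the text describes.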
SHAP Diagnostics

<-Name->|Levs|<-parsing code->|<--  Units   -->|<- Tile (max=80c)
SHAP_dT |  5 |SM MR    |K/s     |Temperature Tendency due to Shapiro Filter
SHAP_dS |  5 |SM MR    |g/kg/s  |Specific Humidity Tendency due to Shapiro Filter
SHAP_dU |  5 |UU 148MR |m/s^2   |Zonal Wind Tendency due to Shapiro Filter
SHAP_dV |  5 |VV 147MR |m/s^2   |Meridional Wind Tendency due to Shapiro Filter
Nonlinear Viscosities for Large Eddy Simulation

In Large Eddy Simulations (LES), a turbulent closure needs to be provided that accounts for the effects of subgrid-scale motions on the large scale. With sufficiently powerful computers, we could resolve the entire flow down to the molecular viscosity scales (\(L_{\nu}\approx 1 \rm cm\)). Current computation allows perhaps four decades to be resolved, so the largest problem computationally feasible would be about 10 m. Most oceanographic problems are much larger in scale, so some form of LES is required, where only the largest scales of motion are resolved, and the subgrid-scale effects on the large scale are parameterized.

To formalize this process, we can introduce a filter over the subgrid-scale L: \(u_\alpha\rightarrow \overline{u_\alpha}\) and \(b\rightarrow \overline{b}\). This filter has some intrinsic length and time scales, and we assume that the flow at that scale can be characterized with a single velocity scale (\(V\)) and vertical buoyancy gradient (\(N^2\)). The filtered equations of motion in a local Mercator projection about the gridpoint in question (see Appendix for notation and details of approximation) are:
Tildes denote multiplication by \(\cos\theta/\cos\theta_0\) to account for converging meridians.
The ocean is usually turbulent, and an operational definition of turbulence is that the terms in parentheses (the "eddy" terms) on the right of (2.152) - (2.155) are of comparable magnitude to the terms on the left-hand side. The terms proportional to the inverse of the Reynolds number, instead, are many orders of magnitude smaller than all of the other terms in virtually every oceanic application.
Eddy Viscosity

A turbulent closure provides an approximation to the "eddy" terms on the right of the preceding equations. The simplest form of LES is just to increase the viscosity and diffusivity until the viscous and diffusive scales are resolved. That is, we approximate (2.152) - (2.155):
Reynolds-Number Limited Eddy Viscosity

One way of ensuring that the gridscale is sufficiently viscous (i.e., resolved) is to choose the eddy viscosity \(A_h\) so that the gridscale horizontal Reynolds number based on this eddy viscosity, \({\rm Re}_h\), is O(1). That is, if the gridscale is to be viscous, then the viscosity should be chosen to make the viscous terms as large as the advective ones. Bryan et al. (1975) [BMP75] note that a computational mode is squelched by using \({\rm Re}_h < 2\).

MITgcm users can select horizontal eddy viscosities based on \({\rm Re}_h\) using two methods. 1) The user may estimate the velocity scale expected from the calculation and the grid spacing, and set viscAh to satisfy \({\rm Re}_h < 2\). 2) The user may use viscAhReMax, which ensures that the viscosity is always chosen so that \({\rm Re}_h <\) viscAhReMax. This last option should be used with caution, however, since it effectively implies that viscous terms are fixed in magnitude relative to advective terms. While it may be a useful method for specifying a minimum viscosity with little effort, tests (Bryan et al. 1975 [BMP75]) have shown that setting viscAhReMax = 2 often tends to increase the viscosity substantially over other more "physical" parameterizations below, especially in regions where gradients of velocity are small (and thus turbulence may be weak), so perhaps a more liberal value should be used, e.g. viscAhReMax = 10.
While it is certainly necessary that viscosity be active at the gridscale, the wavelength where dissipation of energy or enstrophy occurs is not necessarily \(L=A_h/U\). In fact, it is by ensuring that either the dissipation of energy in a 3d turbulent cascade (Smagorinsky) or dissipation of enstrophy in a 2d turbulent cascade (Leith) is resolved that these parameterizations derive their physical meaning.
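Method 1 above, choosing the viscosity from an expected velocity scale and the grid spacing, amounts to one line of algebra. The function below is an illustrative helper, not an MITgcm routine.

```python
def visc_from_reynolds(u_scale, dx, re_max=2.0):
    """Eddy viscosity that caps the grid-scale Reynolds number:

        Re_h = U dx / A_h <= re_max   =>   A_h >= U dx / re_max

    u_scale is the expected velocity scale (m/s), dx the grid spacing
    (m).  Illustrative sketch of the estimate described above.
    """
    return u_scale * dx / re_max
```

For example, U ~ 1 m/s on a 100 km grid with re_max = 2 gives \(A_h = 5\times10^4\,m^2/s\), which is why coarse models need such large (and diffusive) viscosities.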
Vertical Eddy Viscosities

Vertical eddy viscosities are often chosen in a more subjective way, as model stability is not usually as sensitive to vertical viscosity. Usually the "observed" value from finescale measurements is used (e.g. viscAr \(\approx 1\times10^{-4}\,m^2/s\)). However, Smagorinsky (1993) [Sma93] notes that the Smagorinsky parameterization of isotropic turbulence implies a value of the vertical viscosity as well as the horizontal viscosity (see below).
Smagorinsky Viscosity

Some suggest (see Smagorinsky 1963 [Sma63]; Smagorinsky 1993 [Sma93]) choosing a viscosity that depends on the resolved motions. Thus, the overall viscous operator has a nonlinear dependence on velocity. Smagorinsky chose his form of viscosity by considering Kolmogorov's ideas about the energy spectrum of 3-d isotropic turbulence.

Kolmogorov supposed that energy is injected into the flow at large scales (small \(k\)) and is "cascaded" or transferred conservatively by nonlinear processes to smaller and smaller scales until it is dissipated near the viscous scale. By setting the energy flux through a particular wavenumber \(k\), \(\epsilon\), to be a constant in \(k\), there is only one combination of viscosity and energy flux that has the units of length, the Kolmogorov wavelength. It is \(L_\epsilon(\nu)\propto\pi\epsilon^{-1/4}\nu^{3/4}\) (the \(\pi\) stems from conversion from wavenumber to wavelength). To ensure that this viscous scale is resolved in a numerical model, the gridscale should be decreased until \(L_\epsilon(\nu)>L\) (so-called Direct Numerical Simulation, or DNS). Alternatively, an eddy viscosity can be used and the corresponding Kolmogorov length can be made larger than the gridscale, \(L_\epsilon(A_h)\propto\pi\epsilon^{-1/4}A_h^{3/4}\) (for Large Eddy Simulation or LES).
There are two methods of ensuring that the Kolmogorov length is resolved in MITgcm. 1) The user can estimate the flux of energy through spectral space for a given simulation and adjust grid spacing or viscAh to ensure that \(L_\epsilon(A_h)>L\); 2) The user may use the approach of Smagorinsky with viscC2Smag, which estimates the energy flux at every grid point, and adjusts the viscosity accordingly.
Smagorinsky formed the energy equation from the momentum equations by dotting them with velocity. There are some complications when using the hydrostatic approximation, as described by Smagorinsky (1993) [Sma93]. The positive definite energy dissipation by horizontal viscosity in a hydrostatic flow is \(\nu D^2\), where D is the deformation rate at the viscous scale. According to Kolmogorov's theory, this should be a good approximation to the energy flux at any wavenumber \(\epsilon\approx\nu D^2\). Kolmogorov and Smagorinsky noted that using an eddy viscosity that exceeds the molecular value \(\nu\) should ensure that the energy flux through the viscous scale set by the eddy viscosity is the same as it would have been had we resolved all the way to the true viscous scale. That is, \(\epsilon\approx A_{hSmag} \overline D^2\). If we use this approximation to estimate the Kolmogorov viscous length, then
To make \(L_\epsilon(A_{hSmag})\) scale with the grid scale, then
where the deformation rate appropriate for hydrostatic flows with shallow-water scaling is
The coefficient viscC2Smag is what an MITgcm user sets, and it replaces the proportionality in the Kolmogorov length with an equality. Others (Griffies and Hallberg, 2000 [GH00]) suggest values of viscC2Smag from 2.2 to 4 for oceanic problems. Smagorinsky (1993) [Sma93] shows that values from 0.2 to 0.9 have been used in atmospheric modeling.
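To see the orders of magnitude involved, here is a minimal sketch of a Smagorinsky-type viscosity, assuming the scaling \(A \sim (\mathrm{viscC2Smag}/\pi)^2 L^2 |D|\) implied by the discussion above (the precise deformation-rate definition MITgcm uses is the one given earlier); the numbers are purely illustrative:

```python
import math

def smag_viscosity(viscC2Smag, L, D):
    # assumed Smagorinsky-type scaling: A ~ (C/pi)^2 * L^2 * |D|
    # units check: m^2 * (1/s) = m^2/s, as required for a viscosity
    return (viscC2Smag / math.pi)**2 * L**2 * abs(D)

# illustrative values: C = 3, 10 km grid, deformation rate 1e-6 s^-1
A = smag_viscosity(3.0, 1e4, 1e-6)  # order 100 m^2/s
```

Note the quadratic dependence on the grid scale: halving the grid spacing reduces the viscosity fourfold at fixed deformation rate.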
Smagorinsky (1993) [Sma93] shows that a corresponding vertical viscosity should be used:
This vertical viscosity is currently not implemented in MITgcm.
Leith Viscosity
Leith (1968, 1996) [Lei68] [Lei96] notes that 2D turbulence is quite different from 3D. In two-dimensional turbulence, energy cascades to larger scales, so there is no concern about resolving the scales of energy dissipation. Instead, another quantity, enstrophy (the square of the vertical component of vorticity), is conserved in 2D turbulence, and it cascades to smaller scales where it is dissipated.
Following a similar argument to that above about energy flux, the enstrophy flux is estimated to be equal to the positive-definite grid-scale dissipation rate of enstrophy, \(\eta\approx A_{hLeith} |\nabla\overline \omega_3|^2\). By dimensional analysis, the enstrophy-dissipation scale is \(L_\eta(A_{hLeith})\propto\pi A_{hLeith}^{1/2}\eta^{-1/6}\). Thus, the Leith-estimated length scale of enstrophy dissipation and the resulting eddy viscosity are
The runtime flag useFullLeith controls whether or not to calculate the full gradients for the Leith viscosity (.TRUE.) or to use an approximation (.FALSE.). The only reason to set useFullLeith = .FALSE. is if your simulation fails when computing the gradients. This can occur when using the cubed sphere and other complex grids.
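By analogy with the Smagorinsky case, a Leith-type viscosity follows the cubic scaling \(A \sim (\mathrm{viscC2Leith}/\pi)^3 L^3 |\nabla \overline\omega_3|\) suggested by the dimensional analysis above. A minimal sketch, with purely illustrative numbers:

```python
import math

def leith_viscosity(viscC2Leith, L, vort_grad):
    # assumed Leith-type scaling: A ~ (C/pi)^3 * L^3 * |grad omega_3|
    # units check: m^3 * (1/(m s)) = m^2/s
    return (viscC2Leith / math.pi)**3 * L**3 * abs(vort_grad)

# illustrative values: C = 2, 10 km grid, vorticity gradient 1e-10 m^-1 s^-1
A = leith_viscosity(2.0, 1e4, 1e-10)  # order 10 m^2/s
```

The cubic dependence on \(L\) makes this form even more scale-selective than the Smagorinsky scaling.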
Modified Leith Viscosity
The argument above for the Leith viscosity parameterization uses concepts from purely two-dimensional turbulence, where the horizontal flow field is assumed to be non-divergent. However, oceanic flows are only quasi-two-dimensional. While the barotropic flow, or the flow within isopycnal layers, may behave nearly as two-dimensional turbulence, there is a possibility that these flows will be divergent. In a high-resolution numerical model, these flows may be substantially divergent near the grid scale, and in fact, numerical instabilities exist which are only horizontally divergent and have little vertical vorticity. This causes a difficulty with the Leith viscosity, which can only respond to a build-up of vorticity at the grid scale.
MITgcm offers two options for dealing with this problem. 1) The Smagorinsky viscosity can be used instead of Leith, or in conjunction with Leith (a purely divergent flow does cause an increase in Smagorinsky viscosity); 2) The viscC2LeithD parameter can be set. This is a damping that specifically targets purely divergent instabilities near the grid scale. The combined viscosity has the form:
Whether there is any physical rationale for this correction is unclear, but the numerical consequences are good. The divergence in flows with the grid scale larger than or comparable to the Rossby radius is typically much smaller than the vorticity, so this adjustment only rarely changes the viscosity if viscC2LeithD = viscC2Leith. However, the rare regions where this viscosity acts are often the locations of the largest values of vertical velocity in the domain. Since the CFL condition on vertical velocity is often what sets the maximum timestep, this viscosity may substantially increase the allowable timestep without severely compromising the fidelity of the simulation. Tests have shown that in some calculations, a timestep three times larger was allowed when viscC2LeithD = viscC2Leith.
Quasi-Geostrophic Leith Viscosity
A variant of Leith viscosity can be derived for quasi-geostrophic dynamics. This leads to a slightly different equation for the viscosity that includes a contribution from quasi-geostrophic vortex stretching (Bachman et al. 2017 [BFKP17]). The viscosity is given by
where \(\Lambda\) is a tunable parameter of \(\mathcal{O}(1)\), \(\Delta s = \sqrt{\Delta x \Delta y}\) is the grid scale, \(f\mathbf{\hat{z}}\) is the vertical component of the Coriolis parameter, \(\mathbf{v}_{h*}\) is the horizontal velocity, \(N^{2}\) is the Brunt-Väisälä frequency, and \(b\) is the buoyancy.
However, the viscosity given by (2.170) does not constrain purely divergent motions. As such, a small \(\mathcal{O}(\epsilon)\) correction is added
This form is, however, numerically awkward; as the Brunt-Väisälä frequency becomes very small in regions of weak or vanishing stratification, the vortex stretching term becomes very large. The resulting large viscosities can lead to numerical instabilities. Bachman et al. (2017) [BFKP17] present two limiting forms for the viscosity based on flow parameters such as \(Fr_{*}\), the Froude number, and \(Ro_{*}\), the Rossby number. The second of these,
has been implemented and is active when #define ALLOW_LEITH_QG is included in a copy of MOM_COMMON_OPTIONS.h in a code mods directory (specified through the -mods command line option in genmake2).
LeithQG viscosity is designed to work best in simulations that resolve some mesoscale features. In simulations that are too coarse to permit eddies or fine enough to resolve submesoscale features, it should fail gracefully. The nondimensional parameter viscC2LeithQG corresponds to \(\Lambda\) in the above equations and scales the viscosity; the recommended value is 1.
There is no reason to use the quasigeostrophic form of Leith at the same time as either standard Leith or modified Leith. Therefore, the model will not run if nonzero values have been set for these coefficients; the model will stop during the configuration check. LeithQG can be used regardless of the setting for useFullLeith. Just as for the other forms of Leith viscosity, this flag determines whether or not the full gradients are used. The simplified gradients were originally intended for use on complex grids, but have been shown to produce better kinetic energy spectra even on very straightforward grids.
To add the LeithQG viscosity to the GMRedi coefficient, as was done in some of the simulations in Bachman et al. (2017) [BFKP17], #define ALLOW_LEITH_QG must be specified, as described above. In addition to this, the compile-time flag ALLOW_GM_LEITH_QG must also be defined in a (-mods) copy of GMREDI_OPTIONS.h when the model is compiled, and the runtime parameter GM_useLeithQG set to .TRUE. in data.gmredi. This will use the value of viscC2LeithQG specified in the data input file to compute the coefficient.
Courant-Friedrichs-Lewy Constraint on Viscosity
Whatever viscosities are used in the model, the choice is constrained by grid scale and timestep by the Courant-Friedrichs-Lewy (CFL) constraint on stability:
The viscosities may be automatically limited to be no greater than these values in MITgcm by specifying viscAhGridMax \(<1\) and viscA4GridMax \(<1\). Similarly scaled minimum values of viscosities are provided by viscAhGridMin and viscA4GridMin, which if used, should be set to values \(\ll 1\). \(L\) is roughly the grid scale (see below).
Following Griffies and Hallberg (2000) [GH00], we note that there is a factor of \(\Delta x^2/8\) difference between the harmonic and biharmonic viscosities. Thus, whenever a nondimensional harmonic coefficient is used in the MITgcm (e.g. viscAhGridMax \(<1\)), the biharmonic equivalent is scaled so that the same nondimensional value can be used (e.g. viscA4GridMax \(<1\)).
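The relationship between the two limits can be sketched numerically, assuming the commonly quoted CFL-type bounds \(A_h < L^2/(4\Delta t)\) and \(A_4 < L^4/(32\Delta t)\) (hypothetical forms for illustration, not MITgcm's exact code); the ratio between them is exactly the \(\Delta x^2/8\) factor noted above:

```python
def grid_visc_limits(L, dt):
    # assumed CFL-type stability bounds (illustrative only)
    Ah_max = L**2 / (4.0 * dt)    # harmonic (Laplacian) viscosity limit
    A4_max = L**4 / (32.0 * dt)   # biharmonic viscosity limit
    return Ah_max, A4_max

# 10 km grid, 20 minute timestep
Ah_max, A4_max = grid_visc_limits(1e4, 1200.0)
# the two limits differ by the factor L**2/8, independent of dt
```

Setting viscAhGridMax and viscA4GridMax to the same non-dimensional value therefore imposes comparably scaled caps on both operators.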
Biharmonic Viscosity
Holland (1978) [Hol78] suggested that eddy viscosities ought to be focused on the dynamics at the grid scale, as larger motions would be "resolved". To enhance the scale selectivity of the viscous operator, he suggested a biharmonic eddy viscosity instead of a harmonic (or Laplacian) viscosity:
Griffies and Hallberg (2000) [GH00] propose that if one scales the biharmonic viscosity by stability considerations, then the biharmonic viscous terms will be similarly active to harmonic viscous terms at the gridscale of the model, but much less active on larger scale motions. Similarly, a biharmonic diffusivity can be used for less diffusive flows.
In practice, biharmonic viscosity and diffusivity allow a less viscous, yet numerically stable, simulation than harmonic viscosity and diffusivity. However, there is no physical rationale for such operators being of leading order, and more boundary conditions must be specified than for the harmonic operators. If one considers the approximations of (2.157)-(2.160) and (2.173)-(2.176) to be terms in the Taylor series expansions of the eddy terms as functions of the large-scale gradient, then one can argue that both harmonic and biharmonic terms would occur in the series, and the only question is the choice of coefficients. Using biharmonic viscosity alone implies that one zeros the first non-vanishing term in the Taylor series, which is unsupported by any fluid theory or observation.
Nonetheless, MITgcm supports a plethora of biharmonic viscosities and diffusivities, which are controlled with parameters named similarly to the harmonic viscosities and diffusivities with the substitution h \(\rightarrow 4\) in the MITgcm parameter name. MITgcm also supports biharmonic Leith and Smagorinsky viscosities:
However, it should be noted that unlike the harmonic forms, the biharmonic scaling does not easily relate to whether energy-dissipation or enstrophy-dissipation scales are resolved. If similar arguments are used to estimate these scales and scale them to the grid scale, the resulting biharmonic viscosities should be:
Thus, the biharmonic scaling suggested by Griffies and Hallberg (2000) [GH00] implies:
It is not at all clear that these assumptions ought to hold. Only the Griffies and Hallberg (2000) [GH00] forms are currently implemented in MITgcm.
Selection of Length Scale
Above, the length scale of the grid has been denoted \(L\). However, in strongly anisotropic grids, \(L_x\) and \(L_y\) will be quite different in some locations. In that case, the CFL condition suggests that the minimum of \(L_x\) and \(L_y\) be used. On the other hand, other viscosities which involve whether a particular wavelength is āresolvedā might be better suited to use the maximum of \(L_x\) and \(L_y\). Currently, MITgcm uses useAreaViscLength to select between two options. If false, the square root of the harmonic mean of \(L^2_x\) and \(L^2_y\) is used for all viscosities, which is closer to the minimum and occurs naturally in the CFL constraint. If useAreaViscLength is true, then the square root of the area of the grid cell is used.
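The two length-scale options can be sketched directly from the definitions above (useAreaViscLength here is a plain Python flag mirroring the MITgcm parameter of the same name):

```python
import math

def visc_length(Lx, Ly, useAreaViscLength=False):
    if useAreaViscLength:
        # square root of the grid-cell area
        return math.sqrt(Lx * Ly)
    # square root of the harmonic mean of Lx^2 and Ly^2 (closer to the minimum)
    return math.sqrt(2.0 / (1.0 / Lx**2 + 1.0 / Ly**2))

# for an anisotropic cell the harmonic-mean form stays nearer min(Lx, Ly)
L_harm = visc_length(1e4, 2e4)        # ~12.6 km
L_area = visc_length(1e4, 2e4, True)  # ~14.1 km
```

For an isotropic cell (\(L_x = L_y\)) both choices reduce to the same value.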
Mercator, Nondimensional Equations
The rotating, incompressible, Boussinesq equations of motion (Gill, 1982) [Gil82] on a sphere can be written in Mercator projection about a latitude \(\theta_0\) and geopotential height \(z=r-r_0\). The nondimensional form of these equations is:
Where
Dimensional variables are denoted by an asterisk where necessary. If we filter over a grid scale typical for ocean models:
these equations are very well approximated by
Neglecting the non-frictional terms on the right-hand side is usually called the "traditional" approximation. It is appropriate either for large aspect ratio or far from the tropics. This approximation is used here, as it does not affect the form of the eddy stresses, which are the main topic. The frictional terms are preserved in this approximate form for later comparison with eddy stresses.
Getting Started with MITgcm
This chapter is divided into two main parts. The first part, covered in Section 3.1 through Section 3.6, contains information about how to download, build and run MITgcm. We believe the best way to familiarize yourself with the model is to run one of the tutorial examples provided in the MITgcm repository (see Section 4), so we suggest newer MITgcm users jump there following a read-through of the first part of this chapter. The second part (Section 3.8) provides information on how to customize the code when you are ready to try implementing the configuration you have in mind. The code and algorithm are described more fully in Section 2 and Section 6 and chapters thereafter.
In this chapter and others (e.g., chapter Contributing to the MITgcm), for arguments where the user is expected to replace the text with a user-chosen name, userid, etc., our convention is to show these as uppercase text surrounded by « », such as «USER_MUST_REPLACE_TEXT_HERE». The « and » characters are NOT typed when the text is replaced.
Where to find information
There is a web-archived support mailing list for the model that you can email at MITgcm-support@mitgcm.org once you have subscribed.
To sign up (subscribe) for the mailing list (highly recommended), click here
To browse through the support archive, click here
Obtaining the code
The MITgcm code and documentation are under continuous development and we generally recommend that one downloads the latest version of the code. You will need to decide if you want to work in a "git-aware" environment (Method 1) or with a one-time "stagnant" download (Method 2). We generally recommend Method 1, as it is more flexible and allows your version of the code to be regularly updated as MITgcm developers check in bug fixes and new features. However, this typically requires at minimum a rudimentary understanding of git in order to make it worth one's while.
Periodically we release an official checkpoint (or "tag"). We recommend one download the latest code, unless there are reasons for obtaining a specific checkpoint (e.g. duplicating older results, collaborating with someone using an older release, etc.)
Method 1
This section describes how to download git-aware copies of the repository. In a terminal window, cd to the directory where you want your code to reside. Type:
% git clone https://github.com/MITgcm/MITgcm.git
This will download the latest available code. If you now want to revert this code to a specific checkpoint release, first cd into the MITgcm directory you just downloaded, then type git checkout checkpoint«XXX» where «XXX» is the checkpoint version.
Alternatively, if you prefer to use ssh keys (say, for example, you have a firewall which won't allow an https download), type:
% git clone git@github.com:MITgcm/MITgcm.git
You will need a GitHub account for this, and will have to generate an ssh key through your GitHub account user settings.
The fully git-aware download is over several hundred MB, which is considerable if one has limited internet download speed. In comparison, the one-time download zip file (Method 2, below) is order 100 MB. However, one can obtain a truncated, yet still git-aware copy of the current code by adding the option --depth=1 to the git clone command above; all files will be present, but it will not include the full git history. However, the repository can be updated going forward.
Method 2
This section describes how to do a one-time download of MITgcm, NOT git-aware. In a terminal window, cd to the directory where you want your code to reside. To obtain the current code, type:
% wget https://github.com/MITgcm/MITgcm/archive/master.zip
For specific checkpoint release «XXX», instead type:
% wget https://github.com/MITgcm/MITgcm/archive/checkpoint«XXX».zip
Updating the code
There are several different approaches one can use to obtain updates to MITgcm; which is best for you depends a bit on how you intend to use MITgcm and your knowledge of git (and/or willingness to learn). Below we outline three suggested update pathways:
 Fresh Download of MITgcm
This approach is the simplest, and virtually foolproof. Whether you downloaded the code from a static zip file (Method 2) or used the git clone command (Method 1), create a new directory and repeat this procedure to download a current copy of MITgcm. Say, for example, you are starting a new research project; this would be a great time to grab the most recent code repository and keep this new work entirely separate from any past simulations. This approach requires no understanding of git, and you are free to make changes to any files in the MITgcm repo tree (although we generally recommend that you avoid doing so, instead working in new subdirectories or on separate scratch disks as described here, for example).
 Using git pull to update the (unmodified) MITgcm repo tree
If you have downloaded the code through a git clone command (Method 1 above), you can incorporate any changes to the source code (including any changes to any files in the MITgcm repository, new packages or analysis routines, etc.) that may have occurred since your original download. There is a simple command to bring all code in the repository to a "current release" state. From the MITgcm top directory or any of its subdirectories, type:
% git pull
and all files will be updated to match the current state of the code repository, as it exists at GitHub. (Note: if you plan to contribute to MITgcm and followed the steps to download the code as described in Section 5, you will need to type git pull upstream instead.)
This update pathway is ideal if you are in the midst of a project and you want to incorporate new MITgcm features into your executable(s), or take advantage of recently added analysis utilities, etc. After the git pull, any changes in model source code and include files will be updated, so you can repeat the build procedure (Section 3.5) and you will include all these new features in your new executable.
Be forewarned, this will only work if you have not modified ANY of the files in the MITgcm repository (adding new files is ok; also, all verification run subdirectories build and run are ignored by git). If you have modified files and the git pull fails with errors, there is no easy fix other than to learn something about git (continue reading...)
 Fully embracing the power of git!
Git offers many tools to help organize and track changes in your work. For example, one might keep separate projects on different branches, and update the code separately (using git pull) on these separate branches. You can even make changes to code in the MITgcm repo tree; when git then tries to update code from upstream (see Figure 5.1), it will notify you about possible conflicts and even merge the code changes together if it can. You can also use git commit to help you track what you are modifying in your simulations over time. If you're planning to submit a pull request to include your changes, you should read the contributing guide in Section 5, and we suggest you do this model development in a separate, fresh copy of the code. See Section 5.2 for more information on how to use git effectively to manage your workflow.
Model and directory structure
The "numerical" model is contained within an execution environment support wrapper. This wrapper is designed to provide a general framework for grid-point models; MITgcm is a specific numerical model that makes use of this framework (see Section 6.2 for additional detail). Under this structure, the model is split into execution environment support code and conventional numerical model code. The execution environment support code is held under the eesupp directory. The grid-point model code is held under the model directory. Code execution actually starts in the eesupp routines and not in the model routines. For this reason the top-level main.F is in the eesupp/src directory. In general, end-users should not need to worry about the wrapper support code. The top-level routine for the numerical part of the code is in model/src/the_model_main.F. Here is a brief description of the directory structure of the model under the root tree.
 model: this directory contains the main source code. Also subdivided into two subdirectories: model/inc (include files) and model/src (source code).
 eesupp: contains the execution environment source code. Also subdivided into two subdirectories: eesupp/inc and eesupp/src.
 pkg: contains the source code for the packages. Each package corresponds to a subdirectory. For example, pkg/gmredi contains the code related to the Gent-McWilliams/Redi scheme, pkg/seaice the code for a dynamic sea ice model which can be coupled to the ocean model. The packages are described in detail in Section 8 and Section 9.
 doc: contains MITgcm documentation in reStructured Text (rst) format.
 tools: this directory contains various useful tools. For example, genmake2 is a script written in bash that should be used to generate your makefile. The subdirectory tools/build_options contains "optfiles" with the compiler options for many different compilers and machines that can run MITgcm (see Section 3.5.2.2). This directory also contains subdirectories tools/adjoint_options and tools/OAD_support that are used to generate the tangent linear and adjoint model (see details in Section 7).
 utils: this directory contains various utilities. The utils/matlab subdirectory contains matlab scripts for reading model output directly into matlab. The subdirectory utils/python contains similar routines for python. utils/scripts contains C-shell post-processing scripts for joining processor-based and tile-based model output.
 verification: this directory contains the model examples. See Section 4.
 jobs: contains sample job scripts for running MITgcm.
 lsopt: Line search code used for optimization.
 optim: Interface between MITgcm and line search code.
Building the model
Quickstart Guide
To compile the code, we use the make program. This uses a file (Makefile) that allows us to preprocess source files, specify compiler and optimization options, and also figure out any file dependencies. We supply a script (genmake2), described in Section 3.5.2, that automatically generates the Makefile for you. You then need to build the dependencies and compile the code (Section 3.5.3).
As an example, assume that you want to build and run experiment verification/exp2. Let's build the code in verification/exp2/build:
% cd verification/exp2/build
First, generate the Makefile:
% ../../../tools/genmake2 -mods ../code -optfile «/PATH/TO/OPTFILE»
The -mods command line option tells genmake2 to override model source code with any files in the subdirectory ../code (here, you need to configure the size of the model domain by overriding MITgcm's default SIZE.h with an edited copy ../code/SIZE.h containing the specific domain size for exp2). The -optfile command line option tells genmake2 to run the specified optfile, a bash shell script, during genmake2's execution. An optfile typically contains definitions of environment variables, paths, compiler options, and anything else that needs to be set in order to compile on your local computer system or cluster with your specific Fortran compiler. As an example, we might replace «/PATH/TO/OPTFILE» with ../../../tools/build_options/linux_amd64_ifort11 for use with the Intel Fortran compiler (version 11 and above) on a linux x86_64 platform. This and many other optfiles for common systems and Fortran compilers are located in tools/build_options.
-mods, -optfile, and many additional genmake2 command line options are described more fully in Section 3.5.2.1. Detailed instructions on building with MPI are given in Section 3.5.4.
Once a Makefile has been generated, we create the dependencies with the command:
% make depend
It is important to note that the make depend stage will occasionally produce warnings or errors if the dependency parsing tool is unable to find all of the necessary header files (e.g., netcdf.inc, or worse, say it cannot find a Fortran compiler in your path). In some cases you may need to obtain help from your system administrator to locate these files.
Next, one can compile the code using:
% make
Assuming no errors occurred, the make command creates an executable called mitgcmuv.
Now you are ready to run the model. General instructions for doing so are given in section Section 3.6.
Generating a Makefile using genmake2
A shell script called genmake2 for generating a Makefile is included as part of MITgcm. Typically genmake2 is used in a sequence of steps as shown below:
% ../../../tools/genmake2 -mods ../code -optfile «/PATH/TO/OPTFILE»
% make depend
% make
The first step above creates a unix-style Makefile. The Makefile is used by make to specify how to compile the MITgcm source files (for more detailed descriptions of what the make tools are, and how they are used, see here).
This section describes details and capabilities of genmake2, located in the tools directory. genmake2 is a shell script written to work in bash (and with all "sh"-compatible shells including Bourne shells). Like many unix tools, there is a help option that is invoked through genmake2 -h.
genmake2 parses information from the following sources, in this order:
 Command-line options (see Section 3.5.2.1)
 A genmake_local file if one is found in the current directory. This is a bash shell script that is executed prior to the optfile (see step #3), used in some special model configurations and/or to set some options that can affect which lines of the optfile are executed. For example, this genmake_local file is required for a special setup, building a "MITgcm coupler" executable; in a more typical setup, one will not require a genmake_local file.
 An "options file" a.k.a. optfile (a bash shell script) specified by the command-line option -optfile «/PATH/TO/OPTFILE», as mentioned briefly in Section 3.5.1 and described in detail in Section 3.5.2.2.
 A packages.conf file (if one is found) with the specific list of packages to compile (see Section 8.1.1). The search path for file packages.conf is first the current directory, and then each of the -mods directories in the given order (as described here).
When you run the genmake2 script, typical output might be as follows:
% ../../../tools/genmake2 -mods ../code -optfile ../../../tools/build_options/linux_amd64_gfortran
GENMAKE :
A program for GENerating MAKEfiles for the MITgcm project.
For a quick list of options, use "genmake2 -h"
or for more detail see the documentation, section "Building the model"
(under "Getting Started") at: https://mitgcm.readthedocs.io/
=== Processing options files and arguments ===
getting local config information: none found
Warning: ROOTDIR was not specified ; try using a local copy of MITgcm found at "../../.."
getting OPTFILE information:
using OPTFILE="../../../tools/build_options/linux_amd64_gfortran"
getting AD_OPTFILE information:
using AD_OPTFILE="../../../tools/adjoint_options/adjoint_default"
check Fortran Compiler... pass (set FC_CHECK=5/5)
check makedepend (local: 0, system: 1, 1)
=== Checking system libraries ===
Do we have the system() command using gfortran... yes
Do we have the fdate() command using gfortran... yes
Do we have the etime() command using gfortran... c,r: yes (SbR)
Can we call simple C routines (here, "cloc()") using gfortran... yes
Can we unlimit the stack size using gfortran... yes
Can we register a signal handler using gfortran... yes
Can we use stat() through C calls... yes
Can we create NetCDFenabled binaries... yes
skip check for LAPACK Libs
Can we call FLUSH intrinsic subroutine... yes
=== Setting defaults ===
Adding MODS directories: ../code
Making source files in eesupp from templates
Making source files in pkg/exch2 from templates
Making source files in pkg/regrid from templates
=== Determining package settings ===
getting package dependency info from ../../../pkg/pkg_depend
getting package groups info from ../../../pkg/pkg_groups
checking list of packages to compile:
using PKG_LIST="../code/packages.conf"
before group expansion packages are: oceanic kpp gmredi cd_code
replacing "oceanic" with: gfd gmredi kpp
replacing "gfd" with: mom_common mom_fluxform mom_vecinv generic_advdiff debug mdsio rw monitor
after group expansion packages are: mom_common mom_fluxform mom_vecinv generic_advdiff debug mdsio rw monitor gmredi kpp kpp gmredi cd_code
applying DISABLE settings
applying ENABLE settings
packages are: cd_code debug generic_advdiff mdsio mom_common mom_fluxform mom_vecinv monitor rw
applying package dependency rules
packages are: cd_code debug generic_advdiff mdsio mom_common mom_fluxform mom_vecinv monitor rw
Adding STANDARDDIRS='eesupp model'
Searching for *OPTIONS.h files in order to warn about the presence
of "#define "type statements that are no longer allowed:
found CPP_EEOPTIONS="../../../eesupp/inc/CPP_EEOPTIONS.h"
found CPP_OPTIONS="../../../model/inc/CPP_OPTIONS.h"
Creating the list of files for the adjoint compiler.
=== Creating the Makefile ===
setting INCLUDES
Determining the list of source and include files
Writing makefile: Makefile
Add the source list for AD code generation
Making list of "exceptions" that need ".p" files
Making list of NOOPTFILES
Add rules for links
Adding makedepend marker
=== Done ===
original 'Makefile' generated successfully
=> next steps:
> make depend
> make (< to generate executable)
In the above, notice:
 we did not specify ROOTDIR, i.e., a path to your MITgcm repository, but here we are building code from within the repository (specifically, in one of the verification subdirectory experiments). As such, genmake2 was smart enough to locate all necessary files on its own. To specify a remote ROOTDIR, see here.
, see here.  we specified the optfile linux_amd64_gfortran based on the computer system and Fortran compiler we used (here, a linux 64bit machine with gfortran installed).
 genmake2 did some simple checking on availability of certain system libraries; all were found (except LAPACK, which was not checked since it is not needed here). NetCDF only requires a "yes" if you want to write netCDF output; more specifically, a "no" response to "Can we create NetCDF-enabled binaries" will disable including pkg/mnc and switch to output plain binary files. While the makefile can still be built with other "no" responses, sometimes this will foretell errors during the make depend or make commands.
 any .F or .h files in the mods directory ../code will also be compiled, overriding any MITgcm repository versions of files, if they exist.
 a handful of packages are being used in this build; see Section 8.1.1 for more detail about how to enable and disable packages.
 genmake2 terminated without error (note output at end after === Done ===), generating Makefile and a log file genmake.log. As mentioned, this does not guarantee that your setup will compile properly, but if there are errors during make depend or make, these error messages and/or the standard output from genmake2 or genmake.log may provide clues as to the problem. If instead genmake2 finishes with a warning message Warning: FORTRAN compiler test failed, this means that genmake2 is unable to locate the Fortran compiler or pass a trivial "hello world" Fortran compilation test. In this case, you should see genmake.log for errors and/or seek assistance from your system administrator; these tests need to pass in order to proceed to the make steps.
Command-line options:
genmake2 supports a number of helpful commandline options. A complete list of these options can be obtained by:
% genmake2 -h
The most important commandline options are:
-optfile «/PATH/TO/OPTFILE»
(or shorter: -of) specifies the optfile that should be used for a particular build. If no optfile is specified through the command line, genmake2 will try to make a reasonable guess from the list provided in tools/build_options. The method used for making this guess is to first determine the combination of operating system and hardware and then find a working Fortran compiler within the user's path. When these three items have been identified, genmake2 will try to find an optfile that has a matching name. See Section 3.5.2.2.
-mods '«DIR1 DIR2 DIR3 ...»'
specifies a list of directories containing "modifications". These directories contain files with names that may (or may not) exist in the main MITgcm source tree but will be overridden by any identically-named sources within the -mods directories. Note the quotes around the list of directories, necessary given multiple arguments. The order of precedence for versions of files with identical names:
 "mods" directories in the order given (e.g., will use copy of file located in DIR1 instead of DIR2)
 Packages either explicitly specified or included by default
 Packages included due to package dependencies
 The "standard dirs" (which may have been specified by the -standarddirs option below)
-rootdir «/PATH/TO/MITGCMDIR»
specifies the location of the MITgcm repository top directory (ROOTDIR). By default, genmake2 will try to find this location by looking in parent directories from where genmake2 is executed (up to 5 directory levels above the current directory).
In the quickstart example above (Section 3.5.1) we built the executable in the build directory of the experiment. Below, we show how to configure and compile the code on a scratch disk, without having to copy the entire source tree. The only requirement is that you have genmake2 in your $PATH, or you know the absolute path to genmake2. In general, one can compile the code in any given directory by following this procedure. Assuming the model source is in ~/MITgcm, the following commands will build the model in /scratch/exp2run1:
% cd /scratch/exp2run1
% ~/MITgcm/tools/genmake2 -rootdir ~/MITgcm -mods ~/MITgcm/verification/exp2/code
% make depend
% make
As an alternative to specifying the MITgcm repository location through the -rootdir command-line option, genmake2 recognizes the environment variable $MITGCM_ROOTDIR.
-standarddirs «/PATH/TO/STANDARDDIR»
specifies a path to the standard MITgcm directories for source and include files. By default, the model and eesupp directories (src and inc) are the "standard dirs". This option can be used to reset these default standard directories, or instead NOT include either model or eesupp, as done in some specialized configurations.
-oad
generates a makefile for an OpenAD build (see Section 7.5)
-adoptfile «/PATH/TO/FILE»
(or shorter: -adof) specifies the "adjoint" or automatic differentiation options file to be used. The file is analogous to the optfile defined above, but it specifies information for the AD build process. See Section 7.2.3.4.
The default file is located in tools/adjoint_options/adjoint_default and it defines the "TAF" and "TAMC" compiler options.
-mpi
enables certain MPI features (using CPP #define) within the code and is necessary for MPI builds (see Section 3.5.4).
-omp
enables OpenMP code and the compiler flag OMPFLAG (see Section 3.5.5).
-ieee
uses IEEE numerics (requires support in the optfile). This option is typically a good choice if one wants to compare output from different machines running the same code. Note that using IEEE disables all compiler optimizations.
-devel
uses IEEE numerics (requires support in the optfile) and adds additional compiler options to check array bounds, together with other additional warning and debugging flags.
-make «/PATH/TO/GMAKE»
due to the poor handling of soft-links and other bugs common with the make versions provided by commercial unix vendors, GNU make (sometimes called gmake) may be preferred. This option provides a means for specifying the make executable to be used.
While it is possible to use genmake2 command-line options to set the Fortran or C compiler name (-fc and -cc, respectively), we generally recommend setting these through an optfile, as discussed in Section 3.5.2.2. Other genmake2 options are available to enable performance/timing analyses, etc.; see genmake2 -h for more info.
Optfiles in tools/build_options directory:
The purpose of the optfiles is to provide all the compilation options for particular "platforms" (where "platform" roughly means the combination of the hardware and the compiler) and code configurations. Given the combinations of possible compilers and library dependencies (e.g., MPI or netCDF), there may be numerous optfiles available for a single machine. The naming scheme for the majority of the optfiles shipped with the code is OS_HARDWARE_COMPILER, where:
OS is the name of the operating system (generally the lowercase output of a linux terminal uname command).
HARDWARE is a string that describes the CPU type and corresponds to the output from a uname -m command. Some common CPU types:
 amd64: use this code for x86_64 systems (most common, including AMD and Intel 64-bit CPUs)
 ia64: for Intel IA64 systems (e.g., Itanium, Itanium2)
 ppc: for (old) Mac PowerPC systems
COMPILER is the compiler name (generally, the name of the Fortran compiler executable). MITgcm is primarily written in FORTRAN 77, so compiling the code requires a FORTRAN 77 compiler. Any more recent compiler which is backwards compatible with FORTRAN 77 can also be used; for example, the model will build successfully with a Fortran 90 or Fortran 95 compiler. A C99-compatible compiler is also needed, together with a C preprocessor. Some optional packages make use of Fortran 90 constructs (either free-form formatting, or dynamic memory allocation); as such, setups which use these packages require a Fortran 90 or later compiler build.
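To see how this naming scheme plays out on your own machine, here is a small shell sketch; the compiler name gfortran at the end is just an example, substitute the name of your own Fortran compiler:

```shell
# Construct a candidate OS_HARDWARE_COMPILER optfile name from uname output
os=$(uname -s | tr '[:upper:]' '[:lower:]')   # e.g. "linux"
hw=$(uname -m)                                # e.g. "x86_64"
case "$hw" in
  x86_64) hw=amd64 ;;   # optfile names use "amd64" for x86_64 hardware
esac
compiler=gfortran        # example only: use your Fortran compiler's executable name
echo "candidate: tools/build_options/${os}_${hw}_${compiler}"
```

On a 64-bit linux system with GNU Fortran, this would suggest the shipped optfile linux_amd64_gfortran.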
There are existing optfiles that work with many common hardware/compiler configurations; we first suggest you peruse the list in tools/build_options and try to find your platform/compiler configuration. These are the most common:
 linux_amd64_gfortran
 linux_amd64_ifort
 linux_amd64_ifort+impi
 linux_amd64_pgf77
The above optfiles are all for linux x86_64 (64-bit) systems, utilized in many large high-performance computing centers. All of the above will work with single-threaded, MPI, or shared memory (OpenMP) code configurations. gfortran is GNU Fortran, ifort is Intel Fortran, and pgf77 is PGI Fortran (formerly known as "The Portland Group").
Note in the above list there are two ifort optfiles: linux_amd64_ifort+impi is for the specific case of using ifort with the Intel MPI library (a.k.a. impi), which requires special define statements in the optfile (in contrast with Open MPI or MVAPICH2 libraries; see Section 3.5.4). Note that both ifort optfiles require ifort version 11 or higher.
Many clusters nowadays use environment modules, which allow one to easily choose which compiler to use through module load «MODULENAME», automatically configuring your environment for a specific compiler choice (type echo $PATH to see where genmake2 will look for compilers and system software).
In most cases, your platform configuration will be included in the available optfiles
list and will result in a
usable Makefile
being generated. If you are unsure which optfile is correct for your configuration,
you can try not specifying an optfile; on some systems the
genmake2 program will be able to automatically
recognize the hardware, find a compiler and other tools within the userās
path, and then make a best guess as to an appropriate optfile
from the list in the tools/build_options directory.
However, for some platforms and code configurations, new
optfiles must be written. To create a new optfile, it is generally
best to start with one of the defaults and modify it to suit your needs.
Like genmake2, the optfiles are all written in bash (or using a simple sh-compatible syntax). While nearly all
environment variables used within
genmake2 may be specified in the optfiles, the critical ones that
should be defined are:
FC: the Fortran compiler (executable) to use on .F files, e.g., ifort or gfortran, or if using MPI, the mpi-wrapper equivalent, e.g., mpif77
F90C: the Fortran compiler to use on .F90 files (only necessary if your setup includes a package which contains .F90 source code)
CC: similarly, the C compiler to use, e.g., icc or gcc, or if using MPI, the mpi-wrapper equivalent, e.g., mpicc
DEFINES: command-line options passed to the compiler
CPP: the C preprocessor to use, and any necessary command-line options, e.g., cpp -traditional -P
CFLAGS, FFLAGS: command-line compiler flags required for your C and Fortran compilers, respectively, to compile and execute properly. See your C and Fortran compiler documentation for specific options and syntax.
FOPTIM: command-line optimization Fortran compiler settings. See your Fortran compiler documentation for specific options and syntax.
NOOPTFLAGS: command-line settings for special files that should not be optimized using the FOPTIM flags
NOOPTFILES: list of source code files that should be compiled using NOOPTFLAGS settings
INCLUDES: path for additional files (e.g., netcdf.inc, mpif.h) to include in the compilation using the command-line -I option
INCLUDEDIRS: path for additional files to be included in the compilation
LIBS: path for additional library files that need to be linked to generate the final executable, e.g., libnetcdf.a
For example, an excerpt from an optfile which specifies several of these variables (here, for the linux_amd64 architecture using the PGI Fortran compiler) is as follows:
if test "x$MPI" = xtrue ; then
  CC=mpicc
  FC=mpif77
  F90C=mpif90
else
  CC=pgcc
  FC=pgf77
  F90C=pgf90
fi

DEFINES="-DWORDLENGTH=4"
if test "x$ALWAYS_USE_F90" = x1 ; then
  FC=$F90C
else
  DEFINES="$DEFINES -DNML_EXTENDED_F77"
fi
CPP='cpp -traditional -P'
F90FIXEDFORMAT='-Mfixed'
EXTENDED_SRC_FLAG='-Mextend'
GET_FC_VERSION="-V"
OMPFLAG='-mp'

NOOPTFLAGS='-O0'
NOOPTFILES=''

FFLAGS="$FFLAGS -byteswapio -Ktrap=fp"
#- might want to use '-r8' for fizhi pkg:
#FFLAGS="$FFLAGS -r8"

if test "x$IEEE" = x ; then     #- with optimisation:
    FOPTIM='-tp k8-64 -pc=64 -O2 -Mvect=sse'
    #FOPTIM="$FOPTIM -fastsse -O3 -Msmart -Mvect=cachesize:1048576,transform"
else                            #- no optimisation + IEEE :
   #FFLAGS="$FFLAGS -Mdclchk"   #- pkg/zonal_filt does not pass with declaration-check
    FOPTIM='-pc=64 -O0 -Kieee'
fi

F90FLAGS=$FFLAGS
F90OPTIM=$FOPTIM
The above list of environment variables typically specified in an optfile is by no means complete; additional variables may be required for your specific setup and/or your specific Fortran (or C) compiler.
If you write an optfile for an unrepresented machine or compiler, you are strongly encouraged to submit the optfile to the MITgcm project for inclusion. MITgcm developers are willing to provide help writing or modifying optfiles. Please submit the file through the GitHub issue tracker or email the MITgcm-support@mitgcm.org list.
Instructions on how to use optfiles to build MPI-enabled executables are presented in Section 3.5.4.
make commands
Following a successful build of Makefile
, type make depend
. This command
modifies the Makefile
by attaching a (usually, long) list of
files upon which other files depend. The purpose of this is to reduce
recompilation if and when you start to modify the code. The make depend
command also creates local links for all source files from the source directories
(see āmodsā description in Section 3.5.2.1), so that
all source files to be used are visible from the local build directory,
either as hardcopy or as symbolic link.
IMPORTANT NOTE: Editing the source code files in the build directory
will not edit a local copy (since these are just links) but will
edit the original files in model/src (or model/inc)
or in the specified mods
directory. While the latter might
be what you intend, editing the master copy in model/src
is usually NOT what is intended and may cause grief somewhere down the road.
Rather, if you need to add
to the list of modified source code files, place a copy of
the file(s) to edit in the mods
directory, make the edits to
these mods directory files, go back to the build directory and type make Clean, and then rebuild the makefile (these latter steps are critical, or the makefile will not link to this newly edited file).
The final make invokes the C preprocessor to produce the "little f" files (*.f and *.f90) and then compiles them to object code using the specified Fortran compiler and options. The C preprocessor step converts a number of CPP macros and #ifdef statements to actual Fortran and expands C-style #include statements to incorporate header files into the "little f" files. CPP-style macros and #ifdef statements are used to support generating different compiled code for different model configurations.
The result of the build process is an executable with the name
mitgcmuv
.
Additional make "targets" are defined within the makefile to aid in the production of adjoint (Section 7.2.2) and other versions of MITgcm.
On computers with multiple processor cores, the build process can often be sped up appreciably using the command:
% make -j 2
where the "2" can be replaced with a number that corresponds to the number of cores (or discrete CPUs) available.
In addition, there are several housekeeping make clean options that might be useful:
 make clean removes files that make generates (e.g., *.o and *.f files)
 make Clean removes files and links generated by make and make depend; strongly recommended for "unclean" directories which may contain the (perhaps partial) results of previous builds
 make CLEAN removes pretty much everything, including any executables and output from genmake2
Building with MPI
Building MITgcm to use MPI libraries can be complicated due to the variety of different MPI implementations available, their dependencies on or interactions with different compilers, and their often ad hoc locations within file systems. For these reasons, it's generally a good idea to start by finding and reading the documentation for your machine(s) and, if necessary, seeking help from your local systems administrator.
The steps for building MITgcm with MPI support are:
Make sure you have MPI libraries installed on your computer system or cluster. Different Fortran compilers (and different versions of a specific compiler) will generally require a custom version (of a MPI library) built specifically for it. On environment-module-enabled clusters, one typically must first load a Fortran compiler, after which specific MPI libraries for that compiler become available to load. If libraries are not installed, MPI implementations and related tools are available, including Open MPI, MVAPICH2, and MPICH. Ask your systems administrator for assistance in installing these libraries.
Determine the location of your MPI library "wrapper" Fortran compiler, e.g., mpif77 or mpifort etc., which will be used instead of the name of the Fortran compiler (gfortran, ifort, pgf77, etc.) to compile your code. Often the directory in which these wrappers are located will be automatically added to your $PATH environment variable when you perform a module load «SOME_MPI_MODULE»; thus, you will not need to do anything beyond the module load itself. If you are on a cluster that does not support environment modules, you may have to manually add this directory to your path, e.g., type PATH=$PATH:«ADD_ADDITIONAL_PATH_TO_MPI_WRAPPER_HERE» in a bash shell.
Determine the location of the includes file mpif.h and any other MPI-related includes files. Often these files will be located in a subdirectory off the main MPI library include/. In all optfiles in tools/build_options, it is assumed that the environment variable $MPI_INC_DIR specifies this location; $MPI_INC_DIR should be set in your terminal session prior to generating a Makefile.
Determine how many processors (i.e., CPU cores) you will be using in your run, and modify your configuration's SIZE.h (located in a "modified code" directory, as specified in your genmake2 command line). In SIZE.h, you will need to set variables nPx*nPy to match the number of processors you will specify in your run script's MITgcm execution statement (i.e., typically mpirun or some similar command, see Section 3.6.1). Note that MITgcm does not use dynamic memory allocation (a feature of Fortran 90, not FORTRAN 77), so all array sizes, and hence the number of processors to be used in your MPI run, must be specified at compile time in addition to run time. More information about the MITgcm WRAPPER, domain decomposition, and how to configure SIZE.h can be found in Section 6.3.
Build the code with the genmake2 -mpi option using commands such as:
% ../../../tools/genmake2 -mods=../code -mpi -of=«/PATH/TO/OPTFILE»
% make depend
% make
Building with OpenMP
Unlike MPI, which requires installation of additional software support libraries, using shared memory
(OpenMP) for multithreaded
executable builds can be accomplished simply through the genmake2 command-line option -omp:
% ../../../tools/genmake2 -mods=../code -omp -of=«/PATH/TO/OPTFILE»
% make depend
% make
While the most common optfiles specified in Section 3.5.2.2 include support for the -omp option,
some optfiles in tools/build_options do not include support for multithreaded executable builds.
Before using one of the less common optfiles, check whether OMPFLAG
is defined.
Note that one does not need to specify the number of threads until runtime (see Section 3.6.2). However, the default maximum number of threads in MITgcm is set to a (low) value of 4, so if you plan on using more you will need to change this value in eesupp/inc/EEPARAMS.h in your modified code directory.
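For example, to allow up to 16 threads, the relevant declaration in your copied EEPARAMS.h would be changed to read (a sketch; MAX_NO_THREADS is the parameter defined in eesupp/inc/EEPARAMS.h):

```fortran
C     maximum number of threads (raised from the default of 4):
      INTEGER MAX_NO_THREADS
      PARAMETER ( MAX_NO_THREADS = 16 )
```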
Running the model
If compilation finished successfully (Section 3.5) then an
executable called mitgcmuv
will now exist in the local (build
) directory.
To run the model as a single process (i.e., not in parallel) simply
type (assuming you are still in the build
directory):
% cd ../run
% ln -s ../input/* .
% cp ../build/mitgcmuv .
% ./mitgcmuv
Here, we are making links to all the support data files (in ../input/) needed by MITgcm for this experiment, and then copying the executable from the build directory. The ./ in the last step is a safeguard to make sure you use the local executable, in case you have others in your $PATH.
The above command will spew out many lines of text output to your
screen. This output contains details such as parameter values as well as
diagnostics such as mean kinetic energy, largest CFL number, etc. It is
worth keeping this text output with the binary output so we normally
redirect the stdout
stream as follows:
% ./mitgcmuv > output.txt
In the event that the model encounters an error and stops, it is very helpful to include the last few lines of this output.txt file, along with the (stderr) error message, within any bug reports.
For the example experiment in verification/exp2, an example of the
output is kept in verification/exp2/results/output.txt for comparison. You can compare
your output.txt
with the corresponding one for that experiment to
check that your setup indeed works. Congratulations!
Running with MPI
Run the code with the appropriate MPI ārunā or āexecā program provided with your particular implementation of MPI. Typical MPI packages such as Open MPI will use something like:
% mpirun -np 4 ./mitgcmuv
Slightly more complicated scripts may be needed for many machines, since execution of the code may be controlled by both the MPI library and a job scheduling and queueing system such as Slurm, PBS/TORQUE, LoadLeveler, or any of a number of similar tools. See your local cluster documentation or system administrator for the specific syntax required to run on your computing facility.
Running with OpenMP
Assuming the executable mitgcmuv
was built with OpenMP (see Section 3.5.5),
the syntax to run a multi-threaded simulation is the same as running single-threaded
(see Section 3.6), except that the following additional steps are required beforehand:
 Environment variables for the number of threads and the stacksize need to be set prior to executing the model. The exact names of these environment variables differ by Fortran compiler, but are typically some variant of OMP_NUM_THREADS and OMP_STACKSIZE, respectively. For the latter, in your run script we recommend adding the line export OMP_STACKSIZE=400M (or for a C shell variant, setenv OMP_STACKSIZE 400M). If this stacksize setting is insufficient, MITgcm will crash, in which case a larger number can be used. Similarly, OMP_NUM_THREADS should be set to the exact number of threads you require.
 In the file eedata you will need to change namelist parameters nTx and nTy to reflect the number of threads in x and y, respectively (for a single-threaded run, nTx=nTy=1). The value of nTx*nTy must equal the value of environment variable OMP_NUM_THREADS (or its name-equivalent for your Fortran compiler) or MITgcm will terminate during its initialization with an error message.
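Putting these steps together, a minimal bash run-script preamble for a four-thread run might look as follows (400M is the stacksize suggested above; adjust both values to your case):

```shell
# Four threads: file eedata must then contain, e.g., nTx=2 and nTy=2 (nTx*nTy=4)
export OMP_NUM_THREADS=4    # must equal nTx*nTy from eedata
export OMP_STACKSIZE=400M   # raise this if MITgcm crashes at startup
# ./mitgcmuv > output.txt   # then launch exactly as in the single-threaded case
```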
MITgcm will take the number of tiles used in the model (as specified in SIZE.h)
and the number of threads (nTx and nTy from file eedata
),
and in running will spread the tiles out evenly across the threads. This is done independently for x and y. As such,
the number of tiles in x (variable nSx as defined in SIZE.h) must divide evenly by
the number of threads in x (namelist parameter nTx),
and similarly for nSy and nTy, else MITgcm will terminate on initialization.
More information about the MITgcm
WRAPPER, domain decomposition, and how to configure SIZE.h
can be found in Section 6.3.
Output files
The model produces various output files and, when using pkg/mnc
(i.e., netCDF),
sometimes even directories. Depending upon the I/O package(s) selected
at compile time (either pkg/mdsio, pkg/mnc, or both as determined by
packages.conf
) and the runtime flags set (in
data.pkg
), the following output may appear. More complete information describing output files
and model diagnostics is described in Section 9.
Raw binary output files
The "traditional" output files are generated by pkg/mdsio (see Section 9.2). The pkg/mdsio model data are written according to a "meta/data" file format. Each variable is associated with two files with suffix names .data and .meta. The .data file contains the data written in binary form (big-endian by default). The .meta file is a "header" file that contains information about the size and the structure of the .data file. This way of organizing the output is particularly useful when running multi-processor calculations.
At a minimum, the instantaneous "state" of the model is written out, which is made of the following files:
 U.00000nIter: zonal component of velocity field (m/s and positive eastward).
 V.00000nIter: meridional component of velocity field (m/s and positive northward).
 W.00000nIter: vertical component of velocity field (ocean: m/s and positive upward, atmosphere: Pa/s and positive towards increasing pressure, i.e., downward).
 T.00000nIter: potential temperature (ocean: \(^{\circ}\mathrm{C}\), atmosphere: \(\mathrm{K}\)).
 S.00000nIter: ocean: salinity (psu), atmosphere: water vapor (g/kg).
 Eta.00000nIter: ocean: surface elevation (m), atmosphere: surface pressure anomaly (Pa).
The chain 00000nIter consists of ten digits that specify the iteration number at which the output is written. For example, U.0000000300 is the zonal velocity at iteration 300.
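Since the suffix is simply the iteration number zero-padded to ten digits, file names are easy to generate in run or post-processing scripts, e.g. with printf:

```shell
# Zero-pad iteration 300 to ten digits to form the file prefix "U.0000000300"
niter=300
prefix=$(printf 'U.%010d' "$niter")
echo "$prefix"   # U.0000000300  (the run writes ${prefix}.data and ${prefix}.meta)
```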
In addition, a "pickup" or "checkpoint" file called pickup.00000nIter is written out. This file represents the state of the model in a condensed form and is used for restarting the integration (at the specific iteration number). Some additional parameterizations and packages also produce separate pickup files, e.g.,
 pickup_cd.00000nIter if the C-D scheme is used (see C_D Scheme)
 pickup_seaice.00000nIter if the seaice package is turned on (see SEAICE Package)
 pickup_ptracers.00000nIter if passive tracers are included in the simulation (see PTRACERS Package)
Rolling checkpoint files are the same as the pickup files but are named differently. Their names contain the chain ckptA or ckptB instead of 00000nIter. They can be used to restart the model but are overwritten every other time they are output, to save disk space during long integrations.
NetCDF output files
pkg/mnc is a set of routines written to read, write, and
append netCDF files.
Unlike the pkg/mdsio output, the pkg/mnc-generated output is usually
placed within a subdirectory with a name such as mnc_output_ (by default,
(by default,
netCDF tries to append, rather than overwrite, existing files,
so a unique output directory is helpful for each separate run).
The pkg/mnc output files are all in the "self-describing" netCDF format and can thus be browsed and/or plotted using tools such as:
 ncdump is a utility which is typically included with every netCDF install, and converts the netCDF binaries into formatted ASCII text files.
 ncview is a very convenient and quick way to plot netCDF data and it runs on most platforms. Panoply is a similar alternative.
 MATLAB, GrADS, IDL and other common post-processing environments provide built-in netCDF interfaces.
Looking at the output
MATLAB
Raw binary output
The repository includes a few MATLAB utilities to read binary output files written in the pkg/mdsio format. The MATLAB scripts are located in the directory utils/matlab under the root tree. The script utils/matlab/rdmds.m reads the data. Look at the comments inside the script to see how to use it.
Some examples of reading and visualizing some output in Matlab:
% matlab
>> H=rdmds('Depth');
>> contourf(H');colorbar;
>> title('Depth of fluid as used by model');
>> eta=rdmds('Eta',10);
>> imagesc(eta');axis ij;colorbar;
>> title('Surface height at iter=10');
>> eta=rdmds('Eta',[0:10:100]);
>> for n=1:11; imagesc(eta(:,:,n)');axis ij;colorbar;pause(.5);end
NetCDF output
Similar scripts for netCDF output (e.g., utils/matlab/rdmnc.m) are available and they are described in Section 9.3.
Python
Raw binary output
The repository includes Python scripts for reading the binary pkg/mdsio format under utils/python. The following example shows how to load in some data:
# python
import mds
Eta = mds.rdmds('Eta', itrs=10)
The docstring for mds.rdmds
(see file utils/python/MITgcmutils/MITgcmutils/mds.py)
contains much more detail about using this function and the options that it takes.
NetCDF output
The netCDF output is currently produced with one file per processor. This means the individual tiles need to be stitched together to create a single netCDF file that spans the model domain. The script utils/python/MITgcmutils/scripts/gluemncbig can do this efficiently from the command line.
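For example, assuming the per-tile files for variable state sit in a (hypothetical) directory mnc_output_0001 and gluemncbig is on your $PATH, the command would take the form below; -o names the combined output file and the remaining arguments are the tile files:

```shell
# Assemble the gluemncbig command; inspect it, then run it with: eval "$cmd"
cmd='gluemncbig -o state_glob.nc mnc_output_0001/state.*.nc'
echo "$cmd"
```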
The following example shows how to use the xarray python package to read the resulting netCDF file into Python:
# python
import xarray as xr
Eta = xr.open_dataset('Eta.nc')
Customizing the Model Configuration - Code Parameters and Compilation Options
Model Array Dimensions
MITgcm's array dimensions need to be configured for each unique model domain. The size of each tile (in dimensions \(x\), \(y\), and vertical coordinate \(r\)), the "overlap" region of each tile (in \(x\) and \(y\)), the number of tiles in the \(x\) and \(y\) dimensions, and the number of processes (using MPI) in the \(x\) and \(y\) dimensions all need to be specified in SIZE.h. From these parameters, the global domain-size variables Nx and Ny are computed by the model. See a more detailed discussion of SIZE.h parameters in the barotropic gyre tutorial and a more technical discussion in Section 6.3.1.
Parameter  Default SIZE.h  Description 

sNx  30  number of points in \(x\) dimension in a single tile 
sNy  15  number of points in \(y\) dimension in a single tile 
Nr  4  number of points in \(r\) dimension 
OLx  2  number of āoverlapā points in \(x\) dimension for a tile 
OLy  2  number of āoverlapā points in \(y\) dimension for a tile 
nSx  2  number of tiles per process in \(x\) dimension 
nSy  4  number of tiles per process in \(y\) dimension 
nPx  1  number of processes in \(x\) dimension 
nPy  1  number of processes in \(y\) dimension 
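With the defaults in the table above, the global domain size works out as tile size × tiles per process × number of processes in each dimension; a quick check with shell arithmetic:

```shell
# Nx = sNx*nSx*nPx and Ny = sNy*nSy*nPy, using the default SIZE.h values
sNx=30; sNy=15; nSx=2; nSy=4; nPx=1; nPy=1
Nx=$((sNx * nSx * nPx))   # 30*2*1 = 60
Ny=$((sNy * nSy * nPy))   # 15*4*1 = 60
echo "Nx=$Nx Ny=$Ny"      # Nx=60 Ny=60
```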
Note the repository version of SIZE.h includes several lines of text at the top that will halt compilation with errors. Thus, to use MITgcm you will need to copy SIZE.h to a code modification directory and make edits, including deleting or commenting out the offending lines of text.
C Preprocessor Options
The CPP flags relative to the "numerical model" part of the code are defined and set in the file CPP_OPTIONS.h in the directory model/inc/. In the parameter tables in Section 3.8 we have noted CPP options that need to be changed from the default to enable specific runtime parameters to be used properly. Also note many of the options below are for less-common situations or are somewhat obscure, so newer users of MITgcm are encouraged to jump to Section 3.8 where more basic runtime parameters are discussed.
CPP Flag Name  Default  Description 

SHORTWAVE_HEATING  #undef  provide separate shortwave heating file, allowing shortwave to penetrate below surface layer 
ALLOW_GEOTHERMAL_FLUX  #undef  include code for applying geothermal heat flux at the bottom of the ocean 
ALLOW_FRICTION_HEATING  #undef  include code to allow heating due to friction (and momentum dissipation) 
ALLOW_ADDFLUID  #undef  allow mass source or sink of fluid in the interior (3D generalization of oceanic realfresh water flux) 
ATMOSPHERIC_LOADING  #define  include code for atmospheric pressureloading (and seaiceloading) on ocean surface 
ALLOW_BALANCE_FLUXES  #undef  include balancing surface forcing fluxes code 
ALLOW_BALANCE_RELAX  #undef  include balancing surface forcing relaxation code 
CHECK_SALINITY_FOR_NEGATIVE_VALUES  #undef  include code checking for negative salinity 
EXCLUDE_FFIELDS_LOAD  #undef  exclude external forcingfields load; code allows reading and simple linear time interpolation of oceanic forcing fields, if no specific pkg (e.g., pkg/exf) is used to compute them 
INCLUDE_PHIHYD_CALCULATION_CODE  #define  include code to calculate \(\phi_{hyd}\) 
INCLUDE_CONVECT_CALL  #define  include code for convective adjustment mixing algorithm 
INCLUDE_CALC_DIFFUSIVITY_CALL  #define  include code that calculates (tracer) diffusivities and viscosities 
ALLOW_3D_DIFFKR  #undef  allow full 3D specification of vertical diffusivity 
ALLOW_BL79_LAT_VARY  #undef  allow latitudinally varying Bryan and Lewis 1979 [BL79] vertical diffusivity 
EXCLUDE_PCELL_MIX_CODE  #undef  exclude code for partialcell effect (physical or enhanced) in vertical mixing; this allows accounting for partialcell in vertical viscosity and diffusion, either from gridspacing reduction effect or as artificially enhanced mixing near surface & bottom for too thin gridcell 
ALLOW_SOLVE4_PS_AND_DRAG  #undef  include code for combined surface pressure and drag implicit solver 
INCLUDE_IMPLVERTADV_CODE  #define  include code for implicit vertical advection 
ALLOW_ADAMSBASHFORTH_3  #undef  include code for Adams-Bashforth 3rd-order 
EXACT_CONSERV  #define  include code for "exact conservation" of fluid in free-surface formulation (recompute divergence after pressure solver) 
NONLIN_FRSURF  #undef  allow the use of non-linear free-surface formulation; implies that grid-cell thickness (hFactors) varies with time 
ALLOW_NONHYDROSTATIC  #undef  include nonhydrostatic and 3D pressure solver codes 
ALLOW_EDDYPSI  #undef  include GMlike eddy stress in momentum code (untested, not recommended) 
ALLOW_CG2D_NSA  #undef  use nonselfadjoint (NSA) conjugategradient solver 
ALLOW_SRCG  #define  include code for single reduction conjugate gradient solver 
SOLVE_DIAGONAL_LOWMEMORY  #undef  low memory footprint (not suitable for AD) choice for implicit solver routines solve_*diagonal.F 
SOLVE_DIAGONAL_KINNER  #undef  choice for implicit solver routines solve_*diagonal.F suitable for AD 
COSINEMETH_III  #define  selects implementation form of \(\cos{\varphi}\) scaling of biharmonic term for viscosity (note, CPP option for tracer diffusivity set independently in GAD_OPTIONS.h) 
ISOTROPIC_COS_SCALING  #undef  selects isotropic scaling of harmonic and biharmonic viscous terms when using the \(\cos{\varphi}\) scaling (note, CPP option for tracer diffusivity set independently in GAD_OPTIONS.h) 
By default, MITgcm includes several core packages, i.e., these packages are enabled during
genmake2 execution if a file packages.conf
is not found.
See Section 8.1.1 for more information about packages.conf
, and see
pkg/pkg_groups for more information about default packages and package groups.
These default packages are as follows:
 pkg/mom_common
 pkg/mom_fluxform
 pkg/mom_vecinv
 pkg/generic_advdiff
 pkg/debug
 pkg/mdsio
 pkg/rw
 pkg/monitor
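For illustration, a minimal packages.conf that replaces these defaults might contain (hypothetical contents; valid package and package-group names are listed in pkg/pkg_groups):

```
#-- packages.conf: one package or package-group name per line
oceanic
ptracers
```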
Additional CPP options that affect the model core code are set in files ${PKG}_OPTIONS.h located in these packages' directories. Similarly, optional (non-default) packages also include package-specific CPP options that must be set in files ${PKG}_OPTIONS.h.
The file eesupp/inc/CPP_EEOPTIONS.h does not contain any CPP options that typically will need to be modified by users.
Customizing the Model Configuration - Runtime Parameters
When you are ready to run the model in the configuration you want, the
most straightforward approach is to use and adapt the setup of a tutorial or verification
experiment (described in Section 4) that is the closest to your
configuration. Then, the amount of setup will be minimized. In this
section, we document the complete list of MITgcm model namelist runtime
parameters set in file data
, which needs to be located in the
directory where you will run the model.
Model parameters are defined and
declared in the file PARAMS.h and their default values are
generally set in the routine set_defaults.F, or otherwise when initialized
in the routine ini_parms.F. Section 3.8.9
documents the "execution environment" namelist parameters
in file eedata, which must also reside in the current run directory.
Note that runtime parameters used by (non-default) MITgcm packages are not documented here but rather in Section 8
and Section 9, and prescribed in package-specific data.${pkg}
namelist files which are read in via
package-specific ${pkg}_readparms.F
where ${pkg}
is the package name
(see Section 8.1.1).
In what follows, model parameters are grouped into categories related to
configuration/computational domain, algorithmic parameters, equations solved in the model, parameters related to model forcing, and
simulation controls. The tables below specify the namelist parameter name, the namelist parameter
group in data
(and eedata
in Section 3.8.9), the default value, and a short description of its function.
Runtime parameters that require nondefault CPP options to be set prior to compilation (see Section 3.7) for proper use are noted.
Parameters: Configuration, Computational Domain, Geometry, and Time-Discretization
Model Configuration
buoyancyRelation is
set to OCEANIC
by default, which employs a \(z\)-coordinate vertical axis.
To simulate an ocean using pressure coordinates in the vertical, set it to OCEANICP.
For atmospheric simulations,
buoyancyRelation needs to be set to ATMOSPHERIC,
which also uses pressure as the vertical coordinate.
The default model configuration is hydrostatic; to run a non-hydrostatic simulation, set the logical
variable nonHydrostatic to .TRUE.
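A minimal sketch of the corresponding namelist in file data (values illustrative, not a recommendation):

```fortran
# (illustrative) fragment of file "data"
 &PARM01
  buoyancyRelation='OCEANIC',
  nonHydrostatic=.TRUE.,
 &
```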
Parameter  Group  Default  Description 

buoyancyRelation  PARM01  OCEANIC  buoyancy relation (OCEANIC , OCEANICP , or ATMOSPHERIC ) 
quasiHydrostatic  PARM01  FALSE  quasi-hydrostatic formulation on/off flag 
rhoRefFile  PARM01  ' '  filename for reference density profile (kg/m^{3}); activates anelastic form of model 
nonHydrostatic  PARM01  FALSE  non-hydrostatic formulation on/off flag; requires #define ALLOW_NONHYDROSTATIC 
Grid
Four different grids are available: Cartesian, spherical polar, cylindrical, and curvilinear (which includes the cubed sphere). The grid is set through the logical variables usingCartesianGrid, usingSphericalPolarGrid, usingCylindricalGrid, and usingCurvilinearGrid. Note that the cylindrical grid is designed for modeling a rotating tank, so that \(x\) is the azimuthal direction, \(y\) is the radial direction, and \(r\) is the vertical coordinate (see tutorial rotating tank).
The variable xgOrigin sets the position of the westernmost grid-cell face in the \(x\) dimension (Cartesian: meters; spherical and cylindrical: degrees). For a Cartesian or spherical grid, the southern boundary is defined through the variable ygOrigin, which corresponds to the latitude of the southernmost grid-cell face (Cartesian: meters; spherical: degrees). For a cylindrical grid, a positive ygOrigin (m) adds an inner cylindrical boundary at the center of the tank. The resolution along the \(x\) and \(y\) directions is controlled by the 1D arrays delX (meters for a Cartesian grid, degrees otherwise) and delY (meters for Cartesian and cylindrical grids, degrees for spherical). On a spherical polar grid, you might decide to set the variable cosPower, which is set to 0 by default and which represents \(n\) in \((\cos\varphi)^n\), the power of cosine of latitude multiplying horizontal viscosity and tracer diffusivity. The vertical grid spacing is set through the 1D array delR (\(z\)-coordinates: in meters; \(p\)-coordinates: in Pa). Using a curvilinear grid requires complete specification of all horizontal MITgcm grid variables, either through a default filename (link to new doc section) or as specified by horizGridFile.
The variable seaLev_Z represents the standard position of sea level, in meters. This is typically set to 0 m for the ocean (default value). If instead pressure is used as the vertical coordinate, the pressure at the top (of the atmosphere or ocean) is set through top_Pres, typically 0 Pa. As such, these variables are analogous to xgOrigin and ygOrigin in defining the vertical grid axis. But they are also used for a second purpose: in a \(z\)-coordinate setup, top_Pres sets a reference top pressure (required in a nonlinear equation of state computation, for example); note that 1 bar (i.e., typical Earth atmospheric sea-level pressure) is added already, so the default is 0 Pa. Similarly, for a \(p\)-coordinate setup, seaLev_Z is used to set a reference geopotential (after gravity scaling) at the top of the ocean or bottom of the atmosphere.
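As an illustrative sketch (the grid origin, spacing, and 15-level delR profile below are assumptions chosen for a coarse global ocean, not recommendations), a PARM04 fragment for a 4x4-degree spherical polar grid might read:

```fortran
# (illustrative) 4x4-degree spherical polar grid, 15 vertical levels
 &PARM04
  usingSphericalPolarGrid=.TRUE.,
  xgOrigin=0.,
  ygOrigin=-80.,
  delX=90*4.,
  delY=40*4.,
  delR=50.,70.,100.,140.,190.,240.,290.,340.,390.,440.,
       490.,540.,590.,640.,690.,
 &
```

The n*value shorthand (e.g., 90*4.) is standard namelist repetition syntax.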
Parameter  Group  Default  Description 

usingCartesianGrid  PARM04  TRUE  use Cartesian grid/coordinates on/off flag 
usingSphericalPolarGrid  PARM04  FALSE  use spherical grid/coordinates on/off flag 
usingCylindricalGrid  PARM04  FALSE  use cylindrical grid/coordinates on/off flag 
usingCurvilinearGrid  PARM04  FALSE  use curvilinear grid/coordinates on/off flag 
xgOrigin  PARM04  0.0  west edge \(x\)-axis origin (Cartesian: m; spherical and cylindrical: degrees longitude) 
ygOrigin  PARM04  0.0  south edge \(y\)-axis origin (Cartesian and cylindrical: m; spherical: degrees latitude) 
dxSpacing  PARM04  unset  \(x\)-axis uniform grid spacing, separation between cell faces (Cartesian: m; spherical and cylindrical: degrees) 
delX  PARM04  dxSpacing  1D array of \(x\)-axis grid spacing, separation between cell faces (Cartesian: m; spherical and cylindrical: degrees) 
delXFile  PARM04  ' '  filename containing 1D array of \(x\)-axis grid spacing 
dySpacing  PARM04  unset  \(y\)-axis uniform grid spacing, separation between cell faces (Cartesian and cylindrical: m; spherical: degrees) 
delY  PARM04  dySpacing  1D array of \(y\)-axis grid spacing, separation between cell faces (Cartesian and cylindrical: m; spherical: degrees) 
delYFile  PARM04  ' '  filename containing 1D array of \(y\)axis grid spacing 
cosPower  PARM01  0.0  power law \(n\) in \((\cos\varphi)^n\) factor for horizontal (harmonic or biharmonic) viscosity and tracer diffusivity (spherical polar) 
delR  PARM04  computed using delRc  vertical grid spacing 1D array ([\(r\)] unit) 
delRc  PARM04  computed using delR  vertical cell center spacing 1D array ([\(r\)] unit) 
delRFile  PARM04  ' '  filename for vertical grid spacing 1D array ([\(r\)] unit) 
delRcFile  PARM04  ' '  filename for vertical cell center spacing 1D array ([\(r\)] unit) 
rSphere  PARM04  6.37E+06  radius of sphere for spherical polar or curvilinear grid (m) 
seaLev_Z  PARM04  0.0  reference height of sea level (m) 
top_Pres  PARM04  0.0  top pressure (\(p\)coordinates) or top reference pressure (\(z\)coordinates) (Pa) 
selectFindRoSurf  PARM01  0  select method to determine surface reference pressure from orography (atmos.only) 
horizGridFile  PARM04  ' '  filename containing full set of horizontal grid variables (curvilinear) 
radius_fromHorizGrid  PARM04  rSphere  radius of sphere used in input curvilinear horizontal grid file (m) 
phiEuler  PARM04  0.0  Euler angle, rotation about original \(z\)axis (spherical polar) (degrees) 
thetaEuler  PARM04  0.0  Euler angle, rotation about new \(x\)axis (spherical polar) (degrees) 
psiEuler  PARM04  0.0  Euler angle, rotation about new \(z\)axis (spherical polar) (degrees) 
Topography - Full and Partial Cells
For the ocean, the topography is read from a file that contains a 2D (\(x,y\)) map of bathymetry, in meters for \(z\)-coordinates, in pascals for \(p\)-coordinates. The bathymetry is specified by entering the vertical position of the ocean floor relative to the surface, so by convention in \(z\)-coordinates bathymetry is specified as negative numbers ("depth" is defined as positive-definite) whereas in \(p\)-coordinates bathymetry data is positive. The file name is specified by the variable bathyFile. See our introductory tutorial setup in Section 4.1 for additional details on the file format. Note no changes are required in the model source code to represent enclosed, periodic, or double periodic domains: periodicity is assumed by default and is suppressed by setting the depths to 0 m for the cells at the limits of the computational domain.
To use the partial cell capability, the variable hFacMin needs to be set to a value between 0.0 and 1.0 (it is set to 1.0 by default) corresponding to the minimum fractional size of a grid cell. For example, if a grid cell is 500 m thick and hFacMin is set to 0.1, the minimum thickness for a "thin cell" at this specific grid cell is 50 m. Thus, if the specified bathymetry depth were to fall exactly in the middle of this 500 m thick grid cell, the initial model variable hFacC(\(x,y,r\)) would be set to 0.5. If the specified bathymetry depth fell within the top 50 m of this grid cell (i.e., less than hFacMin), the model bathymetry would snap to the nearest legal value (i.e., initial hFacC(\(x,y,r\)) would be equal to 0.0 or 0.1 depending on whether the depth was within 0-25 m or 25-50 m, respectively). Also note that while specified bathymetry bottom depths (or pressures) need not coincide with the model's levels as deduced from delR, any depth falling below the model's defined vertical axis is truncated.
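A minimal sketch of the corresponding namelist settings (the bathymetry filename is a hypothetical example):

```fortran
# (illustrative) ocean bathymetry file plus partial cells
 &PARM01
  hFacMin=0.1,
 &
 &PARM05
  bathyFile='bathymetry.bin',
 &
```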
Parameter  Group  Default  Description 

bathyFile  PARM05  ' '  filename for 2D bathymetry (ocean) (\(z\)coor.: m, negative; \(p\)coor.: Pa, positive) 
topoFile  PARM05  ' '  filename for 2D surface topography (atmosphere) (m) 
addWwallFile  PARM05  ' '  filename for 2D western cell-edge "thin wall" 
addSwallFile  PARM05  ' '  filename for 2D southern cell-edge "thin wall" 
hFacMin  PARM01  1.0E+00  minimum fraction size of a cell 
hFacMinDr  PARM01  1.0E+00  minimum dimensional size of a cell ([\(r\)] unit) 
hFacInf  PARM01  2.0E-01  lower threshold fraction for surface cell; for nonlinear free surface only, see parameter nonlinFreeSurf 
hFacSup  PARM01  2.0E+00  upper threshold fraction for surface cell; for nonlinear free surface only, see parameter nonlinFreeSurf 
useMin4hFacEdges  PARM04  FALSE  set hFacW, hFacS as minimum of adjacent hFacC on/off flag 
pCellMix_select  PARM04  0  option/factor to enhance mixing at the surface or bottom (0-99) 
pCellMix_maxFac  PARM04  1.0E+04  maximum enhanced mixing factor for too thin partialcell (nondim.) 
pCellMix_delR  PARM04  0.0  thickness criteria for too thin partialcell ([\(r\)] unit) 
Physical Constants
Parameter  Group  Default  Description 

rhoConst  PARM01  rhoNil  vertically constant reference density (Boussinesq) (kg/m^{3}) 
gravity  PARM01  9.81E+00  gravitational acceleration (m/s^{2}) 
gravityFile  PARM01  ' '  filename for 1D gravity vertical profile (m/s^{2}) 
gBaro  PARM01  gravity  gravity constant in barotropic equation (m/s^{2}) 
Rotation
For a Cartesian or cylindrical grid, the Coriolis parameter \(f\) is set through the variables f0 (in s^{-1}) and beta (\(\frac{\partial f}{\partial y}\); in m^{-1}s^{-1}), which corresponds to a Coriolis parameter \(f = f_o + \beta y\) (the so-called \(\beta\)-plane).
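A sketch of a mid-latitude \(\beta\)-plane setting (values illustrative, matching the defaults quoted below):

```fortran
# (illustrative) beta-plane Coriolis parameters
 &PARM01
  f0=1.E-4,
  beta=1.E-11,
 &
```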
Parameter  Group  Default  Description 

rotationPeriod  PARM01  8.6164E+04  rotation period (s) 
omega  PARM01  \(2\pi/\)rotationPeriod  angular velocity (rad/s) 
selectCoriMap  PARM01  depends on grid (Cartesian and cylindrical=1, spherical and curvilinear=2)  Coriolis map options

f0  PARM01  1.0E-04  reference Coriolis parameter (Cartesian or cylindrical grid) (1/s) 
beta  PARM01  1.0E-11  \(\beta\) (Cartesian or cylindrical grid) (m^{-1}s^{-1}) 
fPrime  PARM01  0.0  \(2 \Omega \cos{\phi}\) parameter (Cartesian or cylindrical grid) (1/s); i.e., for \(\cos{\varphi}\) Coriolis terms from horizontal component of rotation vector (also sometimes referred to as reciprocal Coriolis parm.) 
Free Surface
The logical variables rigidLid and implicitFreeSurface specify
your choice for the ocean upper boundary (or lower boundary if using \(p\)-coordinates);
set one to .TRUE. and the other to .FALSE.
These settings affect the calculations of surface pressure (for the ocean) or
surface geopotential (for the atmosphere); see Section 3.8.2.
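A sketch of the (default) implicit free surface choice, written out explicitly:

```fortran
# (illustrative) implicit free surface upper boundary
 &PARM01
  implicitFreeSurface=.TRUE.,
  rigidLid=.FALSE.,
 &
```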
Parameter  Group  Default  Description 

implicitFreeSurface  PARM01  TRUE  implicit free surface on/off flag 
rigidLid  PARM01  FALSE  rigid lid on/off flag 
useRealFreshWaterFlux  PARM01  FALSE  use true E-P-R freshwater flux (changes free surface/sea level) on/off flag 
implicSurfPress  PARM01  1.0E+00  implicit fraction of the surface pressure gradient (0-1) 
implicDiv2Dflow  PARM01  1.0E+00  implicit fraction of the barotropic flow divergence (0-1) 
implicitNHPress  PARM01  implicSurfPress  implicit fraction of the non-hydrostatic pressure gradient (0-1); for non-hydrostatic only, see parameter nonHydrostatic 
nonlinFreeSurf  PARM01  0  nonlinear free surface options (-1,0,1,2,3; see Table 2.1); requires #define NONLIN_FRSURF 
select_rStar  PARM01  0  vertical coordinate option
see Table 2.1; requires #define NONLIN_FRSURF 
selectNHfreeSurf  PARM01  0  non-hydrostatic free surface formulation option
requires non-hydrostatic formulation, see parameter nonHydrostatic 
exactConserv  PARM01  FALSE  exact total volume conservation (recompute divergence after pressure solver) on/off flag 
Time-Discretization
The time steps are set through the real variables deltaTMom and deltaTtracer (in seconds) which represent the time step for the momentum and tracer equations, respectively (or you can prescribe a single time step value for all parameters using deltaT). The model "clock" is defined by the variable deltaTClock (in seconds) which determines the I/O frequencies and is used in tagging output. Time in the model is thus computed as: model time = baseTime + iteration number × deltaTClock.
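A sketch of distinct momentum and tracer time steps (the specific values are illustrative, not a stability analysis):

```fortran
# (illustrative) PARM03 time-step settings
 &PARM03
  deltaTmom=1200.,
  deltaTtracer=3600.,
  deltaTClock=3600.,
 &
```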
Parameter  Group  Default  Description 

deltaT  PARM03  0.0  default value used for model time step parameters (s) 
deltaTClock  PARM03  deltaT  timestep used for model clock (s): used for I/O frequency and tagging output and checkpoints 
deltaTmom  PARM03  deltaT  momentum equation timestep (s) 
deltaTtracer  PARM03  deltaT  tracer equation timestep (s) 
dTtracerLev  PARM03  deltaTtracer  tracer equation timestep specified at each vertical level (s) 
deltaTfreesurf  PARM03  deltaTmom  freesurface equation timestep (s) 
baseTime  PARM03  0.0  model base time corresponding to iteration 0 (s) 
Parameters: Main Algorithmic Parameters
Pressure Solver
By default, a hydrostatic simulation is assumed and a 2D elliptic equation is used to invert the pressure field. If using a non-hydrostatic configuration, the pressure field is inverted through a 3D elliptic equation (note this capability is not yet available for the atmosphere). The parameters controlling the behavior of the elliptic solvers are the variables cg2dMaxIters and cg2dTargetResidual for the 2D case and cg3dMaxIters and cg3dTargetResidual for the 3D case.
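A sketch tightening the 2D solver beyond its defaults (values illustrative):

```fortran
# (illustrative) 2D conjugate gradient solver controls
 &PARM02
  cg2dMaxIters=300,
  cg2dTargetResidual=1.E-9,
 &
```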
Parameter  Group  Default  Description 

cg2dMaxIters  PARM02  150  upper limit on 2D conjugate gradient solver iterations 
cg2dTargetResidual  PARM02  1.0E-07  2D conjugate gradient target residual (nondim. due to RHS normalization) 
cg2dTargetResWunit  PARM02  1.0E+00  2D conjugate gradient target residual (\(\dot{r}\) units); <0: use RHS normalization, i.e., cg2dTargetResidual instead 
cg2dPreCondFreq  PARM02  1  frequency (in number of iterations) for updating cg2d preconditioner; for nonlinear free surface only, see parameter nonlinFreeSurf 
cg2dUseMinResSol  PARM02  0 unless flat-bottom, Cartesian  =0: use last-iteration/converged solution; =1: use solver minimum-residual solution 
cg3dMaxIters  PARM02  150  upper limit on 3D conjugate gradient solver iterations; requires #define ALLOW_NONHYDROSTATIC 
cg3dTargetResidual  PARM02  1.0E-07  3D conjugate gradient target residual (nondim. due to RHS normalization); requires #define ALLOW_NONHYDROSTATIC 
useSRCGSolver  PARM02  FALSE  use conjugate gradient solver with single reduction (single call of mpi_allreduce) 
printResidualFreq  PARM02  1 unless debugLevel >4  frequency (in number of iterations) of printing conjugate gradient residual 
integr_GeoPot  PARM01  2  select method to integrate geopotential

uniformLin_PhiSurf  PARM01  TRUE  use uniform \(b_s\) relation for \(\phi_s\) on/off flag 
deepAtmosphere  PARM04  FALSE  don't make the thin shell/shallow water approximation 
nh_Am2  PARM01  1.0E+00  non-hydrostatic terms scaling factor; requires #define ALLOW_NONHYDROSTATIC 
Time-Stepping Algorithm
The Adams-Bashforth stabilizing parameter is set through the
variable abEps (dimensionless). The staggered baroclinic time
stepping algorithm can be activated by setting the logical variable
staggerTimeStep to .TRUE.
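A sketch combining both settings (the abEps value is illustrative, larger than the default for extra damping):

```fortran
# (illustrative) staggered time stepping with a larger AB-2 weight
 &PARM01
  staggerTimeStep=.TRUE.,
 &
 &PARM03
  abEps=0.1,
 &
```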
Parameter  Group  Default  Description 

abEps  PARM03  1.0E-02  Adams-Bashforth-2 stabilizing weight (nondim.) 
alph_AB  PARM03  0.5E+00  Adams-Bashforth-3 primary factor (nondim.); requires #define ALLOW_ADAMSBASHFORTH_3 
beta_AB  PARM03  5/12  Adams-Bashforth-3 secondary factor (nondim.); requires #define ALLOW_ADAMSBASHFORTH_3 
staggerTimeStep  PARM01  FALSE  use staggered time stepping (thermodynamic vs. flow variables) on/off flag 
multiDimAdvection  PARM01  TRUE  use multidim. advection algorithm in schemes where non multidim. is possible on/off flag 
implicitIntGravWave  PARM01  FALSE  treat internal gravity waves implicitly on/off flag; requires #define ALLOW_NONHYDROSTATIC 
Parameters: Equation of State
The form of the equation of state is controlled by the model configuration and eosType.
For the atmosphere, eosType must be set to IDEALGAS.
For the ocean, several forms of the equation of state are available:
For a linear approximation, set eosType to
LINEAR, and you will need to specify the thermal and haline expansion coefficients, represented by the variables tAlpha (in K^{-1}) and sBeta (in psu^{-1}). Because the model equations are written in terms of perturbations, a reference thermodynamic state needs to be specified. This is done through the 1D arrays tRef and sRef. tRef specifies the reference potential temperature profile (in ^{o}C for the ocean and K for the atmosphere) starting from the level k=1. Similarly, sRef specifies the reference salinity profile (in psu or g/kg) for the ocean or the reference specific humidity profile (in g/kg) for the atmosphere.
MITgcm offers several approximations to the full (oceanic) nonlinear equation of state that can be selected as eosType:
'POLYNOMIAL'
:This approximation is based on the Knudsen formula (see Bryan and Cox 1972 [BC72]). For this option you need to generate a file of polynomial coefficients called
POLY3.COEFFS. To do this, use the program utils/knudsen2/knudsen2.f under the model tree (a Makefile
is available in the same directory; you will need to edit the number and the values of the vertical levels in knudsen2.f so that they match those of your configuration).
'UNESCO'
:The UNESCO equation of state formula (IES80) of Fofonoff and Millard (1983) [FRM83]. This equation of state assumes in-situ temperature, which is not a model variable; its use is therefore discouraged.
'JMD95Z'
:A modified UNESCO formula by Jackett and McDougall (1995) [JM95], which uses the model variable potential temperature as input. The 'Z' indicates that this equation of state uses a horizontally and temporally constant pressure \(p_{0}=g\rho_{0}z\).
'JMD95P'
:A modified UNESCO formula by Jackett and McDougall (1995) [JM95], which uses the model variable potential temperature as input. The 'P' indicates that this equation of state uses the actual hydrostatic pressure of the last time step. Lagging the pressure in this way requires an additional pickup file for restarts.
'MDJWF'
:A more accurate and less expensive equation of state than UNESCO by McDougall et al. (2003) [MJWF03], also using the model variable potential temperature as input. It also requires lagging the pressure and therefore an additional pickup file for restarts.
'TEOS10'
:TEOS-10 is based on a Gibbs function formulation from which all thermodynamic properties of seawater (density, enthalpy, entropy, sound speed, etc.) can be derived in a thermodynamically consistent manner; see http://www.teos10.org. See IOC et al. (2010) [ISI10], McDougall and Barker (2011) [MB11], and Roquet et al. (2015) [RMMB15] for implementation details. It also requires lagging the pressure and therefore an additional pickup file for restarts. Note at this time a full implementation of TEOS-10 (i.e., ocean variables of conservative temperature and practical salinity, including consideration of surface forcings) has not been implemented; also note the original 48-term polynomial is used, not the newer, preferred 75-term polynomial.
For these nonlinear approximations, no reference profile of temperature or salinity is required, except for a setup where implicitIntGravWave is set to
.TRUE.
or selectP_inEOS_Zc=1.
Note that salinity can be expressed in either practical salinity units (psu, i.e., unitless) or g/kg, depending on the choice of equation of state. See Millero (2010) [Mil10] for a detailed discussion of salinity measurements, and why use of the latter is preferred, in the context of the ocean equation of state.
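A sketch of a linear equation of state with uniform reference profiles (the 15-level repetition counts and reference values are illustrative assumptions):

```fortran
# (illustrative) linear EOS with explicit expansion coefficients
 &PARM01
  eosType='LINEAR',
  tAlpha=2.E-4,
  sBeta =7.4E-4,
  tRef=15*20.,
  sRef=15*35.,
 &
```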
Parameter  Group  Default  Description 

eosType  PARM01  LINEAR  equation of state form 
tRef  PARM01  20.0 ^{o}C (ocn) or 300.0 K (atm)  1D vertical reference temperature profile (^{o}C or K) 
tRefFile  PARM01  ' '  filename for reference temperature profile (^{o}C or K) 
thetaConst  PARM01  tRef(k=1)  vertically constant reference temp. for atmosphere \(p^*\) coordinates (K); for ocean, specify instead of tRef or tRefFile for vertically constant reference temp. (^{o}C) 
sRef  PARM01  30.0 psu (ocn) or 0.0 (atm)  1D vertical reference salinity profile (psu or g/kg) 
sRefFile  PARM01  ' '  filename for reference salinity profile (psu or g/kg) 
selectP_inEOS_Zc  PARM01  depends on eosType  select which pressure to use in EOS for \(z\)coor.
for 
rhoNil  PARM01  9.998E+02  reference density for linear EOS (kg/m^{3}) 
tAlpha  PARM01  2.0E-04  linear EOS thermal expansion coefficient (1/^{o}C) 
sBeta  PARM01  7.4E-04  linear EOS haline contraction coefficient (1/psu) 
Thermodynamic Constants
Parameter  Group  Default  Description 

HeatCapacity_Cp  PARM01  3.994E+03  specific heat capacity C_{p} (ocean) (J/kg/K) 
celsius2K  PARM01  2.7315E+02  conversion constant ^{o}C to Kelvin 
atm_Cp  PARM01  1.004E+03  specific heat capacity C_{p} dry air at const. press. (J/kg/K) 
atm_Rd  PARM01  atm_Cp*(2/7)  gas constant for dry air (J/kg/K) 
atm_Rq  PARM01  0.0  water vapor specific volume anomaly relative to dry air (g/kg) 
atm_Po  PARM01  1.0E+05  atmosphere standard reference pressure (for potential temp. defn.) (Pa) 
Parameters: Momentum Equations
Configuration
There are a few logical variables that allow you to turn on/off various
terms in the momentum equation. These variables are called
momViscosity, momAdvection, useCoriolis,
momStepping, metricTerms, and momPressureForcing, and by default they are
set to .TRUE. Vertical diffusive fluxes of momentum can be computed implicitly
by setting the logical variable implicitViscosity to
.TRUE. The details relevant to both the momentum flux-form and the vector-invariant form of the
equations and the various (momentum) advection schemes are covered in Section 2.
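A sketch selecting the vector-invariant form with implicit vertical viscosity (a common pairing, shown here purely as an example):

```fortran
# (illustrative) vector-invariant momentum equations
 &PARM01
  vectorInvariantMomentum=.TRUE.,
  implicitViscosity=.TRUE.,
 &
```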
Parameter  Group  Default  Description 

momStepping  PARM01  TRUE  momentum equation timestepping on/off flag 
momViscosity  PARM01  TRUE  momentum friction terms on/off flag 
momAdvection  PARM01  TRUE  advection of momentum on/off flag 
momPressureForcing  PARM01  TRUE  pressure term in momentum equation on/off flag 
metricTerms  PARM01  TRUE  include metric terms (spherical polar, momentum fluxform) on/off flag 
useNHMTerms  PARM01  FALSE  use 'non-hydrostatic form' of metric terms on/off flag; (see Section 2.14.4; note these terms are nonzero in many model configurations besides non-hydrostatic) 
momImplVertAdv  PARM01  FALSE  momentum implicit vertical advection on/off flag; requires #define INCLUDE_IMPLVERTADV_CODE 
implicitViscosity  PARM01  FALSE  implicit vertical viscosity on/off flag 
interViscAr_pCell  PARM04  FALSE  account for partialcell in interior vertical viscosity on/off flag 
momDissip_In_AB  PARM03  TRUE  use AdamsBashforth time stepping for dissipation tendency 
useCoriolis  PARM01  TRUE  include Coriolis terms on/off flag 
use3dCoriolis  PARM01  TRUE  include \(\cos{\varphi}\) Coriolis terms on/off flag 
selectCoriScheme  PARM01  0  Coriolis scheme selector

vectorInvariantMomentum  PARM01  FALSE  use vectorinvariant form of momentum equations flag 
useJamartMomAdv  PARM01  FALSE  use Jamart wetpoints method for relative vorticity advection (vector invariant form) on/off flag 
selectVortScheme  PARM01  1  vorticity scheme (vector invariant form) options
see Sadourny 1975 [Sad75] and Burridge & Haseler 1977 [BH77] 
upwindVorticity  PARM01  FALSE  bias interpolation of vorticity in the Coriolis term (vector invariant form) on/off flag 
useAbsVorticity  PARM01  FALSE  use \(f + \zeta\) in Coriolis terms (vector invariant form) on/off flag 
highOrderVorticity  PARM01  FALSE  use 3rd/4th order interpolation of vorticity (vector invariant form) on/off flag 
upwindShear  PARM01  FALSE  use 1st order upwind for vertical advection (vector invariant form) on/off flag 
selectKEscheme  PARM01  0  kinetic energy computation in Bernoulli function (vector invariant form) options
see mom_calc_ke.F 
Initialization
The initial horizontal velocity components can be specified from binary files uVelInitFile and vVelInitFile. These files should contain 3D data ordered in an (\(x,y,r\)) fashion with k=1 as the first vertical level (surface level). If no file names are provided, the velocity is initialized to zero. The initial vertical velocity is always derived from the horizontal velocity using the continuity equation. In the case of a restart (from the end of a previous simulation), the velocity field is read from a pickup file (see Section 3.8.7) and the initial velocity files are ignored.
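A sketch of the corresponding PARM05 entries (all filenames here are hypothetical examples):

```fortran
# (illustrative) initial-condition files
 &PARM05
  uVelInitFile='uVel.init.bin',
  vVelInitFile='vVel.init.bin',
  pSurfInitFile='Eta.init.bin',
 &
```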
Parameter  Group  Default  Description 

uVelInitFile  PARM05  ' '  filename for 3D specification of initial zonal velocity field (m/s) 
vVelInitFile  PARM05  ' '  filename for 3D specification of initial meridional velocity field (m/s) 
pSurfInitFile  PARM05  ' '  filename for 2D specification of initial free surface position ([\(r\)] unit) 
General Dissipation Scheme
The lateral eddy viscosity coefficient is specified through the variable viscAh (in m^{2}s^{-1}). The vertical eddy viscosity coefficient is specified through the variable viscAr (in [\(r\)]^{2}s^{-1}, where [\(r\)] is the dimension of the vertical coordinate). In addition, biharmonic mixing can be added as well through the variable viscA4 (in m^{4}s^{-1}).
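A sketch combining all three coefficients (the magnitudes are illustrative placeholders; appropriate values depend strongly on grid resolution):

```fortran
# (illustrative) harmonic + biharmonic lateral and vertical viscosity
 &PARM01
  viscAh=4.E2,
  viscA4=1.E11,
  viscAr=1.E-3,
 &
```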
Parameter  Group  Default  Description 

viscAh  PARM01  0.0  lateral eddy viscosity (m^{2}/s) 
viscAhD  PARM01  viscAh  lateral eddy viscosity acts on divergence part (m^{2}/s) 
viscAhZ  PARM01  viscAh  lateral eddy viscosity acts on vorticity part (\(\zeta\) points) (m^{2}/s) 
viscAhW  PARM01  viscAhD  lateral eddy viscosity for mixing vertical momentum (non-hydrostatic form) (m^{2}/s); for non-hydrostatic only, see parameter nonHydrostatic 
viscAhDfile  PARM05  ' '  filename for 3D specification of lateral eddy viscosity (divergence part) (m^{2}/s); requires #define ALLOW_3D_VISCAH in pkg/mom_common/MOM_COMMON_OPTIONS.h 
viscAhZfile  PARM05  ' '  filename for 3D specification of lateral eddy viscosity (vorticity part, \(\zeta\) points); requires #define ALLOW_3D_VISCAH in pkg/mom_common/MOM_COMMON_OPTIONS.h 
viscAhGrid  PARM01  0.0  griddependent lateral eddy viscosity (nondim.) 
viscAhMax  PARM01  1.0E+21  maximum lateral eddy viscosity (m^{2}/s) 
viscAhGridMax  PARM01  1.0E+21  maximum lateral eddy (griddependent) viscosity (nondim.) 
viscAhGridMin  PARM01  0.0  minimum lateral eddy (griddependent) viscosity (nondim.) 
viscAhReMax  PARM01  0.0  minimum lateral eddy viscosity based on Reynolds number (nondim.) 
viscC2leith  PARM01  0.0  Leith harmonic viscosity factor (vorticity part, \(\zeta\) points) (nondim.) 
viscC2leithD  PARM01  0.0  Leith harmonic viscosity factor (divergence part) (nondim.) 
viscC2LeithQG  PARM01  0.0  quasi-geostrophic Leith viscosity factor (nondim.) 
viscC2smag  PARM01  0.0  Smagorinsky harmonic viscosity factor (nondim.) 
viscA4  PARM01  0.0  lateral biharmonic viscosity (m^{4}/s) 
viscA4D  PARM01  viscA4  lateral biharmonic viscosity (divergence part) (m^{4}/s) 
viscA4Z  PARM01  viscA4  lateral biharmonic viscosity (vorticity part, \(\zeta\) points) (m^{4}/s) 
viscA4W  PARM01  viscA4D  lateral biharmonic viscosity for mixing vertical momentum (non-hydrostatic form) (m^{4}/s); for non-hydrostatic only, see parameter nonHydrostatic 
viscA4Dfile  PARM05  ' '  filename for 3D specification of lateral biharmonic viscosity (divergence part) (m^{4}/s); requires #define ALLOW_3D_VISCA4 in pkg/mom_common/MOM_COMMON_OPTIONS.h 
viscA4Zfile  PARM05  ' '  filename for 3D specification of lateral biharmonic viscosity (vorticity part, \(\zeta\) points); requires #define ALLOW_3D_VISCA4 in pkg/mom_common/MOM_COMMON_OPTIONS.h 
viscA4Grid  PARM01  0.0  grid dependent biharmonic viscosity (nondim.) 
viscA4Max  PARM01  1.0E+21  maximum biharmonic viscosity (m^{4}/s) 
viscA4GridMax  PARM01  1.0E+21  maximum biharmonic (griddependent) viscosity (nondim.) 
viscA4GridMin  PARM01  0.0  minimum biharmonic (grid-dependent) viscosity (nondim.) 
viscA4ReMax  PARM01  0.0  minimum biharmonic viscosity based on Reynolds number (nondim.) 
viscC4leith  PARM01  0.0  Leith biharmonic viscosity factor (vorticity part, \(\zeta\) points) (nondim.) 
viscC4leithD  PARM01  0.0  Leith biharmonic viscosity factor (divergence part) (nondim.) 
viscC4smag  PARM01  0.0  Smagorinsky biharmonic viscosity factor (nondim.) 
useFullLeith  PARM01  FALSE  use full form of Leith viscosities on/off flag 
useSmag3D  PARM01  FALSE  use isotropic 3D Smagorinsky harmonic viscosities flag; requires #define ALLOW_SMAG_3D in pkg/mom_common/MOM_COMMON_OPTIONS.h 
smag3D_coeff  PARM01  1.0E-02  isotropic 3D Smagorinsky coefficient (nondim.); requires #define ALLOW_SMAG_3D in pkg/mom_common/MOM_COMMON_OPTIONS.h 
useStrainTensionVisc  PARM01  FALSE  flag to use straintension form of viscous operator 
useAreaViscLength  PARM01  FALSE  flag to use area for viscous \(L^2\) instead of harmonic mean of \({L_x}^2, {L_y}^2\) 
viscAr  PARM01  0.0  vertical eddy viscosity ([\(r\)]^{2}/s) 
viscArNr  PARM01  0.0  vertical profile of vertical eddy viscosity ([\(r\)]^{2}/s) 
pCellMix_viscAr  PARM04  viscArNr  vertical viscosity for too thin partialcell ([\(r\)]^{2}/s) 
Sidewall/Bottom Dissipation
Slip or no-slip conditions at lateral and bottom
boundaries are specified through the logical variables
no_slip_sides and no_slip_bottom. If set to
.FALSE., free-slip boundary conditions are applied. If no-slip
boundary conditions are applied at the bottom, a bottom drag can be
applied as well. Two forms are available: linear (set the variable
bottomDragLinear, in [\(r\)]/s)
and quadratic (set the variable
bottomDragQuadratic, [\(r\)]/m).
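A sketch of a no-slip bottom with quadratic drag (the drag coefficient is an illustrative value, not a recommendation):

```fortran
# (illustrative) quadratic bottom drag
 &PARM01
  no_slip_bottom=.TRUE.,
  bottomDragQuadratic=2.5E-3,
 &
```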
Parameter  Group  Default  Description 

no_slip_sides  PARM01  TRUE  viscous BCs: no-slip sides on/off flag 
sideDragFactor  PARM01  2.0E+00  side-drag scaling factor (2.0: full drag) (nondim.) 
no_slip_bottom  PARM01  TRUE  viscous BCs: no-slip bottom on/off flag 
bottomDragLinear  PARM01  0.0  linear bottomdrag coefficient ([\(r\)]/s) 
bottomDragQuadratic  PARM01  0.0  quadratic bottomdrag coefficient ([\(r\)]/m) 
selectBotDragQuadr  PARM01  1  select quadratic bottom drag discretization option
if bottomDragQuadratic \(\neq 0.\) then default is 0 
selectImplicitDrag  PARM01  0  top/bottom drag implicit treatment options
if =2, requires #define ALLOW_SOLVE4_PS_AND_DRAG 
bottomVisc_pCell  PARM01  FALSE  account for partialcell in bottom viscosity (using no_slip_bottom = .TRUE. ) on/off flag 
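As a sketch, a run with free-slip sidewalls and a no-slip bottom plus quadratic bottom drag could be selected in data with settings like the following (the drag coefficient value here is purely illustrative, not a recommendation):

```fortran
 &PARM01
# free-slip lateral walls; no-slip bottom with quadratic drag
 no_slip_sides=.FALSE.,
 no_slip_bottom=.TRUE.,
 bottomDragQuadratic=2.1E-3,
 &
```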
Parameters: Tracer Equations
This section covers the tracer equations, i.e., the potential temperature equation and the salinity (for the ocean) or specific humidity (for the atmosphere) equation.
Configuration
The logical variables tempAdvection and tempStepping allow you to turn on/off terms in the temperature equation (similarly for salinity or specific humidity with variables saltAdvection, etc.). These variables all default to a value of .TRUE.. The vertical diffusive fluxes can be computed implicitly by setting the logical variable implicitDiffusion to .TRUE..
Parameter  Group  Default  Description 

tempStepping  PARM01  TRUE  temperature equation timestepping on/off flag 
tempAdvection  PARM01  TRUE  advection of temperature on/off flag 
tempAdvScheme  PARM01  2  temperature horizontal advection scheme selector (see Table 2.2) 
tempVertAdvScheme  PARM01  tempAdvScheme  temperature vertical advection scheme selector (see Table 2.2) 
tempImplVertAdv  PARM01  FALSE  temperature implicit vertical advection on/off flag 
addFrictionHeating  PARM01  FALSE  include frictional heating in temperature equation on/off flag; requires #define ALLOW_FRICTION_HEATING 
temp_stayPositive  PARM01  FALSE  use Smolarkiewicz hack to ensure temperature stays positive on/off flag; requires #define GAD_SMOLARKIEWICZ_HACK in pkg/generic_advdiff/GAD_OPTIONS.h 
saltStepping  PARM01  TRUE  salinity equation timestepping on/off flag 
saltAdvection  PARM01  TRUE  advection of salinity on/off flag 
saltAdvScheme  PARM01  2  salinity horizontal advection scheme selector (see Table 2.2) 
saltVertAdvScheme  PARM01  saltAdvScheme  salinity vertical advection scheme selector (see Table 2.2) 
saltImplVertAdv  PARM01  FALSE  salinity implicit vertical advection on/off flag 
salt_stayPositive  PARM01  FALSE  use Smolarkiewicz hack to ensure salinity stays positive on/off flag; requires #define GAD_SMOLARKIEWICZ_HACK in pkg/generic_advdiff/GAD_OPTIONS.h 
implicitDiffusion  PARM01  FALSE  implicit vertical diffusion on/off flag 
interDiffKr_pCell  PARM04  FALSE  account for partial-cell in interior vertical diffusion on/off flag
linFSConserveTr  PARM01  TRUE  correct source/sink of tracer due to use of linear free surface on/off flag 
doAB_onGtGs  PARM03  TRUE  apply Adams-Bashforth on tendencies (rather than on T,S) on/off flag
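As an illustration, a data excerpt that selects a non-default advection scheme for both tracers and enables implicit vertical diffusion might look as follows (the scheme number and the decision to treat diffusion implicitly are hypothetical choices; consult Table 2.2 for the scheme selectors):

```fortran
 &PARM01
# scheme 33 is one of the selectors listed in Table 2.2
 tempAdvScheme=33,
 saltAdvScheme=33,
 implicitDiffusion=.TRUE.,
 &
```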
Initialization
The initial tracer data can be contained in the binary files hydrogThetaFile and hydrogSaltFile. These files should contain 3D data ordered in an (\(x,y,r\)) fashion with k=1 as the first vertical level. If no file names are provided, the tracers are then initialized with the values of tRef and sRef discussed in Section 3.8.3. In this case, the initial tracer data are uniform in \(x\) and \(y\) for each depth level.
Parameter  Group  Default  Description 

hydrogThetaFile  PARM05  ' '  filename for 3D specification of initial potential temperature (^{o}C) 
hydrogSaltFile  PARM05  ' '  filename for 3D specification of initial salinity (psu or g/kg) 
maskIniTemp  PARM05  TRUE  apply (center-point) mask to initial hydrographic theta data on/off flag
maskIniSalt  PARM05  TRUE  apply (center-point) mask to initial hydrographic salinity on/off flag
checkIniTemp  PARM05  TRUE  check if initial theta (at wet points) is identically zero on/off flag
checkIniSalt  PARM05  TRUE  check if initial salinity (at wet points) is identically zero on/off flag
Tracer Diffusivities
Lateral eddy diffusivities for temperature and salinity/specific humidity are specified through the variables diffKhT and diffKhS (in m^{2}/s). Vertical eddy diffusivities are specified through the variables diffKrT and diffKrS. In addition, biharmonic diffusivities can be specified through the coefficients diffK4T and diffK4S (in m^{4}/s). The Gent and McWilliams parameterization for advection and mixing of oceanic tracers is described in Section 8.4.1.
Parameter  Group  Default  Description 

diffKhT  PARM01  0.0  Laplacian diffusivity of heat laterally (m^{2}/s) 
diffK4T  PARM01  0.0  biharmonic diffusivity of heat laterally (m^{4}/s) 
diffKrT  PARM01  0.0  Laplacian diffusivity of heat vertically (m^{2}/s) 
diffKr4T  PARM01  0.0  biharmonic diffusivity of heat vertically (m^{4}/s)
diffKrNrT  PARM01  0.0 at k=top  vertical profile of vertical diffusivity of temperature (m^{2}/s) 
pCellMix_diffKr  PARM04  diffKrNr  vertical diffusivity for too-thin partial-cell ([r]^{2}/s)
diffKhS  PARM01  0.0  Laplacian diffusivity of salt laterally (m^{2}/s) 
diffK4S  PARM01  0.0  biharmonic diffusivity of salt laterally (m^{4}/s) 
diffKrS  PARM01  0.0  Laplacian diffusivity of salt vertically (m^{2}/s) 
diffKr4S  PARM01  0.0  biharmonic diffusivity of salt vertically (m^{4}/s)
diffKrNrS  PARM01  0.0 at k=top  vertical profile of vertical diffusivity of salt (m^{2}/s) 
diffKrFile  PARM05  ' '  filename for 3D specification of vertical diffusivity (m^{2}/s); requires #define ALLOW_3D_DIFFKR 
diffKrBL79surf  PARM01  0.0  surface diffusivity for Bryan & Lewis 1979 [BL79] (m^{2}/s) 
diffKrBL79deep  PARM01  0.0  deep diffusivity for Bryan & Lewis 1979 [BL79] (m^{2}/s) 
diffKrBL79scl  PARM01  2.0E+02  depth scale for Bryan & Lewis 1979 [BL79] (m) 
diffKrBL79Ho  PARM01  -2.0E+03  turning depth for Bryan & Lewis 1979 [BL79] (m)
diffKrBLEQsurf  PARM01  0.0  same as diffKrBL79surf but at equator; requires #define ALLOW_BL79_LAT_VARY 
diffKrBLEQdeep  PARM01  0.0  same as diffKrBL79deep but at equator; requires #define ALLOW_BL79_LAT_VARY 
diffKrBLEQscl  PARM01  2.0E+02  same as diffKrBL79scl but at equator; requires #define ALLOW_BL79_LAT_VARY 
diffKrBLEQHo  PARM01  2.0E+03  same as diffKrBL79Ho but at equator; requires #define ALLOW_BL79_LAT_VARY 
BL79LatVary  PARM01  3.0E+01  transition from diffKrBLEQ to diffKrBL79 parms at this latitude; requires #define ALLOW_BL79_LAT_VARY 
Ocean Convection
In addition to specific packages that parameterize ocean convection, two main model options are available. To use the first option, a convective adjustment scheme, you need to set the variable cadjFreq, the frequency (in seconds) with which the adjustment algorithm is called, to a non-zero value (note that if cadjFreq is set to a negative value, the model will set it to the model clock time step). The second option is to parameterize convection with implicit vertical diffusion. To do this, set the logical variable implicitDiffusion to .TRUE. and the real variable ivdc_kappa (in m^{2}/s) to a tracer vertical diffusivity appropriate for mixing due to static instabilities (typically, several orders of magnitude above the background vertical diffusivity). Note that cadjFreq and ivdc_kappa cannot both be non-zero.
Parameter  Group  Default  Description 

ivdc_kappa  PARM01  0.0  implicit vertical diffusivity for convection (m^{2}/s) 
cAdjFreq  PARM03  0  frequency of convective adj. scheme; <0: sets value to deltaTclock (s) 
hMixCriteria  PARM01  -0.8E+00

hMixSmooth  PARM01  0.0  use this fraction of neighboring points (for smoothing) in ML calculation (0-1; 0: no smoothing)
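A minimal sketch of the implicit-diffusion option in data, with an illustrative (not prescribed) value of ivdc_kappa several orders of magnitude above a typical background diffusivity:

```fortran
 &PARM01
# implicit-diffusion convection: large kappa applied where statically unstable
 implicitDiffusion=.TRUE.,
 ivdc_kappa=10.,
 &
 &PARM03
# leave cadjFreq at 0: it cannot be non-zero together with ivdc_kappa
 cadjFreq=0.,
 &
```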
Parameters: Model Forcing
The forcing options that can be prescribed through run-time parameters in data are easy to use but somewhat limited in scope. More complex forcing setups are possible with optional packages such as pkg/exf or pkg/rbcs, in which case most or all of the parameters in this section can simply be left at their default values.
Momentum Forcing
This section only applies to the ocean. You need to generate wind-stress data in two files, zonalWindFile and meridWindFile, corresponding to the zonal and meridional components of the wind stress, respectively (if you want the stress to be along the direction of only one of the model horizontal axes, you only need to generate one file). The format of the files is similar to the bathymetry file. The zonal (meridional) stress data are assumed to be in pascals and located at U-points (V-points). See the MATLAB program gendata.m in the input directories of verification for several tutorial examples (e.g., gendata.m in the barotropic gyre tutorial) showing how simple analytical wind forcing data are generated for the case study experiments.
Parameter  Group  Default  Description 

momForcing  PARM01  TRUE  include external forcing of momentum on/off flag
zonalWindFile  PARM05  ' '  filename for 2D specification of zonal component of wind forcing (N/m^{2}) 
meridWindFile  PARM05  ' '  filename for 2D specification of meridional component of wind forcing (N/m^{2}) 
momForcingOutAB  PARM03  0  1: take momentum forcing out of Adams-Bashforth time stepping
momTidalForcing  PARM01  TRUE  tidal forcing of momentum equation on/off flag (requires tidal forcing files) 
ploadFile  PARM05  ' '  filename for 2D specification of atmospheric pressure loading (ocean \(z\)coor. only) (Pa) 
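A flat-binary wind-stress file like those read via zonalWindFile can be generated in a few lines. This sketch assumes 32-bit big-endian IEEE input files (the readBinaryPrec=32 default; byte order can depend on how the model was built), a hypothetical 62 x 62 grid of 20 km cells, and a sinusoidal profile chosen for illustration; it only loosely mirrors what gendata.m does:

```python
import math
import struct

# Hypothetical grid: 62 x 62 cells of 20 km; stress in Pa at U-points,
# varying sinusoidally in y and constant in x (illustrative profile).
nx, ny, dy = 62, 62, 20.0e3
Ly = ny * dy
tau0 = 0.1  # stress amplitude (N/m^2)

values = []
for j in range(ny):                  # y varies slowest in (x,y) ordering
    y = (j + 0.5) * dy               # cell-center coordinate (assumption)
    values.extend([-tau0 * math.cos(math.pi * y / Ly)] * nx)

# Assumption: plain big-endian IEEE 32-bit reals, no header
with open("windx_cosy.bin", "wb") as fh:
    fh.write(struct.pack(">%df" % (nx * ny), *values))
```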
Tracer Forcing
A combination of flux data and relaxation terms can be used for driving the tracer equations. For potential temperature, heat flux data (in W/m^{2}) can be stored in the 2D binary file surfQnetFile. Alternatively, or in addition, the forcing can be specified through a relaxation term. The SST data toward which the model surface temperature is restored are stored in the 2D binary file thetaClimFile. The corresponding relaxation time scale is set through the variable tauThetaClimRelax (in seconds). The same procedure applies for salinity, with the variable names EmPmRFile, saltClimFile, and tauSaltClimRelax for the freshwater flux (in m/s) and surface salinity (in psu or g/kg) data files and the relaxation time scale (in seconds), respectively.
Parameter  Group  Default  Description 

tempForcing  PARM01  TRUE  external forcing of temperature on/off flag
surfQnetFile  PARM05  ' '  filename for 2D specification of net total heat flux (W/m^{2}) 
surfQswFile  PARM05  ' '  filename for 2D specification of net shortwave flux (W/m^{2}); requires #define SHORTWAVE_HEATING 
tauThetaClimRelax  PARM03  0.0  temperature (surface) relaxation time scale (s) 
lambdaThetaFile  PARM05  ' '  filename for 2D specification of inverse temperature (surface) relaxation time scale (1/s) 
thetaClimFile  PARM05  ' '  filename for specification of (surface) temperature relaxation values (^{o}C)
balanceThetaClimRelax  PARM01  FALSE  subtract global mean heat flux due to temp. relaxation flux every time step on/off flag; requires #define ALLOW_BALANCE_RELAX 
balanceQnet  PARM01  FALSE  subtract global mean Qnet every time step on/off flag; requires #define ALLOW_BALANCE_FLUXES 
geothermalFile  PARM05  ' '  filename for 2D specification of geothermal heating flux through bottom (W/m^{2}); requires #define ALLOW_GEOTHERMAL_FLUX 
temp_EvPrRn  PARM01  UNSET  temperature of rain and evaporated water (unset, use local temp.) (^{o}C) 
allowFreezing  PARM01  FALSE  limit (ocean) temperature at surface to >= -1.9^{o}C
saltForcing  PARM01  TRUE  external forcing of salinity on/off flag
convertFW2Salt  PARM01  3.5E+01  salinity used to convert freshwater flux to salt flux (-1: use local S) (psu or g/kg)
(note default is -1 if useRealFreshWaterFlux = .TRUE. )
rhoConstFresh  PARM01  rhoConst  constant reference density for fresh water (rain) (kg/m^{3}) 
EmPmRFile  PARM05  ' '  filename for 2D specification of net freshwater flux (m/s) 
saltFluxFile  PARM05  ' '  filename for 2D specification of salt flux (from seaice) (psu.kg/m^{2}/s) 
tauSaltClimRelax  PARM03  0.0  salinity (surface) relaxation time scale (s) 
lambdaSaltFile  PARM05  ' '  filename for 2D specification of inverse salinity (surface) relaxation time scale (1/s) 
saltClimFile  PARM05  ' '  filename for specification of (surface) salinity relaxation values (psu or g/kg) 
balanceSaltClimRelax  PARM01  FALSE  subtract global mean flux due to salt relaxation every time step on/off flag 
balanceEmPmR  PARM01  FALSE  subtract global mean EmPmR every time step on/off flag; requires #define ALLOW_BALANCE_FLUXES 
salt_EvPrRn  PARM01  0.0  salinity of rain and evaporated water (psu or g/kg) 
selectAddFluid  PARM01  0  add fluid to ocean interior options (-1, 0: off, or 1); requires #define ALLOW_ADDFLUID
temp_addMass  PARM01  temp_EvPrRn  temp. of added or removed (interior) water (^{o}C); requires #define ALLOW_ADDFLUID 
salt_addMass  PARM01  salt_EvPrRn  salinity of added or removed (interior) water (psu or g/kg); requires #define ALLOW_ADDFLUID
addMassFile  PARM05  ' '  filename for 3D specification of mass source/sink (+=source, kg/s); requires #define ALLOW_ADDFLUID 
balancePrintMean  PARM01  FALSE  print subtracted balancing means to STDOUT on/off flag; requires #define ALLOW_BALANCE_FLUXES and/or #define ALLOW_BALANCE_RELAX 
latBandClimRelax  PARM03  whole domain  relaxation to (T,S) climatology is applied only equatorward of this latitude
tracForcingOutAB  PARM03  0  1: take T, S, and pTracer forcing out of Adams-Bashforth time stepping
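The relaxation term described above nudges the surface temperature toward the thetaClimFile value with e-folding time tauThetaClimRelax. A minimal sketch of that tendency, with illustrative numbers (the 30-day time scale and SST values are not from the source):

```python
# Restoring tendency: d(theta)/dt = -(theta - theta_clim) / tau
tau = 2592000.0                 # 30-day relaxation time scale (s), illustrative
theta, theta_clim = 20.0, 18.0  # model SST and climatological SST (deg C)

tendency = -(theta - theta_clim) / tau  # deg C per second, pulls theta down
theta_next = theta + 1200.0 * tendency  # effect over one 1200 s time step
```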
Periodic Forcing
To prescribe time-dependent periodic forcing, concatenate successive time records into a single file ordered in an (\(x,y\),time) fashion and set the following variables: periodicExternalForcing to .TRUE., externForcingPeriod to the period (in seconds) between two records in the input files (e.g., 1 month), and externForcingCycle to the repeat time (in seconds) of the forcing (e.g., 1 year; note externForcingCycle must be a multiple of externForcingPeriod). With these variables specified, the model will interpolate the forcing linearly in time at each iteration.
Parameter  Group  Default  Description 

periodicExternalForcing  PARM03  FALSE  allow timedependent periodic forcing on/off flag 
externForcingPeriod  PARM03  0.0  period over which forcing varies (e.g. monthly) (s) 
externForcingCycle  PARM03  0.0  period over which the forcing cycle repeats (e.g. one year) (s) 
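The linear interpolation between periodic records can be sketched as follows. The record-timing convention used here (record k, 0-based, valid at time k*externForcingPeriod) is an assumption for illustration; the model's actual convention may differ in detail:

```python
# Pick the two records bracketing model time t and their linear weights.
def interp_weights(t, period, cycle):
    nrec = int(round(cycle / period))  # cycle must be a multiple of period
    tc = t % cycle                     # wrap model time into the repeat cycle
    k0 = int(tc // period)             # record at or before t
    k1 = (k0 + 1) % nrec               # following record (wraps to start)
    w1 = (tc - k0 * period) / period   # linear weight toward record k1
    return k0, k1, 1.0 - w1, w1

# halfway between records 1 and 2 of a monthly, 360-day cycle
k0, k1, w0, w1 = interp_weights(45.0 * 86400.0, 30.0 * 86400.0, 360.0 * 86400.0)
```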
Parameters: Simulation Controls
Run Start and Duration
The beginning of a simulation is set by specifying a start time (in seconds) through the real variable startTime or by specifying an initial iteration number through the integer variable nIter0. If these variables are set to non-zero values, the model will look for a "pickup" file (by default, pickup.0000nIter0) to restart the integration. The end of a simulation is set through the real variable endTime (in seconds). Alternatively, one can instead specify the number of time steps to execute through the integer variable nTimeSteps. Iterations are referenced to deltaTClock, i.e., each iteration advances the model clock by deltaTClock seconds.
Parameter  Group  Default  Description 

nIter0  PARM03  0  starting timestep iteration number for this integration 
nTimeSteps  PARM03  0  number of (model clock) timesteps to execute 
nEndIter  PARM03  0  run ending timestep iteration number (alternate way to prescribe nTimeSteps) 
startTime  PARM03  baseTime  run start time for this integration (s) (alternate way to prescribe nIter0) 
endTime  PARM03  0.0  run ending time (s) (with startTime, alternate way to prescribe nTimeSteps) 
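The bookkeeping between these equivalent ways of specifying a run can be sketched in a few lines; the relations follow the text above, and the 10-digit pickup suffix (e.g. pickup.0000000072) is the usual convention:

```python
# Convert between model seconds and iteration counts via deltaTClock.
deltaTClock = 1200.0
startTime = 86400.0                               # restart one day in
nIter0 = int(round(startTime / deltaTClock))      # equivalent iteration number
endTime = 172800.0                                # run until day two
nTimeSteps = int(round((endTime - startTime) / deltaTClock))
pickup_file = "pickup.%010d" % nIter0             # file the model would read
```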
Input/Output Files
The precision with which to read binary data is controlled by the integer variable readBinaryPrec, which can take the value 32 (single precision) or 64 (double precision). Similarly, the precision with which to write binary data is controlled by the integer variable writeBinaryPrec. By default, MITgcm writes output (snapshots, diagnostics, and pickups) separately for individual tiles, leaving it to the user to reassemble these into global files if needed (scripts are available in utils/). There are, however, two options to have the model do this for you. Setting globalFiles to .TRUE. should always work in a single-process setup (including multi-threaded processes), but for MPI runs this will depend on the platform; it requires simultaneous write access to a common file (permissible in typical Lustre setups, but not on all file systems). Alternatively, one can set useSingleCpuIO to .TRUE. to generate global files, which should always work, but requires additional MPI passing of data and may result in slower execution.
Parameter  Group  Default  Description 

globalFiles  PARM01  FALSE  write output "global" (i.e. not per tile) files on/off flag
useSingleCpuIO  PARM01  FALSE  only master MPI process does I/O (producing global output files) 
the_run_name  PARM05  ' '  string identifying the name of the model "run" for meta files
readBinaryPrec  PARM01  32  precision used for reading binary files (32 or 64) 
writeBinaryPrec  PARM01  32  precision used for writing binary files (32 or 64) 
outputTypesInclusive  PARM03  FALSE  allows writing of output files in multiple formats (i.e. pkg/mdsio and pkg/mnc) 
rwSuffixType  PARM03  0  controls the format of the pkg/mdsio binary file "suffix"
where myTime is model time in seconds
mdsioLocalDir  PARM05  ' '  if not blank, readwrite output tiled files from/to this directory name (+fourdigit processorrank code) 
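The 32/64 distinction corresponds to 4-byte vs 8-byte IEEE reals on disk. This sketch assumes big-endian byte order (which depends on how the model was compiled) and test values chosen to be exactly representable in single precision:

```python
import struct

field = [1.5, -2.25, 0.0, 3.75]   # exactly representable in float32

blob32 = struct.pack(">%df" % len(field), *field)  # writeBinaryPrec=32
blob64 = struct.pack(">%dd" % len(field), *field)  # writeBinaryPrec=64
back = list(struct.unpack(">%df" % len(field), blob32))  # readBinaryPrec=32
```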
Frequency/Amount of Output
The frequency (in seconds) with which output is written to disk needs to be specified. dumpFreq controls the frequency with which the instantaneous state of the model is written. monitorFreq controls the frequency with which monitor output is dumped to the standard output file(s). The frequency of output is referenced to deltaTClock.
Parameter  Group  Default  Description 

dumpFreq  PARM03  0.0  interval to write model state/snapshot data (s) 
dumpInitAndLast  PARM03  TRUE  write out initial and last iteration model state on/off flag 
diagFreq  PARM03  0.0  interval to write additional intermediate (debugging cg2d/3d) output (s) 
monitorFreq  PARM03  lowest of other output *Freq parms  interval to write monitor output (s) 
monitorSelect  PARM03  2 (3 if fluid is water)  select group of monitor variables to output

debugLevel  PARM01  depends on debugMode  level of printing of MITgcm activity messages/statistics (1-5, higher -> more activity messages)
plotLevel  PARM01  debugLevel  controls printing of field maps (1-5, higher -> more fields)
Restart/Pickup Files
chkPtFreq and pChkPtFreq control the output frequency of rolling and permanent pickup (a.k.a. checkpoint) files, respectively. These frequencies are referenced to deltaTClock.
Parameter  Group  Default  Description 

pChkPtFreq  PARM03  0.0  permanent restart/pickup checkpoint file write interval ( s ) 
chkPtFreq  PARM03  0.0  rolling restart/pickup checkpoint file write interval ( s ) 
pickupSuff  PARM03  ' '  force run to use pickups (even if nIter0 =0) and read files with this suffix (10 char. max) 
pickupStrictlyMatch  PARM03  TRUE  force pickup (meta) file formats to exactly match (or terminate with error) on/off flag 
writePickupAtEnd  PARM03  FALSE  write a (rolling) pickup file at run completion on/off flag 
usePickupBeforeC54  PARM01  FALSE  initialize run using old pickup format from code prior to checkpoint54a 
startFromPickupAB2  PARM03  FALSE  using Adams-Bashforth-3, start using Adams-Bashforth-2 pickup format; requires #define ALLOW_ADAMSBASHFORTH_3
Parameters Used In Optional Packages
Some optional packages were not written with package-specific namelist parameters in a data.${pkg} file, or, for historical and/or other reasons, several package-specific namelist parameters remain in data.
CD Scheme
(package pkg/cd_code)
If you run at a sufficiently coarse resolution, you might choose to enable the CD scheme for the computation of the Coriolis terms. The variable tauCD, which represents the CD scheme coupling timescale (in seconds), needs to be set.
Parameter  Group  Default  Description 

useCDscheme  PARM01  FALSE  use CD scheme for Coriolis terms on/off flag 
tauCD  PARM03  deltaTMom  CD scheme coupling timescale (s) 
rCD  PARM03  1 - deltaTMom/tauCD  CD scheme normalized coupling parameter (nondim.)
epsAB_CD  PARM03  abEps  Adams-Bashforth-2 stabilizing weight used in CD scheme
Automatic Differentiation
(package pkg/autodiff; see Section 7)
Parameter  Group  Default  Description 

nTimeSteps_l2  PARM03  4  number of inner timesteps to execute per timestep 
adjdumpFreq  PARM03  0.0  interval to write model state/snapshot data adjoint run (s) 
adjMonitorFreq  PARM03  0.0  interval to write monitor output adjoint run (s) 
adTapeDir  PARM05  ' '  if not blank, readwrite checkpointing files from/to this directory name 
Execution Environment Parameters
If running multi-threaded (i.e., using shared memory/OpenMP), you will need to set nTx and/or nTy so that nTx*nTy is the total number of threads (per process).
The parameter useCubedSphereExchange needs to be changed to .TRUE. if you are using any type of grid composed of interconnected individual faces, including the cubed sphere topology or a lat-lon cap grid. See (needs section to be written).
Note that setting the flag debugMode to .TRUE. activates a separate set of debugging print statements from the parameter debugLevel (see Section 3.8.7.3). The latter controls print statements that monitor model activity (such as opening files, etc.), whereas the former produces a more coding-oriented set of print statements (e.g., entering and exiting subroutines, etc.).
Parameter  Group  Default  Description 

useCubedSphereExchange  EEPARMS  FALSE  use cubed-sphere topology domain on/off flag
nTx  EEPARMS  1  number of threads in the \(x\) direction 
nTy  EEPARMS  1  number of threads in the \(y\) direction 
useCoupler  EEPARMS  FALSE  communicate with other model components through a coupler on/off flag 
useSETRLSTK  EEPARMS  FALSE  call C routine to set environment stack size to "unlimited"
useSIGREG  EEPARMS  FALSE  enable signal handler to receive signal to terminate run cleanly on/off flag 
debugMode  EEPARMS  FALSE  print additional debugging messages; also "flush" STDOUT file unit after each print
printMapIncludesZeros  EEPARMS  FALSE  text map plots of fields should ignore exact zero values on/off flag 
maxLengthPrt1D  EEPARMS  65  maximum number of 1D array elements to print to standard output 
MITgcm Tutorial Example Experiments
The full MITgcm distribution comes with a set of pre-configured numerical experiments. Some of these example experiments are tests of individual parts of the model code, but many are fully fledged numerical simulations. Full tutorials exist for a few of the examples and are documented in Section 4.1 through Section 4.2. The other examples follow the same general structure as the tutorial examples; however, they only include brief instructions in a README text file. The examples are located in subdirectories under the directory verification. Each example is briefly described below.
Barotropic Gyre MITgcm Example
(in directory verification/tutorial_barotropic_gyre/)
This example experiment demonstrates using the MITgcm to simulate a barotropic, wind-forced, ocean gyre circulation. The experiment is a numerical rendition of the gyre circulation problem described analytically by Stommel in 1948 [Sto48] and Munk in 1950 [Mun50], and numerically in Bryan (1963) [Bry63]. Note this tutorial assumes a basic familiarity with ocean dynamics and geophysical fluid dynamics; readers new to the field may wish to consult one of the standard texts on these subjects, such as Vallis (2017) [Val17] or Cushman-Roisin and Beckers (2011) [CRB11].
In this experiment the model is configured to represent a rectangular enclosed box of fluid, \(1200 \times 1200\) km in lateral extent. The fluid depth is \(D =\) 5 km. The fluid is forced by a zonal wind stress, \(\tau_x\), that varies sinusoidally in the north-south direction and is constant in time. Topologically the grid is Cartesian and the Coriolis parameter \(f\) is defined according to a mid-latitude beta-plane equation
\[f = f_{0} + \beta y\]
where \(y\) is the distance along the "north-south" axis of the simulated domain. For this experiment \(f_{0}\) is set to \(10^{-4}\text{ s}^{-1}\) and \(\beta = 10^{-11}\text{ s}^{-1}\text{m}^{-1}\).
The sinusoidal wind-stress variations are defined according to
\[\tau_x(y) = -\tau_0 \cos\left(\frac{\pi y}{L_y}\right)\]
where \(L_{y}\) is the lateral domain extent and \(\tau_0\) is set to \(0.1\text{ N m}^{-2}\).
Figure 4.1 summarizes the configuration simulated.
Equations Solved
The model is configured in hydrostatic form (the MITgcm default). The implicit free surface form of the pressure equation described in Marshall et al. (1997) [MHPA97] is employed. A horizontal Laplacian operator \(\nabla_{h}^2\) provides viscous dissipation. The wind-stress momentum input is added to the momentum equation for the "zonal flow", \(u\). This effectively yields an active set of equations for this configuration as follows:
\[\frac{Du}{Dt} - fv + g\frac{\partial \eta}{\partial x} - A_{h}\nabla_{h}^2 u = \frac{\tau_{x}}{\rho_{c} D}\]
\[\frac{Dv}{Dt} + fu + g\frac{\partial \eta}{\partial y} - A_{h}\nabla_{h}^2 v = 0\]
\[\frac{\partial \eta}{\partial t} + \frac{\partial (D u)}{\partial x} + \frac{\partial (D v)}{\partial y} = 0\]
where \(u\) and \(v\) are the \(x\) and \(y\) components of the flow vector \(\vec{u}\), \(\eta\) is the free surface height, \(A_{h}\) the horizontal Laplacian viscosity, \(\rho_{c}\) is the fluid density, and \(g\) the acceleration due to gravity.
Discrete Numerical Configuration
The domain is discretized with a uniform grid spacing in the horizontal set to \(\Delta x=\Delta y=20\) km, so that there are sixty grid cells in the \(x\) and \(y\) directions. Vertically the model is configured using a single layer in depth, \(\Delta z\), of 5000 m.
Numerical Stability Criteria
Let's start with our choice for the model's time step. To minimize the amount of required computational resources, typically one opts for as large a time step as possible while keeping the model solution stable. The advective Courant-Friedrichs-Lewy (CFL) condition (see Adcroft 1995 [Adc95]) for an extreme maximum horizontal flow speed is:
\[S_{a} = 2 \left( \frac{|u_{max}| \Delta t}{\Delta x} \right) < 0.5 \text{ for stability}\]
The 2 factor on the left is because we have a 2D problem (in contrast with the more familiar 1D canonical stability analysis); the right hand side is 0.5 due to our default use of Adams-Bashforth-2 (see Section 2.5) rather than the more familiar value of 1 that one would obtain using a forward Euler scheme. In our configuration, let's assume our solution will achieve a maximum \(|u| = 1\) m s^{-1} (in reality, current speeds in our solution will be much smaller). To keep \(\Delta t\) safely below the stability threshold, let's choose \(\Delta t\) = 1200 s (= 20 minutes), which results in \(S_{a}\) = 0.12.
The numerical stability criterion for inertial oscillations using Adams-Bashforth-2 (Adcroft 1995 [Adc95])
\[S_{i} = f \Delta t < 0.5 \text{ for stability}\]
evaluates to 0.12 for our choice of \(\Delta t\), which is below the stability threshold.
There are two general rules for choosing a horizontal Laplacian eddy viscosity \(A_{h}\):
 the resulting Munk layer width should be at least as large as (preferably, larger than) the lateral grid spacing;
 the viscosity should be sufficiently small that the model is stable for horizontal friction, given the time step.
Let's use the first rule to make our choice for \(A_{h}\), and check this value using the second rule. The theoretical Munk boundary layer width (as defined by the solution zero-crossing, see Pedlosky 1987 [Ped87]) is given by:
\[M_{w} = \frac{2\pi}{\sqrt{3}} \left( \frac{A_{h}}{\beta} \right)^{1/3}\]
For our configuration we will choose to resolve a boundary layer of \(\approx\) 100 km, or roughly across five grid cells, so we set \(A_{h} = 400\) m^{2} s^{-1} (more precisely, this sets the full width at \(M_{w}\) = 124 km). This choice ensures that the frictional boundary layer is well resolved.
Given our choice of \(\Delta t\), the stability parameter for the horizontal Laplacian friction (Adcroft 1995 [Adc95])
\[S_{l} = 2 \left( \frac{4 A_{h} \Delta t}{\Delta x^{2}} \right) < 0.6 \text{ for stability}\]
evaluates to 0.0096, which is well below the stability threshold. As in (4.4), the above criterion is for a 2D problem using Adams-Bashforth-2 time stepping, with the 0.6 value on the right replacing the more familiar 1 that is obtained using a forward Euler scheme.
See Section 2.5 for additional details on AdamsBashforth timestepping and numerical stability criteria.
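The stability numbers quoted in this section can be checked in one place. This sketch writes the criteria so as to reproduce the quoted values (0.12, 0.12, 0.0096, and a 124 km Munk width) from the configuration's parameters:

```python
import math

# Tutorial configuration values
dx, dt = 20.0e3, 1200.0      # grid spacing (m), time step (s)
u_max, f0 = 1.0, 1.0e-4      # assumed max speed (m/s), Coriolis (1/s)
Ah, beta = 400.0, 1.0e-11    # viscosity (m^2/s), beta (1/(m s))

S_adv = 2.0 * u_max * dt / dx          # advective CFL, threshold 0.5
S_inert = f0 * dt                      # inertial oscillations, threshold 0.5
S_visc = 2.0 * 4.0 * Ah * dt / dx**2   # Laplacian friction, threshold 0.6
M_w = 2.0 * math.pi / math.sqrt(3.0) * (Ah / beta) ** (1.0 / 3.0)  # Munk width (m)
```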
Code Configuration
The model configuration for this experiment resides under the directory verification/tutorial_barotropic_gyre/.
The experiment files
 verification/tutorial_barotropic_gyre/code/SIZE.h
 verification/tutorial_barotropic_gyre/input/data
 verification/tutorial_barotropic_gyre/input/data.pkg
 verification/tutorial_barotropic_gyre/input/eedata
 verification/tutorial_barotropic_gyre/input/bathy.bin
 verification/tutorial_barotropic_gyre/input/windx_cosy.bin
contain the code customizations and parameter settings for this experiment. Below we describe these customizations in detail.
Note: MITgcm's defaults are configured to simulate an ocean rather than an atmosphere, with vertical \(z\)-coordinates. To model the ocean using pressure coordinates in MITgcm, additional parameter changes are required; see tutorial ocean_in_p. To switch parameters to model an atmosphere, see tutorial Held_Suarez.
File code/SIZE.h
CBOP
C !ROUTINE: SIZE.h
C !INTERFACE:
C include SIZE.h
C !DESCRIPTION: \bv
C *==========================================================*
C  SIZE.h Declare size of underlying computational grid.
C *==========================================================*
C  The design here supports a threedimensional model grid
C  with indices I,J and K. The threedimensional domain
C  is comprised of nPx*nSx blocks (or tiles) of size sNx
C  along the first (leftmost index) axis, nPy*nSy blocks
C  of size sNy along the second axis and one block of size
C  Nr along the vertical (third) axis.
C  Blocks/tiles have overlap regions of size OLx and OLy
C  along the dimensions that are subdivided.
C *==========================================================*
C \ev
C
C Voodoo numbers controlling data layout:
C sNx :: Number of X points in tile.
C sNy :: Number of Y points in tile.
C OLx :: Tile overlap extent in X.
C OLy :: Tile overlap extent in Y.
C nSx :: Number of tiles per process in X.
C nSy :: Number of tiles per process in Y.
C nPx :: Number of processes to use in X.
C nPy :: Number of processes to use in Y.
C Nx :: Number of points in X for the full domain.
C Ny :: Number of points in Y for the full domain.
C Nr :: Number of points in vertical direction.
CEOP
INTEGER sNx
INTEGER sNy
INTEGER OLx
INTEGER OLy
INTEGER nSx
INTEGER nSy
INTEGER nPx
INTEGER nPy
INTEGER Nx
INTEGER Ny
INTEGER Nr
PARAMETER (
& sNx = 62,
& sNy = 62,
& OLx = 2,
& OLy = 2,
& nSx = 1,
& nSy = 1,
& nPx = 1,
& nPy = 1,
& Nx = sNx*nSx*nPx,
& Ny = sNy*nSy*nPy,
& Nr = 1)
C MAX_OLX :: Set to the maximum overlap region size of any array
C MAX_OLY that will be exchanged. Controls the sizing of exch
C routine buffers.
INTEGER MAX_OLX
INTEGER MAX_OLY
PARAMETER ( MAX_OLX = OLx,
& MAX_OLY = OLy )

Here we show a modified model/inc source code file, customizing MITgcm's array sizes to our model domain. This file must be uniquely configured for any model setup; using the MITgcm default model/inc/SIZE.h will in fact cause a compilation error. Note that MITgcm's storage arrays are allocated as static variables (hence their size must be declared in the source code), in contrast to some model codes that declare array sizes dynamically, i.e., through run-time (namelist) parameter settings.
For this first tutorial, our setup and run environment is the simplest possible: we run on a single process (i.e., NOT MPI and NOT multi-threaded) using a single model "tile". For a more complete explanation of the parameter choices to use multiple tiles, see the tutorial Baroclinic Gyre.
These lines set parameters sNx and sNy, the number of grid points in the \(x\) and \(y\) directions, respectively.
& sNx = 62,
& sNy = 62,
These lines set parameters OLx and OLy in the \(x\) and \(y\) directions, respectively. These values are the overlap extent of a model tile, the purpose of which will be explained in later tutorials. Here, we simply specify the required minimum value (2) in both \(x\) and \(y\).
47   & OLx = 2,
48   & OLy = 2,
These lines set parameters nSx, nSy, nPx, and nPy, the number of model tiles and the number of processes in the \(x\) and \(y\) directions, respectively. As discussed above, in this tutorial we configure a single model tile on a single process, so these parameters are all set to the value one.
49   & nSx = 1,
50   & nSy = 1,
51   & nPx = 1,
52   & nPy = 1,
This line sets parameter Nr, the number of points in the vertical dimension. Here we use just a single vertical level.
55   & Nr = 1)
Note these lines summarize the horizontal size of the model domain (NOT to be edited).
53   & Nx = sNx*nSx*nPx,
54   & Ny = sNy*nSy*nPy,
Further information and examples about how to configure model/inc/SIZE.h are given in Section 6.3.1.
File input/dataĀ¶
# Model parameters
# Continuous equation parameters
&PARM01
viscAh=4.E2,
f0=1.E-4,
beta=1.E-11,
rhoNil=1000.,
gBaro=9.81,
rigidLid=.FALSE.,
implicitFreeSurface=.TRUE.,
# momAdvection=.FALSE.,
tempStepping=.FALSE.,
saltStepping=.FALSE.,
&
# Elliptic solver parameters
&PARM02
cg2dTargetResidual=1.E-7,
cg2dMaxIters=1000,
&
# Time stepping parameters
&PARM03
nIter0=0,
nTimeSteps=10,
deltaT=1200.0,
pChkptFreq=31104000.0,
chkptFreq=15552000.0,
dumpFreq=15552000.0,
monitorFreq=1200.,
monitorSelect=2,
#for longer run (3.0 yr):
# nTimeSteps=77760,
# monitorFreq=864000.,
&
# Gridding parameters
&PARM04
usingCartesianGrid=.TRUE.,
delX=62*20.E3,
delY=62*20.E3,
xgOrigin=-20.E3,
ygOrigin=-20.E3,
delR=5000.,
&
# Input datasets
&PARM05
bathyFile='bathy.bin'
zonalWindFile='windx_cosy.bin',
#zonalWindFile='windx_siny.bin',
meridWindFile=,
&

This file, reproduced completely above, specifies the main parameters for the experiment. The parameters that are significant for this configuration (shown with line numbers to left) are as follows.
PARM01  Continuous equation parametersĀ¶
This line sets parameter viscAh, the horizontal Laplacian viscosity, to \(400\) m^{2} s^{-1}.
4   viscAh=4.E2,
These lines set \(f_0\) and \(\beta\) (the Coriolis parameter f0 and the gradient of the Coriolis parameter beta) for our beta-plane to \(1 \times 10^{-4}\) s^{-1} and \(1 \times 10^{-11}\) m^{-1} s^{-1}, respectively.
5   f0=1.E-4,
6   beta=1.E-11,
This line sets parameter rhoNil, a reference density which will also be used as \(\rho_c\) (parameter rhoConst) in (4.1), to 1000 kg/m^{3}.
7   rhoNil=1000.,
This line sets parameter gBaro, the acceleration due to gravity \(g\) (in the free surface terms in (4.1) and (4.2)), to 9.81 m/s^{2}. This is the MITgcm default value, i.e., the value used if this line were not included in data. One might alter this parameter for a reduced gravity model, or to simulate a different planet, for example.

8   gBaro=9.81,
These lines set parameters rigidLid and implicitFreeSurface in order to suppress the rigid lid formulation of the surface pressure inverter and activate the implicit free surface formulation.
9    rigidLid=.FALSE.,
10   implicitFreeSurface=.TRUE.,
This line sets parameter momAdvection to suppress the (nonlinear) momentum advection terms in the momentum equations. However, note the # in column 1: this "comments out" the line, so using the above data file verbatim will in fact include the momentum advection terms (i.e., the MITgcm default for this parameter is TRUE). We'll explore the linearized solution (i.e., with the leading # removed) in Section 4.1.5. Note that the ability to comment out a line in a namelist file is not part of standard Fortran, but this feature is implemented for all MITgcm namelist files.

11   # momAdvection=.FALSE.,
These lines set parameters tempStepping and saltStepping to suppress MITgcm's forward time integration of temperature and salt in the tracer equations, as these prognostic variables are not relevant for the model solution in this configuration. By default, MITgcm solves equations governing these two (active) tracers; later tutorials will demonstrate how additional passive tracers can be included in the solution. The advantage of NOT solving the temperature and salinity equations is to eliminate many unnecessary computations. In most typical configurations, however, one will want the model to compute a solution for \(T\) and \(S\); doing so typically comprises the majority of MITgcm's processing time.
12   tempStepping=.FALSE.,
13   saltStepping=.FALSE.,
PARM02  Elliptic solver parametersĀ¶
The first line sets the tolerance (parameter cg2dTargetResidual) that the 2D conjugate gradient solver, the iterative method used in the pressure method algorithm, will use to test for convergence. The second line sets parameter cg2dMaxIters, the maximum number of iterations. The solver will iterate until the residual falls below this target value (here, set to \(1 \times 10^{-7}\)) or until this maximum number of solver iterations is reached (here, set to a maximum of 1000 iterations). Typically, the solver will converge in far fewer than 1000 iterations, but it does not hurt to allow for a large number. The chosen value for the target residual happens to be the MITgcm default, and will serve well in most model configurations.
18   cg2dTargetResidual=1.E-7,
19   cg2dMaxIters=1000,
PARM03  Time stepping parametersĀ¶
This line sets the starting (integer) iteration number for the run. Here we set the value to zero, which starts the model from a new, initialized state. If nIter0 is nonzero, the model would require appropriate pickup files (i.e., restart files) in order to continue integration of an existing run.
24   nIter0=0,
This line sets parameter nTimeSteps, the (integer) number of timesteps the model will integrate forward. Below, we have set this to integrate for just 10 time steps, for MITgcm automated testing purposes (Section 5.5). To integrate the solution to near steady state, uncomment the line a few lines further down where we set the value to 77760 time steps. When you make this change, be sure to also comment out the line that sets monitorFreq (see below).
25   nTimeSteps=10,
This line sets parameter deltaT, the timestep used in stepping forward the model, to 1200 seconds. In combination with the larger value of nTimeSteps mentioned above, we have effectively set the model to integrate forward for \(77760 \times 1200 \text{ s} = 3.0\) years (based on 360day years), long enough for the solution to approach equilibrium.
26   deltaT=1200.0,
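The run-length arithmetic quoted above is easy to verify (plain Python, not MITgcm code):

```python
deltaT = 1200.0     # timestep (s)
nTimeSteps = 77760  # full-length run

total_seconds = nTimeSteps * deltaT
days = total_seconds / 86400.0  # 86400 s per day
years = days / 360.0            # the calendar here uses 360-day years

print(total_seconds, days, years)  # 93312000.0 1080.0 3.0
```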
These lines control the frequency at which restart (a.k.a. pickup) files are written by MITgcm. Here the value of pChkptFreq is set to 31,104,000 seconds (= 1.0 year) of model time; this controls the frequency of "permanent" checkpoint pickup files. With permanent files, the model's iteration number is part of the file name (as the filename "suffix"; see Section 4.1.4.2) in order to save it as a labelled, permanent pickup state. The value of chkptFreq is set to 15,552,000 seconds (= 0.5 years); the pickup files written at this frequency will NOT include the iteration number in the filename, instead toggling between ckptA and ckptB in the filename, and thus these files will be overwritten with new data every 2 \(\times\) 15,552,000 seconds. Temporary checkpoint files can be written more frequently without requiring additional disk space, for example to peruse (or rerun) the model state prior to an instability, or to restart following a computer crash, etc. Either type of checkpoint file can be used to restart the model.

27   pChkptFreq=31104000.0,
28   chkptFreq=15552000.0,
This line sets parameter dumpFreq, the frequency of writing model state snapshot diagnostics (of relevance in this setup: variables \(u\), \(v\), and \(\eta\)). Here, we opt for a snapshot of model state every 15,552,000 seconds (= 0.5 years), i.e., after every 12960 time steps of integration.
29   dumpFreq=15552000.0,
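The step counts implied by these output frequencies follow by dividing each interval by the timestep; a minimal check:

```python
deltaT = 1200.0          # timestep (s)
pChkptFreq = 31104000.0  # permanent checkpoint interval (s)
chkptFreq = 15552000.0   # rolling checkpoint interval (s)
dumpFreq = 15552000.0    # snapshot interval (s)

print(pChkptFreq / deltaT)  # 25920.0 time steps (1.0 year)
print(chkptFreq / deltaT)   # 12960.0 time steps (0.5 years)
print(dumpFreq / deltaT)    # 12960.0 time steps
```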
These lines are set to dump monitor output (see Section 9.4) every 1200 seconds (i.e., every time step) to standard output. While this monitor frequency is needed for MITgcm automated testing, it is too much output for our tutorial run. Comment out this line and uncomment the line where monitorFreq is set to 864,000 seconds, i.e., output every 10 days. Parameter monitorSelect is set to 2 here to reduce the output of non-applicable quantities for this simple example.
30   monitorFreq=1200.,
31   monitorSelect=2,
PARM04  Gridding parametersĀ¶
This line sets parameter usingCartesianGrid, which specifies that the simulation will use a Cartesian coordinate system.
39   usingCartesianGrid=.TRUE.,
These lines set the horizontal grid spacing of the model grid, as vectors delX and delY (i.e., \(\Delta x\) and \(\Delta y\) respectively). This syntax indicates that we specify 62 values in both the \(x\) and \(y\) directions, which matches the domain size as specified in SIZE.h. Grid spacing is set to \(20 \times 10^{3}\) m (=20 km).
40   delX=62*20.E3,
41   delY=62*20.E3,
The Cartesian grid default origin is (0,0), so here we set the origin with parameters xgOrigin and ygOrigin to (-20000,-20000), accounting for the bordering solid wall. The centers of the grid boxes will thus be at -10 km, 10 km, 30 km, 50 km, ..., in both the \(x\) and \(y\) directions.
42   xgOrigin=-20.E3,
43   ygOrigin=-20.E3,
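The cell-center locations can be checked with a short Python sketch (grid spacing 20 km and origin xgOrigin = -20 km, from the tutorial setup):

```python
dx = 20.e3         # delX: 20 km grid spacing
xgOrigin = -20.e3  # west edge of the first (land) cell
n = 62             # number of cells across the domain

# Cell centers sit half a grid spacing in from each cell's west edge.
xc = [xgOrigin + (i + 0.5) * dx for i in range(n)]

print([x / 1000 for x in xc[:4]])  # [-10.0, 10.0, 30.0, 50.0] (km)
```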
This line sets parameter delR, the vertical grid spacing in the \(z\)coordinate (i.e., \(\Delta z\)), to 5000 m.
44   delR=5000.,
PARM05  Input datasetsĀ¶
This line sets parameter bathyFile, the name of the bathymetry file. See Section 4.1.3.5 for information about the file format.
49   bathyFile='bathy.bin'
These lines specify the names of the files from which the surface wind stress is read. There is a separate file for the \(x\)direction (zonalWindFile) and the \(y\)direction (meridWindFile). Note, here we have left the latter parameter blank, as there is no meridional wind stress forcing in our example.
50   zonalWindFile='windx_cosy.bin',
51   #zonalWindFile='windx_siny.bin',
52   meridWindFile=,
File input/data.pkgĀ¶
# Packages
&PACKAGES
&

This file does not set any namelist parameters, yet it is necessary for the model to run; only standard packages (i.e., those compiled into MITgcm by default) are required for this setup, so no other customization is necessary. We will demonstrate how to include additional packages in other tutorial experiments.
File input/eedataĀ¶
# Example "eedata" file
# Lines beginning "#" are comments
# nTx :: No. threads per process in X
# nTy :: No. threads per process in Y
# debugMode :: print debug msg (sequence of S/R calls)
&EEPARMS
nTx=1,
nTy=1,
&
# Note: Some systems use & as the namelist terminator (as shown here).
# Other systems use a / character.

This file uses standard default values (i.e., MITgcm default is singlethreaded) and does not contain customizations for this experiment.
File input/bathy.binĀ¶
This file is a 2D(\(x,y\)) map of bottom bathymetry, specified as the \(z\)-coordinate of the solid bottom boundary. Here, the value is set to -5000 m everywhere except along the N, S, E, and W edges of the array, where the value is set to 0 (i.e., "land"). The domain in MITgcm is assumed doubly periodic (i.e., periodic in both the \(x\) and \(y\) directions), so boundary walls are necessary to set up our enclosed box domain. The points are ordered from low to high coordinates along both axes (varying fastest in \(x\)), as a raw binary stream of data enumerated in the same way as standard MITgcm 2D horizontal arrays. By default, this file is assumed to contain 32-bit (single precision) binary numbers. The matlab program verification/tutorial_barotropic_gyre/input/gendata.m was used to generate this bathymetry file.
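The bathymetry file was generated with the matlab script noted above; an equivalent sketch in Python (the big-endian 32-bit write matches MITgcm's default input convention; the output filename here is illustrative) would be:

```python
import numpy as np

nx, ny = 62, 62
bathy = np.full((ny, nx), -5000.0)  # flat bottom at z = -5000 m

# Zero depth ("land") along all four edges to close the box domain.
bathy[0, :] = 0.0    # southern wall
bathy[-1, :] = 0.0   # northern wall
bathy[:, 0] = 0.0    # western wall
bathy[:, -1] = 0.0   # eastern wall

# Raw binary stream, x varying fastest, 32-bit big-endian floats.
bathy.astype('>f4').tofile('bathy.bin')
```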
File input/windx_cosy.binĀ¶
Similar to file input/bathy.bin, this file is a 2D(\(x,y\)) map of \(\tau_{x}\) wind stress values, formatted in the same manner. The units are N m^{-2}. Although \(\tau_{x}\) is only a function of \(y\) in this experiment, this file must still define a complete 2D map in order to be compatible with the standard code for loading forcing fields in MITgcm. The matlab program verification/tutorial_barotropic_gyre/input/gendata.m was used to generate this wind stress file. To run the barotropic jet variation of this tutorial example (see Figure 4.4), you will in fact need to run this matlab program to generate the file input/windx_siny.bin.
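For illustration, here is a Python sketch of how such a forcing file might be built. The stress amplitude (0.1 N m^{-2}) and the exact cosine profile used here are assumptions for illustration only; consult gendata.m for the authoritative definition:

```python
import numpy as np

nx, ny = 62, 62
dy = 20.e3   # grid spacing (m)
Ly = ny * dy # meridional domain extent
tau0 = 0.1   # assumed stress amplitude (N m^-2)

# tau_x varies with y only (a cosine profile, assumed for illustration),
# but a full 2-D map is still required by the forcing-field reader.
y = (np.arange(ny) + 0.5) * dy
taux = -tau0 * np.cos(np.pi * y / Ly)
taux2d = np.tile(taux[:, np.newaxis], (1, nx))

taux2d.astype('>f4').tofile('windx_cosy.bin')
```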
Building and running the modelĀ¶
To configure and compile the code (following the procedure described in Section 3.5.1):
cd build
../../../tools/genmake2 -mods ../code -of «my_platform_optionFile»
make depend
make
cd ..
To run the model (following the procedure in Section 3.6):
cd run
ln -s ../input/* .
ln -s ../build/mitgcmuv .
./mitgcmuv > output.txt
Standard outputĀ¶
Your run's standard output file should be similar to verification/tutorial_barotropic_gyre/results/output.txt. The standard output is essentially a log file of the model run. The following information is included (in rough order):

- startup information including the MITgcm checkpoint release number and other execution environment information, and a list of activated packages (including all default packages, as well as optional packages).
- the text from all data.* and other critical files (in our example here, eedata, SIZE.h, data, and data.pkg).
- information about the grid and bathymetry, including dumps of all grid variables (only if Cartesian or spherical polar coordinates are used, as is the case here).
- all runtime parameter choices used by the model, including all model defaults as well as user-specified parameters.
- monitor statistics at regular intervals (as specified by parameter monitorFreq in data; see Section 9.4).
- output from the 2D conjugate gradient solver. More specifically, statistics from the right-hand side of the elliptic equation (for our linear free-surface, see eq. (2.15)) are dumped for every model time step. If the model solution blows up, these statistics will increase to infinity, so one can see exactly when the problem occurred (i.e., to aid in debugging). Additional solver variables, such as the number of iterations and the residual, are included with the monitor statistics.
- a summary of end-of-run execution information, including user, wall and system time elapsed during execution, and tile communication statistics. These statistics are provided for the overall run, and also broken down by time spent in various subroutines.
Different setups using nonstandard packages and/or different parameter choices will include additional or different output as part of the standard output. It is also possible to select more or less output by changing the parameter debugLevel in data; see (missing doc for pkg debug).
STDERR.0000 - if errors (or warnings) occurred during the run, helpful warning and/or error message(s) would appear in this file.
Other output filesĀ¶
In addition to raw binary data files with the .data extension, each binary file has a corresponding .meta file. These plain-text files include information about the array size, precision (i.e., float32 or float64), and, if relevant, time information and/or a list of the fields included in the binary file. The .meta files are used by MITgcm utils when binary data are read.
The following output files are generated:
Grid Data: see Section 2.11 for definitions and a description of the Arakawa C-grid staggering of model variables.
- XC, YC - grid cell center point locations
- XG, YG - locations of grid cell vertices
- RC, RF - vertical cell center and cell face positions
- DXC, DYC - grid cell center point separations (Figure 2.6 b)
- DXG, DYG - separation of grid cell vertices (Figure 2.6 a)
- DRC, DRF - separation of vertical cell centers and faces, respectively
- RAC, RAS, RAW, RAZ - areas of the grid "tracer cells", "southern cells", "western cells" and "vorticity cells", respectively (Figure 2.6)
- hFacC, hFacS, hFacW - fractions of the grid cell in the vertical which are "open", as defined at the center and on the southern and western boundaries, respectively. These variables effectively contain the configuration's bathymetric (or topographic) information.
- Depth - bathymetry depths
All these files contain 2D(\(x,y\)) data except RC, RF, DRC, DRF, which are 1D(\(z\)), and hFacC, hFacS, hFacW, which contain 3D(\(x,y,z\)) data. Units for the grid files depend on one's choice of model grid; here, they are all given in meters (or \(\text{m}^2\) for areas).
All the 2D grid data files contain .001.001 in their filename, e.g., DXC.001.001.data; this is the tile number in .XXX.YYY format. Here, we have just a single tile in both \(x\) and \(y\), so both tile numbers are 001. Using multiple tiles, the default is that the local tile grid information would be output separately for each tile (as an example, see the baroclinic gyre tutorial, which is set up using multiple tiles), producing multiple files for each 2D grid variable.
State Variable Snapshot Data:

Eta.0000000000.001.001.data, Eta.0000000000.001.001.meta - this is a binary data snapshot of model dynamic variable etaN (the free-surface height) and its meta file, respectively. Note the tile number is included in the filename, as is the iteration number 0000000000, which is simply the time step (the iteration number here is referred to as the "suffix" in MITgcm parlance; there are options to change this suffix to something other than the iteration number). In other words, this is a dump of the free-surface height from the initialized state, iteration 0; if you load up this data file, you will see it is all zeroes. More interesting is the free-surface height after some time steps have occurred. Snapshots are written according to our parameter choice dumpFreq, here set to 15,552,000 seconds, which is every 12960 time steps. We will examine the model solutions in Section 4.1.5. The free-surface height is a 2D(\(x,y\)) field.

Snapshot files exist for other prognostic model variables, in particular filenames starting with U (uVel), V (vVel), T (theta), and S (salt); given our setup, these latter two fields remain uniform in space and time, and are thus not very interesting until we explore a baroclinic gyre setup in tutorial_baroclinic_gyre. These are all 3D(\(x,y,z\)) fields. The format for the file names is similar to that of the free-surface height files. Also dumped are snapshots of the diagnosed vertical velocity W (wVel) (note that in non-hydrostatic simulations, W is a fully prognostic model variable).
Checkpoint Files:

The following pickup files are generated:

- pickup.0000025920.001.001.data, pickup.0000025920.001.001.meta, etc. - written at the frequency set by pChkptFreq
- pickup.ckptA.001.001.data, pickup.ckptA.001.001.meta, pickup.ckptB.001.001.data, pickup.ckptB.001.001.meta - written at the frequency set by chkptFreq
Other Model Output Data: For completeness, here we list the remaining default output files produced by MITgcm (despite being not particularly informative for this simple setup).

RhoRef.data, RhoRef.meta - this is a 1D(\(z\)) array of reference density. Here we have a single level and have not specified an equation of state relation, thus the file simply contains our prescribed value rhoNil.

PHrefC.data, PHrefC.meta, PHrefF.data, PHrefF.meta - these are 1D(\(z\)) arrays containing the reference hydrostatic "pressure potential" \(\phi = p/\rho_c\) (see Section 1.3.6), computed at the (vertical grid) cell centers and cell faces, respectively. In our setup here, PHrefC is simply \(\rho_c g (D/2) / \rho_c = gD/2\), i.e., computed at the midpoint of our single vertical cell.

PH, PHL files - these are a 3D(\(x,y,z\)) field of hydrostatic \(\phi'\) (including the free-surface contribution) at cell centers, and a 2D(\(x,y\)) field of ocean bottom \(\phi'\), respectively, as a function of time. To obtain full \(\phi(t)\) values, PHrefC should be added to PH, and PHrefF (at \(z\) = bottom) should be added to PHL.
Model SolutionĀ¶
After running the model for 77,760 time steps (3.0 years), the solution is near equilibrium. Given an approximate timescale of one month for barotropic Rossby waves to cross our model domain, one might expect the solution to require several years to achieve an equilibrium state. The model solution of freesurface height \(\eta\) (proportional to streamfunction) at \(t=\) 3.0 years is shown in Figure 4.2. For further details on this solution, particularly examining the effect of the nonlinear terms with increasing Reynolds number, the reader is referred to Pedlosky (1987) [Ped87] section 5.11.
Using matlab for example, visualizing output using the utils/matlab/rdmds.m utility to load the
binary data in Eta.0000077760.001.001.data
is as simple as:
addpath ../../../utils/matlab/
XC=rdmds('XC'); YC=rdmds('YC');
Eta=rdmds('Eta',77760);
contourf(XC/1000,YC/1000,Eta,[-.04:.01:.04]); colorbar;
colormap((flipud(hot))); set(gca,'XLim',[0 1200]); set(gca,'YLim',[0 1200])
or using python (you will need to copy utils/python/MITgcmutils/MITgcmutils/mds.py to your run directory before proceeding):
import mds
import numpy as np
import matplotlib.pyplot as plt
XC = mds.rdmds('XC'); YC = mds.rdmds('YC')
Eta = mds.rdmds('Eta', 77760)
plt.contourf(XC, YC, Eta, np.linspace(-0.02, 0.05, 8), cmap='hot_r')
plt.colorbar(); plt.show()
Let's simplify the example by considering the linear problem, in which we neglect the advection of momentum terms. In other words, we replace \(\frac{Du}{Dt}\) and \(\frac{Dv}{Dt}\) with \(\frac{\partial u}{\partial t}\) and \(\frac{\partial v}{\partial t}\), respectively, in (4.1) and (4.2). To do so, we uncomment (i.e., remove the leading #) the line # momAdvection=.FALSE., in file data and re-run the model. Any existing output files will be overwritten.
For the linearized equations, the Munk layer (equilibrium) analytical solution can be written in terms of the Munk boundary layer width \(\delta_m = \left( \frac{A_h}{\beta} \right)^{\frac{1}{3}}\). Figure 4.3 displays the MITgcm output after switching off momentum advection vs. the analytical solution to the linearized equations. Success!
Finally, let's examine one additional simulation, in which we change the cosine profile of the wind stress forcing to a sine profile. First, run the matlab script verification/tutorial_barotropic_gyre/input/gendata.m to generate the alternate sine profile wind stress, and place a copy in your run directory. Then, in file data, replace the line zonalWindFile='windx_cosy.bin', with zonalWindFile='windx_siny.bin',.
The free surface solution given this forcing is shown in Figure 4.4. Two "half gyres" are separated by a strong jet. We'll look more at the solution to this "barotropic jet" setup in later tutorial examples.
A Rotating Tank in Cylindrical CoordinatesĀ¶
(in directory: verification/rotating_tank/)
This example configuration demonstrates using MITgcm to simulate a laboratory experiment: a differentially heated rotating annulus of water. The simulation is configured at laboratory scale on a \(3^{\circ} \times 1\) cm cylindrical grid with twenty-nine vertical levels of 0.5 cm each. This is a typical laboratory setup for illustrating principles of GFD, as well as for a laboratory data assimilation project.
example illustration from GFD lab here
Equations SolvedĀ¶
Discrete Numerical ConfigurationĀ¶
The domain is discretised with a uniform cylindrical grid with horizontal spacing \(\Delta a = 1\) cm and \(\Delta \phi = 3^{\circ}\), so that there are 120 grid cells in the azimuthal direction and thirty-one grid cells in the radial direction, representing a tank 62 cm in diameter. The bathymetry file sets the depth to zero in the nine lowest radial rows to represent the center of the annulus. Vertically, the model is configured with twenty-nine layers of uniform 0.5 cm thickness.
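A quick arithmetic check of this grid (plain Python):

```python
n_azimuthal, dphi_deg = 120, 3  # azimuthal cells x degrees per cell
n_radial, da_cm = 31, 1         # radial cells x cm per cell

assert n_azimuthal * dphi_deg == 360  # the grid closes the full circle

# Radial cells span the tank on both sides of the center.
diameter_cm = 2 * n_radial * da_cm
print(diameter_cm)  # 62
```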
something about heat flux
Code ConfigurationĀ¶
The model configuration for this experiment resides under the directory verification/rotating_tank/. The experiment files
input/data
input/data.pkg
input/eedata
input/bathyPol.bin
input/thetaPol.bin
code/CPP_EEOPTIONS.h
code/CPP_OPTIONS.h
code/SIZE.h
contain the code customizations and parameter settings for this experiment. Below we describe the customizations to these files associated with this experiment.
File input/dataĀ¶
This file, reproduced completely below, specifies the main parameters for the experiment. The parameters that are significant for this configuration are:

- Lines 9-10,
  viscAh=5.0E-6,
  viscAz=5.0E-6,
These lines set the Laplacian friction coefficient in the horizontal and vertical, respectively. Note that they are several orders of magnitude smaller than in the other examples due to the small scale of this configuration.

- Lines 13-16,
  diffKhT=2.5E-6,
  diffKzT=2.5E-6,
  diffKhS=1.0E-6,
  diffKzS=1.0E-6,

These lines set horizontal and vertical diffusion coefficients for temperature and salinity. As with the friction coefficients, the values are a couple of orders of magnitude smaller than in most configurations.

- Line 17, f0=0.5, this line sets the Coriolis parameter, and represents a tank spinning at about 2.4 rpm.
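The quoted rotation rate follows from \(f_0 = 2\Omega\); a quick check in Python:

```python
import math

f0 = 0.5          # Coriolis parameter (s^-1)
omega = f0 / 2.0  # tank rotation rate (rad s^-1)
rpm = omega * 60.0 / (2.0 * math.pi)
print(round(rpm, 2))  # 2.39, i.e., about 2.4 rpm
```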
- Lines 23 and 24,
  rigidLid=.TRUE.,
  implicitFreeSurface=.FALSE.,
These lines activate the rigid lid formulation of the surface pressure inverter and suppress the implicit free surface form of the pressure inverter.
- Line 40,
  nIter0=0,
This line indicates that the experiment should start from \(t=0\) and implicitly suppresses searching for checkpoint files associated with restarting a numerical integration from a previously saved state. Instead, the file thetaPol.bin will be loaded to initialize the temperature field as indicated below, and other variables will be initialized to their defaults.
- Line 43,
  deltaT=0.1,
This line sets the integration timestep to 0.1 s. This is an unusually small value among the examples due to the small physical scale of the experiment. Using the ensemble Kalman filter to produce input fields can necessitate even shorter timesteps.
- Line 56,
  usingCylindricalGrid=.TRUE.,
This line requests that the simulation be performed in a cylindrical coordinate system.
- Line 57,
  dXspacing=3,
This line sets the azimuthal grid spacing between each \(x\)-coordinate line in the discrete grid. The syntax indicates that the discrete grid should be comprised of 120 grid lines each separated by \(3^{\circ}\).
- Line 58,
  dYspacing=0.01,
This line sets the radial cylindrical grid spacing between each \(a\)-coordinate line in the discrete grid to 1 cm.
- Line 59,
  delZ=29*0.005,
This line sets the vertical grid spacing between each of the 29 \(z\)-coordinate lines in the discrete grid to 0.005 m (5 mm).
- Line 64,
  bathyFile='bathyPol.bin',
This line specifies the name of the file from which the domain "bathymetry" (tank depth) is read. This file is a two-dimensional (\(a,\phi\)) map of depths. This file is assumed to contain 64-bit binary numbers giving the depth of the model at each grid cell, ordered with the \(\phi\) coordinate varying fastest. The points are ordered from low coordinate to high coordinate for both axes. The units and orientation of the depths in this file are the same as used in the MITgcm code. In this experiment, a depth of 0 m indicates an area outside of the tank and a depth of 0.145 m indicates the tank itself.
- Line 65,
  hydrogThetaFile='thetaPol.bin',
This line specifies the name of the file from which the initial values of temperature are read. This file is a threedimensional (\(x,y,z\)) map and is enumerated and formatted in the same manner as the bathymetry file.
- Lines 66 and 67,
  tCylIn = 0.,
  tCylOut = 20.,
These lines specify the temperatures in degrees Celsius of the interior and exterior walls of the tank, typically taken to be ice water on the inside and room temperature on the outside.
Other lines in the file input/data are standard values that are described in the MITgcm Getting Started and MITgcm Parameters notes.
# ====================
#  Model parameters 
# ====================
#
# Continuous equation parameters
&PARM01
tRef=29*20.0,
sRef=29*35.0,
viscAh=5.0E-6,
viscAz=5.0E-6,
no_slip_sides=.FALSE.,
no_slip_bottom=.FALSE.,
diffKhT=2.5E-6,
diffKzT=2.5E-6,
diffKhS=1.0E-6,
diffKzS=1.0E-6,
f0=0.5,
eosType='LINEAR',
sBeta =0.,
gravity=9.81,
rhoConst=1000.0,
rhoNil=1000.0,
#heatCapacity_Cp=3900.0,
rigidLid=.TRUE.,
implicitFreeSurface=.FALSE.,
nonHydrostatic=.TRUE.,
readBinaryPrec=32,
&
# Elliptic solver parameters
&PARM02
cg2dMaxIters=1000,
cg2dTargetResidual=1.E-7,
cg3dMaxIters=10,
cg3dTargetResidual=1.E-9,
&
# Time stepping parameters
&PARM03
nIter0=0,
nTimeSteps=20,
#nTimeSteps=36000000,
deltaT=0.1,
abEps=0.1,
pChkptFreq=2.0,
#chkptFreq=2.0,
dumpFreq=2.0,
monitorSelect=2,
monitorFreq=0.1,
&
# Gridding parameters
&PARM04
usingCylindricalGrid=.TRUE.,
dXspacing=3.,
dYspacing=0.01,
delZ=29*0.005,
ygOrigin=0.07,
&
# Input datasets
&PARM05
hydrogThetaFile='thetaPolR.bin',
bathyFile='bathyPolR.bin',
tCylIn = 0.,
tCylOut = 20.,
&

File input/data.pkgĀ¶
This file uses standard default values and does not contain customizations for this experiment.
File input/eedataĀ¶
This file uses standard default values and does not contain customizations for this experiment.
File input/thetaPol.binĀ¶
The input/thetaPol.bin file specifies a three-dimensional (\(x,y,z\)) map of initial values of \(\theta\) in degrees Celsius. This particular experiment is set to random values around 20 °C to provide initial perturbations.
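A sketch of how such an initial-condition field could be built in Python (the perturbation amplitude of 0.1 °C and the random seed are illustrative assumptions; the actual file was generated separately):

```python
import numpy as np

nz, n_radial, n_azimuthal = 29, 31, 120  # layout described in this section
rng = np.random.default_rng(0)

# Temperature initialized to small random perturbations about 20 degrees C.
theta = 20.0 + 0.1 * rng.standard_normal((nz, n_radial, n_azimuthal))
```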
File input/bathyPol.binĀ¶
The input/bathyPol.bin file specifies a two-dimensional (\(x,y\)) map of depth values. For this experiment the values are either 0 m or delZ m, corresponding respectively to outside or inside of the tank. The file contains a raw binary stream of data that is enumerated in the same way as standard MITgcm two-dimensional, horizontal arrays.
File code/SIZE.hĀ¶
Two lines are customized in this file for the current experiment:

- Line 39, sNx=120, this line sets the lateral domain extent in grid points for the axis aligned with the \(x\)-coordinate.
- Line 40, sNy=31, this line sets the lateral domain extent in grid points for the axis aligned with the \(y\)-coordinate.
CBOP
C !ROUTINE: SIZE.h
C !INTERFACE:
C include SIZE.h
C !DESCRIPTION: \bv
C *==========================================================*
C  SIZE.h Declare size of underlying computational grid.
C *==========================================================*
C  The design here supports a threedimensional model grid
C  with indices I,J and K. The threedimensional domain
C  is comprised of nPx*nSx blocks (or tiles) of size sNx
C  along the first (leftmost index) axis, nPy*nSy blocks
C  of size sNy along the second axis and one block of size
C  Nr along the vertical (third) axis.
C  Blocks/tiles have overlap regions of size OLx and OLy
C  along the dimensions that are subdivided.
C *==========================================================*
C \ev
C
C Voodoo numbers controlling data layout:
C sNx :: Number of X points in tile.
C sNy :: Number of Y points in tile.
C OLx :: Tile overlap extent in X.
C OLy :: Tile overlap extent in Y.
C nSx :: Number of tiles per process in X.
C nSy :: Number of tiles per process in Y.
C nPx :: Number of processes to use in X.
C nPy :: Number of processes to use in Y.
C Nx :: Number of points in X for the full domain.
C Ny :: Number of points in Y for the full domain.
C Nr :: Number of points in vertical direction.
CEOP
INTEGER sNx
INTEGER sNy
INTEGER OLx
INTEGER OLy
INTEGER nSx
INTEGER nSy
INTEGER nPx
INTEGER nPy
INTEGER Nx
INTEGER Ny
INTEGER Nr
PARAMETER (
& sNx = 30,
& sNy = 23,
& OLx = 3,
& OLy = 3,
& nSx = 4,
& nSy = 1,
& nPx = 1,
& nPy = 1,
& Nx = sNx*nSx*nPx,
& Ny = sNy*nSy*nPy,
& Nr = 29)
C MAX_OLX :: Set to the maximum overlap region size of any array
C MAX_OLY that will be exchanged. Controls the sizing of exch
C routine buffers.
INTEGER MAX_OLX
INTEGER MAX_OLY
PARAMETER ( MAX_OLX = OLx,
& MAX_OLY = OLy )

File code/CPP_OPTIONS.h
This file uses standard default values and does not contain customizations for this experiment.
File code/CPP_EEOPTIONS.h
This file uses standard default values and does not contain customizations for this experiment.
Contributing to the MITgcm
The MITgcm is an open source project that relies on the participation of its users, and we welcome contributions. This chapter sets out how you can contribute to the MITgcm.
Bugs and feature requests
If you think you've found a bug, the first thing to do is check that you're using the latest version of the model. If the bug is still present in the latest version, think about how you might fix it and file a ticket in the GitHub issue tracker. Please include as much detail as possible. At a minimum your ticket should include:
 what the bug does;
 the location of the bug: file name and line number(s); and
 any suggestions you have for how it might be fixed.
To request a new feature, or guidance on how to implement it yourself, please open a ticket with the following details:
 a clear explanation of what the feature will do; and
 a summary of the equations to be solved.
Using Git and GitHub
To contribute to the source code of the model you will need to fork the repository and place a pull request on GitHub. The two following sections describe this process in different levels of detail. If you are unfamiliar with git, you may wish to skip the quickstart guide and use the detailed instructions. All contributions to the source code are expected to conform with the Coding style guide. Contributions to the manual should follow the same procedure and conform with Section 5.6.
Quickstart Guide
1. Fork the project on GitHub (using the fork button).
2. Create a local clone (we strongly suggest keeping a separate repository for development work):
% git clone https://github.com/«GITHUB_USERNAME»/MITgcm.git
3. Move into your local clone directory (cd MITgcm) and set up a remote that points to the original:
% git remote add upstream https://github.com/MITgcm/MITgcm.git
4. Make a new branch from upstream/master (name it something appropriate, such as "bugfix" or "newfeature") and make edits on this branch:
% git fetch upstream
% git checkout -b «YOUR_NEWBRANCH_NAME» upstream/master
5. When edits are done, do all git adds and git commits. In the commit message, make a succinct (<70 character) summary of your changes. If you need more space to describe your changes, you can leave a blank line and type a longer description, or break your commit into multiple smaller commits. Reference any outstanding issues addressed using the syntax #«ISSUE_NUMBER».
6. Push the edited branch to the origin remote (i.e. your fork) on GitHub:
% git push -u origin «YOUR_NEWBRANCH_NAME»
7. On GitHub, go to your fork and hit the compare and pull request (PR) button, provide the requested information about your PR (in particular, a non-trivial change to the model requires a suggested addition to doc/tag-index) and wait for the MITgcm head developers to review your proposed changes. In general the MITgcm code reviewers try to respond to a new PR within a week. The reviewers may accept the PR as is, or may request edits and changes. Occasionally the review team will reject changes that are not sufficiently aligned with, and do not fit into, the code structure. The review team is always happy to discuss their decisions, but wants to avoid people investing extensive effort in code that has a fundamental design flaw. The current review team is Jean-Michel Campin, Ed Doddridge, Chris Hill, Oliver Jahn, and Jeff Scott.
If you want to update your code branch before submitting a PR (or at any point during development), follow the recipe below. It will ensure that your GitHub repo stays up to date with the main repository. Note again that your edits should always be made to your development branch, not the master branch.
% git checkout master
% git pull upstream master
% git push origin master
% git checkout Ā«YOUR_NEWBRANCH_NAMEĀ»
% git merge master
If you prefer, you can rebase rather than merge in the final step above; just be careful regarding your rebase syntax!
Detailed guide for those less familiar with Git and GitHub
What is Git? Git is a version control software tool used to help coordinate work among the many MITgcm model contributors. Version control is a management system to track changes in code over time, not only facilitating ongoing changes to code, but also as a means to check differences and/or obtain code from any past time in the project history. Without such a tool, keeping track of bug fixes and new features submitted by the global network of MITgcm contributors would be virtually impossible. If you are familiar with the older form of version control used by the MITgcm (CVS), there are many similarities, but we now take advantage of the modern capabilities offered by Git.
Git itself is open source Linux software (typically included with any new Linux installation; check with your sysadmin if it seems to be missing) that is necessary for tracking changes in files, etc. through your local computer's terminal session. All Git-related terminal commands are of the form git «arguments». Important functions include syncing or updating your code library, adding files to a collection of files with edits, and commands to "finalize" these changes for sending back to the MITgcm maintainers. There are numerous other Git command-line tools to help along the way (see man pages via man git).
The most common git commands are:
 git clone : download (clone) a repository to your local machine
 git status : obtain information about the local git repository
 git diff : highlight differences between the current version of a file and the version from the most recent commit
 git add : stage a file, or changes to a file, so that they are ready for git commit
 git commit : create a commit, i.e., a snapshot of the repository with an associated message that describes the changes
What is GitHub then? GitHub is a website that has three major purposes: 1) Code Viewer: through your browser, you can view all source code and all changes to it over time; 2) "Pull Requests": it facilitates the process whereby code developers submit changes to the primary MITgcm maintainers; 3) the "Cloud": GitHub functions as a cloud server to store different copies of the code. The utility of #1 is fairly obvious. For #2 and #3, without GitHub, one might envision making a big tarball of edited files and emailing the maintainers for inclusion in the main repository. Instead, GitHub effectively does something like this for you in a much more elegant way. Note that unlike the (Linux terminal command) git, GitHub commands are NOT typed in a terminal, but are typically invoked by hitting a button on the web interface, or clicking on a webpage link, etc. To contribute edits to MITgcm, you need to obtain a GitHub account. It's free; do this first if you don't have one already.
Before you start working with git, make sure you identify yourself. From your terminal, type:
% git config --global user.email «your_email@example.edu»
% git config --global user.name «"John Doe"»
(note the required quotes around your name). You should also personalize your profile associated with your GitHub account.
There are many online tutorials on using Git and GitHub (see for example https://akrabat.com/the-beginners-guide-to-contributing-to-a-github-project); here, we just communicate the basics necessary to submit code changes to the MITgcm. Spending some time learning the more advanced features of Git will likely pay off in the long run, and not just for MITgcm contributions, as you are likely to encounter it in all sorts of different projects.
To better understand this process, Figure 5.1 shows a conceptual map of the Git setup. Note three copies of the code: the main MITgcm repository source code "upstream" (i.e., owned by the MITgcm maintainers) in the GitHub cloud, a copy of the repository "origin" owned by you, also residing in the GitHub cloud, and a local copy on your personal computer or compute cluster (where you intend to compile and run). The Git and GitHub commands to create this setup are explained more fully below.
One other aspect of Git that requires some explanation to the uninitiated: your local Linux copy of the code repository can contain different "branches", each branch being a different copy of the code repository (this can occur in all git-aware directories). When you switch branches, basic unix commands such as ls or cat will show a different set of files specific to the current branch. In other words, Git interacts with your local file system so that edits or newly created files only appear in the current branch, i.e., such changes do not appear in any other branches. So if you swore you made some changes to a particular file, and now it appears those changes have vanished, first check which branch you are on (git status is a useful command here); all is probably not lost. NOTE: for a file to be "assigned" to a specific Git branch, Git must first be "made aware" of the file, which occurs after a git add and git commit (see below). Prior to this, the file will appear in the current folder independently, i.e., regardless of which git branch you are on.
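To see branch-switching in action, here is a throwaway demonstration you can run in any scratch directory (it assumes git is installed; every name in it is made up for illustration). A file committed on one branch disappears from the working directory when you switch to another:

```shell
# Throwaway demo: branch-local files. Run anywhere; uses a temp directory.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git config user.email "you@example.edu"
git config user.name "You"
git commit -q --allow-empty -m "initial commit"
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git config
git checkout -q -b my-feature
echo "some edits" > notes.txt
git add notes.txt
git commit -q -m "add notes"
ls                        # notes.txt is listed
git checkout -q "$base"
ls                        # notes.txt is gone (it lives on my-feature only)
```

Switching back with git checkout my-feature makes notes.txt reappear.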
A detailed explanation of steps for contributing MITgcm repository edits:
1. On GitHub, create a local copy of the repository in your GitHub cloud user space: from the main repository (https://github.com/MITgcm/MITgcm) hit the Fork button. As mentioned, your GitHub copy "origin" is necessary to streamline the collaborative development process: you need to create a place for your edits in the GitHub cloud, for developers to peruse.
2. Download the code onto your local computer using the git clone command. Even if you previously downloaded the code through a "git-aware" method (i.e., a git clone command, see Section 3.2.1), we STRONGLY SUGGEST you download a fresh repository, to a separate disk location, for your development work (keeping your research work separate). Type:
% git clone https://github.com/«GITHUB_USERNAME»/MITgcm.git
from your terminal (technically, here you are copying the forked "origin" version from the cloud, not the "upstream" version, but these will be identical at this point).
3. Move into the local clone directory on your computer:
% cd MITgcm
We need to set up a remote that points to the main repository:
% git remote add upstream https://github.com/MITgcm/MITgcm.git
This means that we now have two "remotes" of the project. A remote is just a pointer to a repository not on your computer, i.e., in the GitHub cloud: one pointing to your GitHub user space ("origin"), and this new remote pointing to the original ("upstream"). You can read and write into your "origin" version (since it belongs to you, in the cloud), but not into the "upstream" version. This command just sets up this remote, which is needed in step #4; no actual file manipulation is done at this point. If in doubt, the command git remote -v will list what remotes have been set up.
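As a sketch of what this looks like (runnable in any scratch directory; no network access is needed, and «GITHUB_USERNAME» is a placeholder, exactly as in the clone command above):

```shell
# Demo: register both remotes in a fresh repo and list them.
tmp=$(mktemp -d) && cd "$tmp"
git init -q
git remote add origin   "https://github.com/«GITHUB_USERNAME»/MITgcm.git"
git remote add upstream "https://github.com/MITgcm/MITgcm.git"
git remote -v   # prints a fetch and a push line for each of the two remotes
```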
4. Next make a new branch.
% git fetch upstream
% git checkout -b «YOUR_NEWBRANCH_NAME» upstream/master
You will make edits on this new branch, to keep these new edits completely separate from all files on the master branch. The first command, git fetch upstream, makes sure your new branch starts from the latest code in the main repository; as such, you can redo step 4 at any time to start additional, separate development projects (on a separate, new branch). Note that the second command above not only creates this new branch from the upstream/master branch, it also switches you onto this newly created branch. Naming the branch something descriptive like "newfeature" or "bugfix" (preferably, be even more descriptive) is helpful.
5. Doing stuff! This usually comes in one of three flavors:
 edit the relevant file(s) and/or create new files. Refer to Coding style guide for details on expected documentation standards and code style requirements. Of course, changes should be thoroughly tested to ensure they compile and run successfully!
 type git add «FILENAME1» «FILENAME2» ... to stage the file(s) ready for a commit command (note both existing and brand new files need to be added). "Stage" effectively means to notify Git of the list of files you plan to "commit" for changes into the version tracking system. Note you can change other files and NOT have them sent to model developers; only staged files will be sent. You can repeat this git add command as many times as you like and it will continue to augment the list of files. git diff and git status are useful commands to see what you have done so far.
 use git commit to commit the files. This is the first step in bundling a collection of files together to be sent off to the MITgcm maintainers. When you enter this command, an editor window will pop up. On the top line, type a succinct (<70 character) summary of what these changes accomplished. If your commit is non-trivial and additional explanation is required, leave a blank line and then type a longer description of why the action in this commit was appropriate, etc. It is good practice to link with known issues using the syntax #«ISSUE_NUMBER» in either the summary line or detailed comment. Note that all the changes do not have to be handled in a single commit (i.e., you can git add some files, do a commit, then continue anew by adding different files, do another commit, etc.); the git commit command itself does not (yet) submit anything to the maintainers. If you are fixing a more involved bug or adding a new feature, such that many changes are required, it is preferable to break your contribution into multiple commits (each documented separately) rather than submitting one massive commit; each commit should encompass a single conceptual change to the code base, regardless of how many files it touches. This will allow the MITgcm maintainers to more easily understand your proposed changes and will expedite the review process.
When your changes are tested and documented, continue on to step #6, but read all of steps #6 and #7 before proceeding; you might want to do an optional "bring my development branch up to date" sequence of steps before step #6.
6. Now we āpushā our modified branch with committed changes onto the origin remote in the GitHub cloud. This effectively updates your GitHub cloud copy of the MITgcm repo to reflect the wonderful changes you are contributing.
% git push -u origin «YOUR_NEWBRANCH_NAME»
Some time might elapse during step #5, as you make and test your edits, during which continuing development occurs in the main MITgcm repository. In contrast with some models that opt for static, major releases, the MITgcm is in a constant state of improvement and development. It is very possible that some of your edits occur to files that have also been modified by others. Your local clone however will not know anything about any changes that may have occurred to the MITgcm repo in the cloud, which may cause an issue in step #7 below, when one of three things will occur:
 the files you have modified in your development have NOT been modified in the main repo during this elapsed time, thus git will have no conflicts in trying to update (i.e. merge) your changes into the main repo.
 during the elapsed time, the files you have modified have also been edited/updated in the main repo, but you edited different places in these files than those edits to the main repo, such that git is smart enough to be able to merge these edits without conflict.
 during the elapsed time, the files you have modified have also been edited/updated in the main repo, but git is not smart enough to know how to deal with this conflict (it will notify you of this problem during step #7).
One option is to NOT attempt to bring your development code branch up to date, instead simply proceed with steps #6 and #7 and let the maintainers assess and resolve any conflict(s), should such occur (there is a checkbox "Allow edits by maintainers" that is checked by default when you do step #7). If very little time elapsed during step #5, such conflict is less likely. However, if step #5 takes on the order of months, we do suggest you follow the recipe below to update the code and merge yourself. And/or during the development process, you might have reasons to bring the latest changes in the main repo into your development branch, and thus might opt to follow these same steps.
Development branch code update recipe:
% git checkout master
% git pull upstream master
% git push origin master
% git checkout Ā«YOUR_NEWBRANCH_NAMEĀ»
% git merge master
This first command switches you from your development branch to the master branch. The second command above will synchronize
your local master branch with the main MITgcm repository master branch (i.e. āpullā any new changes that might have occurred
in the upstream repository into your local clone). Note you should not have made any changes to your cloneās master branch;
in other words, prior to the pull, master should be a stagnant copy of the code from the day you performed step #1 above.
The git push
command does the opposite of pull, so in the third step you are synchronizing your GitHub cloud copy ("origin")
master branch to your local cloneās master branch (which you just updated). Then, switch back to your development branch via
the second git checkout
command. Finally, the last command will merge any changes into your development branch.
If conflicts occur that git cannot resolve, git will provide you a list of the problematic file names, and in these files,
areas of conflict will be demarcated. You will need to edit these files at these problem spots (while removing git's demarcation text),
then do a git add «FILENAME» for each of these files, followed by a final git commit to finish off the merger.
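For reference, git marks each conflicted region in the file like this (a schematic sketch; the labels after the markers vary with your branch names):

```
<<<<<<< HEAD
your edited version of the line(s)
=======
the conflicting version from master
>>>>>>> master
```

Resolving the conflict means replacing the whole block, markers included, with the text you actually want to keep.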
Some additional git diff
commands to help sort out file changes, in case you want to assess the scope of development changes,
are as follows. git diff master upstream/master
will show you all differences between your local master branch and the main
MITgcm repo, i.e., so you can peruse what parallel MITgcm changes have occurred while you were doing your development (this assumes
you have not yet updated your clone's master branch).
You can check for differences on individual files via git diff master upstream/master «FILENAME».
If you want to see all differences in files you have modified during your development, the command is git diff master. Similarly, to see a combined list of both your changes and those occurring to the main repo, use git diff upstream/master.
Aside comment: if you are familiar with git, you might realize there is an alternate way to merge, using the "rebase" syntax. If you know what you are doing, feel free to use this command instead of our suggested merge command above.
7. Finally create a "pull request" (a.k.a. "PR"; in other words, you are requesting that the maintainers pull your changes into the main code repository). In GitHub, go to the fork of the project that you made (https://github.com/«GITHUB_USERNAME»/MITgcm.git). There is a button for "Compare and Pull" in your newly created branch. Click the button! Now you can add a final succinct summary description of what you've done in your commit(s), flag up any issues, and respond to the remaining questions on the PR template form. If you have made non-trivial changes to the code or documentation, we will note this in the MITgcm change log, doc/tag-index. Please suggest how to note your changes in doc/tag-index; we will not accept the PR if this field is left blank. The maintainers will now be notified and be able to peruse your changes! In general, the maintainers will try to respond to a new PR within a week. While the PR remains open, you can go back to step #5 and make additional edits, git adds, git commits, and then redo step #6; such changes will be added to the PR (and maintainers re-notified), no need to redo step #7.
Your pull request remains open until either the maintainers fully accept and
merge your code changes into the main repository, or decide to reject your changes.
Occasionally, the review team will reject changes that are not
sufficiently aligned with and do not fit with the code structure;
the review team is always happy to discuss their decisions, but wants to
avoid people investing extensive additional effort in code that has a fundamental design flaw.
But much more likely than outright rejection, you will instead be asked to respond to feedback,
modify your code changes in some way, and/or clean up your code to better satisfy our style requirements, etc.,
and the pull request will remain open.
In some cases, the maintainers might take initiative to make some changes to your pull request
(such changes can then be incorporated back into your local branch simply by typing git pull
from your branch), but
more typically you will be asked to undertake the majority of the necessary changes.
It is possible for other users (besides the maintainers) to examine or even download your pull request; see Reviewing pull requests.
The current review team is Jean-Michel Campin, Ed Doddridge, Chris Hill, Oliver Jahn, and Jeff Scott.
Coding style guide
Detailed instructions or link to be added.
Creating MITgcm packages
Optional parts of code are separated from the MITgcm core driver code and organized into packages. The packaging structure provides a mechanism for maintaining suites of code, specific to particular classes of problem, in a way that is cleanly separated from the generic fluid dynamical engine. An overview of available MITgcm packages is presented in Section 8, as illustrated in Figure 8.1. An overview of how to include and use MITgcm packages in your setup is presented in Section 8.1.1, with specific details on using existing packages spread throughout Section 8, Section 9, and Section 10. This subsection includes information necessary to create your own package for use with MITgcm.
The MITgcm packaging structure is described below using generic package names ${pkg}. A concrete example of a package is the code for implementing GM/Redi mixing: this code uses the package names ${PKG} = GMREDI, ${pkg} = gmredi, and ${Pkg} = gmRedi.
Package structure
 Compile-time state: Given that each package is allowed to be compiled or not (e.g., all ${pkg} listed in packages.conf are compiled, see Section 8.1.1.1), genmake2 keeps track of each package's compile-time state in PACKAGES_CONFIG.h with CPP option ALLOW_${PKG} being defined (#define) or not (#undef). Therefore, in the MITgcm core code (or code from other included packages), calls to package-specific subroutines and package-specific header file #include statements must be protected within #ifdef ALLOW_${PKG} ... #endif /* ALLOW_${PKG} */ (see below) to ensure that the model compiles when this ${pkg} is not compiled.
 Runtime state: The core driver part of the model can check for a runtime on/off switch of individual package(s) through the Fortran logical flag use${Pkg}. The information is loaded from a global package setup file called data.pkg. Note a use${Pkg} flag is NOT used within the package-local subroutine code (i.e., ${pkg}_«DO_SOMETHING».F package source code).
 Each package gets its runtime configuration parameters from a file named data.${pkg}. Package runtime configuration options are imported into a common block held in a header file called ${PKG}.h. Note in some packages, the header file ${PKG}.h is split into ${PKG}_PARAMS.h, which contains the package parameters, and ${PKG}_VARS.h for the field arrays. The ${PKG}.h header file(s) can be imported by other packages to check dependencies and requirements from other packages (see Section 5.4.2).
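For example, a data.pkg file that switches the GM/Redi package on at runtime contains a Fortran namelist along these lines (a minimal sketch):

```
 &PACKAGES
 useGMRedi=.TRUE.,
 &
```

Each use${Pkg} flag listed here must correspond to a package that was compiled in (ALLOW_${PKG} defined), otherwise the model stops cleanly during initialization, as described below.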
In order for a package's runtime state use${Pkg} to be set to true (i.e., "on"), the code build must have its compile-time state ALLOW_${PKG} defined (i.e., "included"), else mitgcmuv will terminate (cleanly) during initialization. A package's runtime state is not permitted to change during a model run.
Every call to a package routine from outside the package requires a check on BOTH compiletime and runtime states:
#include "PACKAGES_CONFIG.h"
#include "CPP_OPTIONS.h"
.
.
#ifdef ALLOW_${PKG}
# include "${PKG}_PARAMS.h"
#endif
.
.
.
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) THEN
.
.
CALL ${PKG}_DO_SOMETHING(...)
.
ENDIF
#endif
Within an individual package, the header file ${PKG}_OPTIONS.h is used to set CPP flags specific to that package. This header file should include PACKAGES_CONFIG.h and CPP_OPTIONS.h, as shown in this example:
#ifndef ${PKG}_OPTIONS_H
#define ${PKG}_OPTIONS_H
#include "PACKAGES_CONFIG.h"
#include "CPP_OPTIONS.h"
#ifdef ALLOW_${PKG}
.
.
.
#define ${PKG}_SOME_PKG_SPECIFIC_CPP_OPTION
.
.
.
#endif /* ALLOW_${PKG} */
#endif /* ${PKG}_OPTIONS_H */
See for example GMREDI_OPTIONS.h.
Package boot sequence
All packages follow a required "boot" sequence outlined here:
S/R PACKAGES_BOOT()
S/R PACKAGES_READPARMS()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_READPARMS( retCode )
#endif
S/R PACKAGES_INIT_FIXED()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_INIT_FIXED( retCode )
#endif
S/R PACKAGES_CHECK()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_CHECK( retCode )
#else
IF ( use${Pkg} ) CALL PACKAGES_CHECK_ERROR('${PKG}')
#endif
S/R PACKAGES_INIT_VARIABLES()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_INIT_VARIA( )
#endif
 PACKAGES_BOOT() determines the logical state of all use${Pkg} variables, as defined in the file data.pkg.
 ${PKG}_READPARMS() is responsible for reading in the package parameters file data.${pkg} and storing the package parameters in ${PKG}.h (or in ${PKG}_PARAMS.h). ${PKG}_READPARMS is called in S/R packages_readparms.F, which in turn is called from S/R initialise_fixed.F.
 ${PKG}_INIT_FIXED() is responsible for completing the internal setup of a package, including adding any package-specific variables available for output in pkg/diagnostics (done in S/R ${PKG}_DIAGNOSTICS_INIT). ${PKG}_INIT_FIXED is called in S/R packages_init_fixed.F, which in turn is called from S/R initialise_fixed.F. Note: some packages instead use CALL ${PKG}_INITIALISE (or the old form CALL ${PKG}_INIT).
 ${PKG}_CHECK() is responsible for validating basic package setup and inter-package dependencies. ${PKG}_CHECK can also import parameters from other packages that it may need to check; this is accomplished through header files ${PKG}.h. (It is assumed that parameters owned by other packages will not be reset during ${PKG}_CHECK !!!) ${PKG}_CHECK is called in S/R packages_check.F, which in turn is called from S/R initialise_fixed.F.
 ${PKG}_INIT_VARIA() is responsible for initialization of all package variables, and is called after the core model state has been completely initialized but before the core model timestepping starts. This routine calls ${PKG}_READ_PICKUP, where any package variables required to restart the model will be read from a pickup file. ${PKG}_INIT_VARIA is called in packages_init_variables.F, which in turn is called from S/R initialise_varia.F. Note: the name ${PKG}_INIT_VARIA is not yet standardized across all packages; one can find other S/R names such as ${PKG}_INI_VARS, ${PKG}_INIT_VARIABLES, or ${PKG}_INIT.
Package S/R calls
Calls to package subroutines within the core code timestepping loop can vary. Below we show an example of calls to do calculations, generate output and dump the package state (for pickup):
S/R DO_OCEANIC_PHYS()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_DO_SOMETHING( )
#endif
S/R DO_THE_MODEL_IO()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_OUTPUT( )
#endif
S/R PACKAGES_WRITE_PICKUP()
#ifdef ALLOW_${PKG}
IF ( use${Pkg} ) CALL ${PKG}_WRITE_PICKUP( )
#endif
 ${PKG}_DO_SOMETHING() refers to any local package source code file, which may be called from any model/src routine (or from any subroutine in another package). A specific example would be the S/R call gmredi_calc_tensor.F from within the core S/R model/src/do_oceanic_phys.F.
 ${PKG}_OUTPUT() is responsible for writing time-average fields to output files (although the cumulating step is done within other package subroutines). It may also call other output routines (e.g., CALL ${PKG}_MONITOR) and write snapshot fields that are held in common blocks. Other temporary fields are directly dumped to file where they are available. Note that pkg/diagnostics output of ${PKG} variables is generated in pkg/diagnostics subroutines. ${PKG}_OUTPUT() is called in S/R do_the_model_io.F. NOTE: 1) the S/R ${PKG}_DIAGS is used in some packages but is being replaced by ${PKG}_OUTPUT to avoid confusion with pkg/diagnostics functionality; 2) the output part is not yet in a standard form.
 ${PKG}_WRITE_PICKUP() is responsible for writing a package pickup file, used in packages where such is necessary for a restart. ${PKG}_WRITE_PICKUP is called in packages_write_pickup.F, which in turn is called from the_model_main.F.
Note: In general, subroutines in one package (pkgA) that only contain code connected to a second package (pkgB) will be named pkgA_pkgB_something.F (e.g., gmredi_diagnostics_init.F).
Package "mypackage"
To simplify creating the infrastructure required for a new package, we have created pkg/mypackage as essentially an existing package (i.e., all package variables defined, proper boot sequence, output generated) that does not do anything. Thus, we suggest you start with this "blank" package's code infrastructure and add your new package functionality to it, perusing the existing mypackage routines and editing as necessary, rather than creating a new package from scratch.
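A minimal sketch of how one might seed a new package from pkg/mypackage (the package name "mystuff" is hypothetical, and copying/renaming is only the first step; the file contents must still be edited by hand):

```shell
# Hypothetical sketch: seed a new package "mystuff" from pkg/mypackage.
# Run from the directory containing your MITgcm checkout.
cd MITgcm/pkg
cp -r mypackage mystuff
cd mystuff
# Rename the package source files; their contents (subroutine names,
# ALLOW_MYPACKAGE CPP flags, MYPACKAGE.h headers, etc.) still need editing.
for f in mypackage_*.F; do
  mv "$f" "mystuff_${f#mypackage_}"
done
```

After renaming, work through each file replacing the mypackage/MYPACKAGE identifiers with your own package's names.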
MITgcm code testing protocols
The verification directory includes many examples intended for regression testing (some of which are tutorial experiments presented in detail in Section 4). Each one of these test-experiment directories contains "known-good" standard output files (see Section 5.5.2.1) along with all the input (including both code and data files) required for their re-calculation. Also included in verification is the shell script testreport to perform regression tests.
Test-experiment directory content
Each test-experiment directory («TESTDIR», see verification for the full list of choices) contains several standard subdirectories and files which testreport recognizes and uses when running a regression test. The directories and files that testreport uses are different for a forward test and an adjoint test (testreport -adm, see Section 5.5.2), and some test-experiments are set up for only one type of regression test whereas others allow both types of tests (forward and adjoint). Also, some test-experiments allow, using the same MITgcm executable, multiple tests using different parameters and input files, with a primary input setup (e.g., input/ or input_ad/) and corresponding results (e.g., results/output.txt or results/output_adm.txt) and with one or several secondary inputs (e.g., input.«OTHER»/ or input_ad.«OTHER»/) and corresponding results (e.g., results/output.«OTHER».txt or results/output_adm.«OTHER».txt).
 directory «TESTDIR»/code/
 Contains the test-experiment specific source code (i.e., files that have been modified from the standard MITgcm repository version) used to build the MITgcm executable (mitgcmuv) for a forward test (using genmake2 -mods=../code). It can also contain specific source files with the suffix _mpi to be used in place of the corresponding file (without suffix) for an MPI test (see Section 5.5.2). The presence or absence of SIZE.h_mpi determines whether or not an MPI test on this test-experiment is performed or skipped. Note that the original code/SIZE.h_mpi is not directly used as SIZE.h to build an MPI executable; instead, a local copy build/SIZE.h.mpi is derived from code/SIZE.h_mpi by adjusting the number of processors (nPx, nPy) according to «NUMBER_OF_PROCS» (see Section 5.5.2, testreport -MPI); then it is linked to SIZE.h (ln -s SIZE.h.mpi SIZE.h) before building the MPI executable.
 directory «TESTDIR»/code_ad/
 Contains the test-experiment specific source code used to build the MITgcm executable (mitgcmuv_ad) for an adjoint test (using genmake2 -mods=../code_ad). It can also contain specific source files with the suffix _mpi (see above).
 directory «TESTDIR»/build/
 Directory where testreport will build the MITgcm executable for forward and adjoint tests. It is initially empty, except in some cases it will contain an experiment-specific genmake_local file (see Section 3.5.2).
 directory «TESTDIR»/input/
Contains the input and parameter files used to run the primary forward test of this testexperiment.
It can also contain specific parameter files with the suffix
.mpi
to be used in place of the corresponding file (without suffix) for MPI tests, or with suffix.mth
to be used for multithreaded tests (see Section 5.5.2). The presence or absence ofeedata.mth
determines whether or not a multithreaded test on this testexperiment is performed or skipped, respectively.To save disk space and reduce downloading time, multiple copies of the same input file are avoided by using a shell script
prepare_run
. When such a script is found inTESTDIR/input/
, testreport runs this script in directoryTESTDIR/run/
after linking all the input files fromTESTDIR/input/
. directory TESTDIR/input_ad/
 Contains the input and parameter files used to run the primary adjoint test of this testexperiment. It can also
contain specific parameter files with the suffix
.mpi
and shell scriptprepare_run
as described above.  directory TESTDIR/input.Ā«OTHERĀ»/
Contains the input and parameter files used to run the secondary OTHER forward test of this testexperiment. It can also contain specific parameter files with suffix
.mpi
or.mth
and shell scriptprepare_run
(see above).The presence or absence the file
eedata.mth
determines whether or not a secondary multithreaded test on this testexperiment is performed or skipped. directory TESTDIR/input_ad.Ā«OTHERĀ»/
 Contains the input and parameter files used to run the secondary OTHER adjoint test of this testexperiment. It
can also contain specific parameter files with the suffix
.mpi
and shell scriptprepare_run
(see above).  directory Ā«TESTDIRĀ»/results/
 Contains reference standard output used for test comparison.
results/output.txt
andresults/output_adm.txt
, respectively, correspond to primary forward and adjoint test run on the reference platform (currently baudelaire.mit.edu) on one processor (no MPI, single thread) using the reference compiler (currently the GNU Fortran compiler gfortran). The presence of these output files determines whether or not testreport is testing or skipping this testexperiment. Reference standard output for secondary tests (results/output.Ā«OTHERĀ».txt
orresults/output_adm.Ā«OTHERĀ».txt
) are also expected here.  directory TESTDIR/run/
Initially empty directory where testreport will run the MITgcm executable for primary forward and adjoint tests.
Symbolic links (using command
ln s
) are made for input and parameter files (from../input/
or from../input_ad/
) and for MITgcm executable (from../build/
) before the run proceeds. The sequence of links (functionlinkdata
within shell script testreport) for a forward test is: link and rename or remove links to special files with suffix
.mpi
or.mth
from../input/
 link files from ../input/
 execute
../input/prepare_run
(if it exists)
The sequence for an adjoint test is similar, with
../input_ad/
replacing../input/
. link and rename or remove links to special files with suffix
 directory TESTDIR/tr_run.Ā«OTHERĀ»/
Directory created by testreport to run the MITgcm executable for secondary āOTHERā forward or adjoint tests.
The sequence of links for a forward secondary test is:
 link and rename or remove links to special files with suffix
.mpi
or.mth
from../input.OTHER/
 link files from
../input.OTHER/
 execute
../input.OTHER/prepare_run
(if it exists)  link files from
../input/
 execute
../input/prepare_run
(if it exists)
The sequence for an adjoint test is similar, with
../input_ad.OTHER/
and../input_ad/
replacing../input.OTHER/
and../input/
. link and rename or remove links to special files with suffix
The testreport utility
The shell script testreport, which was written to work with genmake2, can be used to build different versions of the MITgcm code, run the various examples, and compare the output. On some systems, the testreport script can be run with a command line as simple as:
% cd verification
% ./testreport -optfile ../tools/build_options/linux_amd64_gfortran
The testreport script accepts a number of command-line options, which can be listed using the -help option. The most important ones are:

-ieee (default) / -fast
  If allowed by the compiler (as defined in the specified optfile), use IEEE arithmetic (genmake2 -ieee). In contrast, -fast uses the optfile default for compiler flags.

-devel
  Use optfile development flags (assuming these are specified in the optfile).

-optfile «/PATH/FILENAME» (or -optfile '«/PATH/F1» «/PATH/F2» ...')
  This specifies a list of "options files" that will be passed to genmake2. If multiple options files are used (for example, to test different compilers or different sets of options for the same compiler), then each options file will be used with each of the test directories.

-tdir «TESTDIR» (or -tdir '«TDIR1» «TDIR2» ...')
  This option specifies the test directory or list of test directories that should be used. Each of these entries should exactly match (note: they are case sensitive!) the names of directories in verification. If this option is omitted, then all directories that are properly formatted (that is, containing an input subdirectory and a results/output.txt file) will be used.

-skipdir «TESTDIR» (or -skipdir '«TDIR1» «TDIR2» ...')
  This option specifies a test directory or list of test directories to skip. The default is to test ALL directories in verification.

-MPI «NUMBER_OF_PROCS» (or -mpi)
  If the necessary file «TESTDIR»/code/SIZE.h_mpi exists, then use it (and all «TESTDIR»/code/*_mpi files) for an MPI-enabled run. The option -MPI followed by the maximum number of processors allows each test-experiment to be built and run using a different number of MPI processors (the specific number is chosen as a multiple of nPx*nPy from «TESTDIR»/code/SIZE.h_mpi, not larger than «NUMBER_OF_PROCS»). The short option (-mpi) can only be used to build and run on 2 MPI processors (equivalent to -MPI 2). Note that the use of MPI typically requires a special command option (see -command below) to invoke the MPI executable.

-command='«SOME COMMANDS TO RUN»'
  For some tests, particularly MPI runs, a specific command might be needed to run the executable. This option allows a more general command (or shell script) to be invoked. The default here is for «SOME COMMANDS TO RUN» to be replaced by mpirun -np TR_NPROC mitgcmuv. If your system requires something other than mpirun, you will need to use this option and specify your computer's syntax. Because the number of MPI processors varies between test-experiments, the keyword TR_NPROC will be replaced by its effective value, the actual number of MPI processors needed to run the current test-experiment.

-mth
  Compile with genmake2 -omp and run with multiple threads (using eedata.mth).

-adm
  Compile and test the adjoint suite of verification runs using TAF.

-clean
  Clean out all files/progress from any previously executed testreport runs.

-match «NUMBER»
  Set the matching criterion to «NUMBER» significant digits (default is 10 digits).

Additional testreport options are available to pass options to genmake2 (called during testreport execution), as well as options to skip specific steps of the testreport shell script. See testreport -help for a detailed list.
In the verification/ directory, the testreport script will create an output directory «tr_NAME_DATE_N», with your computer hostname substituted for NAME, the current date for DATE, followed by a suffix number N to distinguish it from previous testreport output directories. Unless you specify otherwise using the -tdir or -skipdir options described above, all subdirectories (i.e., TESTDIR experiments) in verification will be tested. testreport writes progress to the screen (stdout) and reports into the «tr_NAME_DATE_N/TESTDIR» subdirectories as it runs. In particular, one can find, in each TESTDIR subdirectory, a summary.txt file in addition to log and/or error file(s) (depending on how the run failed, if that occurred). summary.txt contains information about the run and a comparison of the current output with "reference output" (see below for information on how this reference output is generated).
The test comparison involves several output model variables. By default, for a forward test, these are the 2D solver initial residual (cg2d_init_res) and 3D state variables (T, S, U, V) from pkg/monitor output; by default, for an adjoint test, the cost function and gradient check. However, some test-experiments use some package-specific variables from pkg/monitor, according to the specification in the file «TESTDIR»/input[_ad][.«OTHER»]/tr_checklist. Note that at this time, the only variables compared by testreport are those dumped in standard output via pkg/monitor, not output produced by pkg/diagnostics. Monitor output produced from ALL run time steps is compared to assess the significant digit match; the worst match is reported.
At the end of the testing process, a composite summary.txt file is generated in the top «tr_NAME_DATE_N» directory as a compact, combined version of the summary.txt files located in all TESTDIR subdirectories (a slightly more condensed version of this information is also written to the file tr_out.txt in the top verification/ directory; note this file is overwritten upon subsequent testreport runs). Figure 5.2 shows an excerpt from the composite summary.txt, created by running the full testreport suite (in the example here, on a linux cluster, using gfortran):
The four columns on the left are build/run results (successful=Y, unsuccessful=N). Explanation of these columns is as follows:

- Gen2: did genmake2 build the makefile for this experiment without error?
- Dpnd: did the make depend for this experiment complete without error?
- Make: did make successfully generate a mitgcmuv executable for this experiment?
- Run: did execution of this experiment start up and complete successfully?

The next sets of columns show the number of significant digits matched from the monitor output "cg2d", "min", "max", "mean", and "s d" (standard deviation) for variables T, S, U, and V (see column headings), as compared with the reference output. NOTE: these column heading labels are for the default list of variables, even if different variables are specified in a tr_checklist file (for reference, the list of actual variables tested for a specific TESTDIR experiment is output near the end of the summary.txt file appearing in that TESTDIR experiment directory). For some experiments, additional variables are tested, as shown in the "PTR 01", "PTR 02" sets of columns; testreport will detect if tracers are active in a given experiment and check the digit match on their concentration values.

A match to near-full machine precision is 15-16 digits; this generally will occur when a similar type of computer, similar operating system, and similar version of the Fortran compiler are used for the test. Otherwise, different roundoff can occur, and due to the chaotic nature of ocean and climate models, fewer digits (typically, 10-13 digits) are matched. A match of 22 digits generally is due to output being exactly 0.0. In some experiments, some variables may not be used or meaningful, which causes the "0" and "4" match results in several of the adjustment experiments above.

While the significant digit match for many variables is tested and displayed in summary.txt, only one of these is used to assess pass/fail (output to the right of the match test results): the number bracketed by > and <. For example, see above: for experiment advect_cs the pass/fail test occurs on variable "T: s d" (i.e., standard deviation of potential temperature), the first variable in the list specified in verification/advect_cs/input/tr_checklist. By default (i.e., if no file tr_checklist is present), pass/fail is assessed on the cg2d monitor output. See the testreport script for a list of permissible variables to test and a guide to their abbreviations. See the tr_checklist files in the input subdirectories of several TESTDIR experiments (e.g., verification/advect_xz/input/tr_checklist) for examples of syntax (note, a + after a variable in a tr_checklist file is shorthand to compare the mean, minimum, maximum, and standard deviation for the variable).
Reference Output
Reference output is currently generated using the linux server baudelaire.mit.edu, which employs an Intel Xeon Westmere processor running Fedora Core 13. For each verification experiment in the MITgcm repository, this reference output is stored in the file «TESTDIR»/results/output.txt, which is the standard output generated by running testreport (using a single process) on baudelaire.mit.edu using the gfortran (GNU Fortran) compiler version 4.4.5. Using a different gfortran version (or a different Fortran compiler entirely), and/or running with MPI, a different operating system, or a different processor (cpu) type will generally result in output that differs to machine precision. The greater the number of such differences between your platform and this reference platform, typically the fewer digits of matching output precision.
The do_tst_2+2 utility
The shell script tools/do_tst_2+2 can be used to check the accuracy of the restart procedure. For each experiment that has been run through testreport, do_tst_2+2 executes three additional short runs using the tools/tst_2+2 script. The first run makes use of the pickup files output from the run executed by testreport to restart and run for four time steps, writing pickup files upon completion. The second run is similar, except that only two time steps are executed, writing pickup files. The third run restarts from the end of the second run, executing two additional time steps and writing pickup files upon completion. In order to successfully pass do_tst_2+2, not only must all three runs execute and complete successfully, but the pickups generated at the end of the first run must be identical to the pickup files from the end of the third run. Note that a prerequisite to running do_tst_2+2 is running testreport, both to build the executables used by do_tst_2+2 and to generate the pickup files from which do_tst_2+2 begins execution.
The tools/do_tst_2+2 script should be called from the verification/ directory, e.g.:
% cd verification
% ../tools/do_tst_2+2
The do_tst_2+2 script accepts a number of command-line options, which can be listed using the -help option. The most important ones are:

-t «TESTDIR»
  Similar to the testreport option -tdir, specifies the test directory or list of test directories that should be used. If omitted, the test is attempted in all subdirectories.

-skd «TESTDIR»
  Similar to the testreport option -skipdir, specifies a test directory or list of test directories to skip.

-mpi
  Run the tests using MPI; requires the prerequisite testreport run to have been executed with the -mpi or -MPI «NUMBER_OF_PROCS» flag. No argument is necessary, as the do_tst_2+2 script will determine the correct number of processes to use for your executable.

-clean
  Clean up any output generated by do_tst_2+2. This step is necessary if one wants to do additional testreport runs from these directories.

Upon completion, do_tst_2+2 will generate a file tst_2+2_out.txt in the verification/ directory which summarizes the results. The top half of the file includes information from the composite summary.txt file from the prerequisite testreport run. In the bottom half, new results from each verification experiment are given: each line starts with four Y/N indicators showing whether pickups from the testreport run were available, and whether runs 1, 2 and 3 completed successfully, respectively, followed by a pass or fail from the output pickup file comparison test, followed by the TESTDIR experiment name. In each «TESTDIR»/run subdirectory, do_tst_2+2 also creates a log file tst_2+2_out.log which contains additional information. During do_tst_2+2 execution, a separate directory of summary information, including log files for all failed tests, is created in an output directory «rs_NAME_DATE_N», named similarly to the testreport output directory. Note, however, that this directory is deleted by default upon do_tst_2+2 completion; it can be saved by adding the do_tst_2+2 command-line option -a NONE.
Daily Testing of MITgcm
On a daily basis, MITgcm runs a full suite of testreport (i.e., forward and adjoint runs; single-process, single-threaded and MPI) on an array of different clusters, running different operating systems, testing several different Fortran compilers. The reference machine baudelaire.mit.edu is one such daily test machine. When changes in output occur from previous runs, even if as minor as changes in numeric output to machine precision, MITgcm maintainers are automatically notified. Links to summary results from the daily testing are posted at http://mitgcm.org/public/testing.html.
Required Testing for MITgcm Code Contributors
Using testreport to check your new code
Before submitting your pull request for approval, if you have made any changes to MITgcm code, however trivial, you MUST complete the following:

- Run testreport (on all experiments) on an unmodified master branch of MITgcm. We suggest using the -devel option and gfortran (typically installed in most linux environments), although neither is strictly necessary for this test. Depending on how different your platform is from our reference machine setup, typically most tests will pass but some match tests may fail; it is possible one or more experiments might not even build or run successfully. But even if there are multiple experiment fails or unsuccessful builds or runs, do not despair; the purpose at this stage is simply to generate a reference report on your local platform using the master code. It may take one or more hours for testreport to complete.
- Save a copy of this summary output from running testreport on the master branch: from the verification directory, type cp tr_out.txt tr_out_master.txt. The file tr_out.txt is simply a condensed version of the composite summary.txt file located in the «tr_NAME_DATE_N» directory. Note we are not making this file "git-aware", as we have no desire to check it into the repo, so we are using an old-fashioned copy to save the output here for later comparison.
- Switch to your pull request branch, and repeat the testreport sequence using the same options.
- From the verification directory, type diff tr_out_master.txt tr_out.txt, which will report any differences in testreport output between the above tests. If no differences occur (other than timestamp-related), see below to check whether you are required to do a do_tst_2+2 test; otherwise, you are clear to submit your pull request.
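The comparison step above can be sketched as follows (the file contents here are invented placeholders for illustration; real tr_out.txt lines are produced by testreport):

```shell
# Hypothetical sketch of comparing the saved master-branch summary with the
# pull-request-branch summary (placeholder contents, not real testreport output).
printf 'Y Y Y Y pass  advect_cs\n' > tr_out_master.txt
printf 'Y Y Y Y pass  advect_cs\n' > tr_out.txt
if diff tr_out_master.txt tr_out.txt > /dev/null; then
    echo "no differences: clear to submit the pull request"
else
    echo "differences found: investigate before submitting"
fi
```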
Differences might occur due to one or more of the following reasons:

- Your modified code no longer builds properly in one or more experiments. This is likely due to a Fortran syntax error; examine output and log files in the failed experiment TESTDIR to identify and fix the problem.
- The run in the modified code branch terminates due to a numerical exception error. This too requires further investigation into the cause of the error, and a remedy, before the pull request is submitted.
- You have made changes which require changes to input parameters (e.g., renaming a namelist parameter, changing the units or function of an input parameter, etc.). This by definition is a "breaking change", which must be noted when completing the PR template, but it should not deter you from submitting your PR. Ultimately, you and the maintainers will likely have to make changes to one or more verification experiments, but as a first step we will want to review your PR.
- You have made algorithmic changes which change model output in some or all setups; this too is a "breaking change" that should be noted in the PR template. As the usual recourse, if the PR is accepted, the maintainers will regenerate reference output and push it to the affected «TESTDIR»/results/ directories when the PR is merged.

Most typically, running testreport using a single process is a sufficient test. However, any code changes which call MITgcm routines (such as eesupp/src/global_sum.F) employing low-level MPI directives should be tested by running testreport with the -mpi option enabled.
Using do_tst_2+2 to check your new code
If you make any kind of algorithmic change to the code, or modify anything related to generating or reading pickup files, you are also required to complete a do_tst_2+2 test. Again, run the test on both the unmodified master branch and your pull request branch (after you have run testreport on both branches). Verify that the output tst_2+2_out.txt file is identical between branches, similarly to the above procedure for the file tr_out.txt. If the files differ, attempt to identify and fix what is causing the problem.
Automatic testing with Travis-CI
Once your PR is submitted onto GitHub, the continuous integration service Travis-CI runs additional tests on your PR submission. On the "Pull request" tab in GitHub (https://github.com/MITgcm/MITgcm/pulls), find your pull request; initially you will see a yellow circle to the right of your PR title, indicating testing in progress. Eventually this will change to a green checkmark (pass) or a red X (fail). If you get a red X, click the X and then click on "Details" to list the specific tests that failed; these can be clicked to produce a screenshot with error messages.
Note that Travis-CI builds documentation (both html and latex) in addition to code testing, so if you have introduced syntax errors into the documentation files, these will be flagged at this stage. Follow the same procedure as above to identify the error messages so the problem(s) can be fixed. Make any appropriate edits to your pull request, re-git add and re-git commit any newly modified files, and re-git push. Any time changes are pushed to the PR, Travis-CI will re-run its tests.
The maintainers will not review your PR until all Travis-CI tests pass.
Contributing to the manual
Whether you are simply correcting typos or describing undocumented packages, we welcome all contributions to the manual. The following information will help you make sure that your contribution is consistent with the style of the MITgcm documentation. (We know that not all of the current documentation follows these guidelines; we're working on it.)
The manual is written in rst format, short for reStructuredText. rst offers many wonderful features: it automatically does much of the formatting for you, it is reasonably well documented on the web (e.g., primers are available here and here), and it can accept raw latex syntax and track equation labelling for you, in addition to numerous other useful features. On the down side, however, it can be very fussy about formatting, requiring exact spacing and indenting, and seemingly innocuous things such as blank spaces at the ends of lines can wreak havoc. We suggest looking at the existing rst files in the manual to see exactly how something is formatted, along with the syntax guidelines specified in this section, prior to writing and formatting your own manual text.
The manual can be viewed either of two ways: interactively (i.e., web-based), as hosted by readthedocs (https://readthedocs.org/), requiring an html format build, or downloaded as a pdf file. When you have completed your documentation edits, you should double check that both versions are to your satisfaction, particularly noting that figure sizing and placement may be rendered differently in the pdf build.
Section headings

- Chapter headings (the main headings with integer numbers) are underlined with ****
- Section headings (headings with number format X.Y) are underlined with ====
- Subsection headings (headings with number format X.Y.Z) are underlined with ----
- Subsubsection headings (headings with number format X.Y.Z.A) are underlined with ~~~~
- Paragraph headings (headings with no numbers) are underlined with ^^^^

N.B. all underlinings should be the same length as the heading. If they are too short, an error will be produced.
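As a sketch (with invented heading text), the five levels look like this in the rst source:

```rst
Chapter Title
*************

Section Title
=============

Subsection Title
----------------

Subsubsection Title
~~~~~~~~~~~~~~~~~~~

Paragraph Title
^^^^^^^^^^^^^^^
```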
Internal document references
rst allows internal referencing of figures, tables, section headings, and equations, i.e., clickable links that bring the reader to the respective figure etc. in the manual.
To be referenced, a unique label is required. To reference figures, tables, or section headings by number, the rst (inline) directive is :numref:`«LABELNAME»`. For example, this syntax would write out Figure XX on a line (assuming «LABELNAME» referred to a figure), and when clicked, would relocate your position in the manual to figure XX. Section headings can also be referenced so that the name is written out instead of the section number, using instead the directive :ref:`«LABELNAME»`. Equation references have a slightly different inline syntax: :eq:`«LABELNAME»` will produce a clickable equation number reference, surrounded by parentheses.
For instructions on how to assign a label to tables and figures, see below. To label a section heading, labels go above the section heading they refer to, with the format .. _«LABELNAME»: (note the necessary leading underscore). You can also place a clickable link to any spot in the text (e.g., mid-section), using this same syntax to make the label, and using the syntax :ref:`«SOME TEXT TO CLICK ON» <«LABELNAME»>` for the link.
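A small sketch (with invented label, heading, and figure names) showing a section label, a numbered figure reference, and a named link:

```rst
.. _sec_my_section:

Some Section Heading
====================

See :numref:`fig_my_figure` for the setup, or jump back via
:ref:`sec_my_section`; alternatively, :ref:`click this text <sec_my_section>`.
```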
Citations
In the text, references should be given using the standard "Author(s) (Year)" shorthand followed by a link to the full reference in the manual bibliography. This link is accomplished using the syntax :cite:`«BIB_REFERENCE»`; this will produce clickable text, usually some variation on the authors' initials or names, surrounded by brackets.
Full references are specified in the file doc/manual_references.bib using standard BibTeX format. Even if you are unfamiliar with BibTeX, it is relatively easy to add a new reference by simply examining other entries. Furthermore, most publishers provide a means to download BibTeX formatted references directly from their website. Note this file is in approximate alphabetic order by author name. For all new references added to the manual, please include a DOI or a URL in addition to the journal name, volume, and other standard reference information. An example JGR journal article reference is reproduced below; note the «BIB_REFERENCE» here is "bryan:79", so the syntax in the rst file would be "Bryan and Lewis (1979) :cite:`bryan:79`", which will appear in the manual as Bryan and Lewis (1979) [BL79].
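A BibTeX entry of this shape looks like the following sketch (field values shown here should be checked against the actual bryan:79 entry in doc/manual_references.bib):

```bibtex
@Article{bryan:79,
  author  = {Bryan, K. and L. J. Lewis},
  title   = {A water mass model of the world ocean},
  journal = {Journal of Geophysical Research},
  volume  = {84},
  number  = {C5},
  pages   = {2503--2517},
  doi     = {10.1029/JC084iC05p02503},
  year    = {1979},
}
```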
Other embedded links
Hyperlinks: to reference a (clickable) URL, simply enter the full URL. If you want to have different, clickable text instead of displaying the full URL, the syntax is `«CLICKABLE TEXT» <«URL»>`_ (the "<" and ">" are literal characters, and note the trailing underscore). For this kind of link, the clickable text has to be unique for each URL. If you would like to use non-unique text (like "click here"), you should use an "anonymous reference" with a double trailing underscore: `«CLICKABLE TEXT» <«URL»>`__.
File references: to create a link that pulls up MITgcm code (or any file in the repo) in a code browser window, the syntax is :filelink:`«PATH/FILENAME»`. If you want different text to click on (e.g., say you didn't want to display the full path), the syntax is :filelink:`«CLICKABLE TEXT» <«PATH/FILENAME»>` (again, the "<" and ">" are literal characters). The top directory here is https://github.com/MITgcm/MITgcm, so if for example you wanted to pop open the file dynamics.F from the main model source directory, you would specify model/src/dynamics.F in place of «PATH/FILENAME».
Variable references: to create a link that brings up a webpage displaying all MITgcm repo references to a particular variable name (for this purpose we are using the LXR Cross Referencer), the syntax is :varlink:`«NAME_OF_VARIABLE»`. This will work on CPP options as well as FORTRAN identifiers (e.g., common block names, subroutine names).
Symbolic Notation
Inline math is done with :math:`«LATEX_HERE»`.
Separate equations, which will be typeset on their own lines, are produced with:

.. math::
    «LATEX_HERE»
    :label: «EQN_LABEL_HERE»

Labelled separate equations are assigned an equation number, which may be referenced elsewhere in the document (see Section 5.6.2). Omitting the :label: above will still produce an equation on its own line, except without an equation label. Note that using the latex formatting \begin{aligned} ... \end{aligned} across multiple lines of equations will not work in conjunction with unique equation labels for each separate line (any embedded formatting & characters will cause errors too). Latex alignment will work, however, if you assign a single label to the multiple lines of equations.
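For example, a two-line aligned display sharing a single (invented) label could be written as:

```rst
.. math::
    \begin{aligned}
    a &= b + c \\
    d &= e + f
    \end{aligned}
    :label: my_aligned_eqns
```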
There is a "universal document converter" software tool named pandoc that we have found helpful in converting raw latex documents into rst format. To convert a .tex file into .rst, from a terminal window type:
% pandoc -f latex -t rst -o «OUTPUT_FILENAME».rst «INPUT_FILENAME».tex
Additional conversion options are available, for example if you have your equations or text in another format; see the pandoc documentation. Note, however, that we have found that a fair amount of clean-up is still required after conversion, particularly regarding latex equations/labels (pandoc has the unfortunate tendency to add extra spaces, sometimes confusing the rst :math: directive, other times creating issues with indentation).
Figures
The syntax to insert a figure is as follows:

.. figure:: «PATHNAME/FILENAME».*
    :width: 80%
    :align: center
    :alt: «TEXT DESCRIPTION OF FIGURE HERE»
    :name: «MY_FIGURE_NAME»

    The figure caption goes here as a single line of text.

figure::
    The figure file is located in the subdirectory pathname above; in practice, we have located figure files in subdirectories figs off each manual chapter subdirectory. The wildcard * is used here so that different file formats can be used in the build process. For vector graphic images, save a pdf for the pdf build plus a svg file for the html build. For bitmapped images, gif, png, or jpeg formats can be used for both builds; no wildcard is necessary, just substitute the actual extension (see here for more info on compatible formats). [Note: A repository for figure source .eps needs to be created]

:width:
    Used to scale the size of the figure, here specified as an 80% scaling factor (check the sizing in both the pdf and html builds, as you may need to adjust the figure size within the pdf file independently).

:align:
    Can be right, center, or left.

:name:
    Use this name when you refer to the figure in the text, i.e., :numref:`«MY_FIGURE_NAME»`.

Note the indentation and line spacing employed above.
Tables
There are two syntaxes for tables in reStructuredText. Grid tables are more flexible but cumbersome to create. Simple tables are easy to create but limited (no row spans, etc.). For each example below, the raw rst syntax is shown; the built manual renders the corresponding formatted table.

Grid Table Example:

+------------+------------+-----------+
| Header 1   | Header 2   | Header 3  |
+============+============+===========+
| body row 1 | column 2   | column 3  |
+------------+------------+-----------+
| body row 2 | Cells may span columns.|
+------------+------------+-----------+
| body row 3 | Cells may  | - Cells   |
+------------+ span rows. | - contain |
| body row 4 |            | - blocks. |
+------------+------------+-----------+

Simple Table Example:

=====  =====  ======
   Inputs     Output
------------  ------
  A      B    A or B
=====  =====  ======
False  False  False
True   False  True
False  True   True
True   True   True
=====  =====  ======

Note that the spacing of the tables in your .rst file(s) will not match the generated output; rather, when you build the final output, the rst builder (Sphinx) will determine how wide the columns need to be and space them appropriately.
Other text blocks
Conventionally, we have used the rst 'inline literal' syntax around any literal computer text (commands, labels, literal computer syntax, etc.).
Surrounding text with double backquotes (``) results in output html like this.
To set several lines apart in a whitespace box, e.g. useful for showing lines from a terminal session, rst uses ::
to set off a 'literal block'.
For example:
::
% unix_command_foo
% unix_command_fum
(note the :: would not appear in the output html or pdf). A splashier way to outline a block, including a box label,
is to employ what is termed in rst an 'admonition block'.
In the manual these are used to show calling trees and for describing subroutine inputs and outputs. An example of
a subroutine input/output block is as follows:
This is an admonition block showing subroutine in/out syntax
An example of a subroutine in/out admonition box in the documentation is here.
An example of a calling tree in the documentation is here.
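For reference, a minimal sketch of such an admonition block in raw rst might look like this (the subroutine name and argument descriptions are invented placeholders, not actual MITgcm code):

```rst
.. admonition:: SUBROUTINE «MY_SUBROUTINE»
   :class: note

   | /* in:  myThid       :: my thread Id number */
   | /* out: «MY_OUTPUT»  :: description of the output argument */
```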
To show text from a separate file (e.g., to show lines of code, show comments from a Fortran file, show a parameter file etc.),
use the literalinclude
directive. Example usage is shown here:
.. literalinclude:: «FILE_TO_SHOW»
   :start-at: String indicating where to start grabbing text
   :end-at: String indicating where to stop grabbing text
Unlike the :filelink:
and :varlink:
directives, which assume a file path starting at the top of the MITgcm repository,
one must specify the path relative to the current directory of the file (for example, from the doc directory, it would require
../../
at the start of the file path to specify the base directory of the MITgcm repository).
Note one can instead use :start-after: and :end-before:
to get text from the file between (not including) those lines.
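A sketch of that variant (same «…» placeholder convention as above):

```rst
.. literalinclude:: «FILE_TO_SHOW»
   :start-after: String on the line just before the text to grab
   :end-before: String on the line just after the text to grab
```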
If one omits the :start-at: or :start-after:, etc. options, the whole file is shown.
More details for this directive can be found here.
Example usage in this documentation is here,
where the lines to generate this are:
.. literalinclude:: ../../model/src/the_model_main.F
   :start-at: C Invocation from WRAPPER level...
   :end-at: C  :: events.
Other style conventions
Units should be typeset in normal text, with a space between a numeric value and the unit, and exponents added with the :sup:
command.
9.8 m/s\ :sup:`2`
will produce 9.8 m/s². If the exponent is negative, use two dashes (--)
to make the minus sign sufficiently long.
The backslash removes the space between the unit and the exponent. Similarly, for subscripts the command is :sub:.
Alternatively, latex :math: directives (see above) may also be used to display units, using the \text{}
syntax to display non-italic characters.
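Side by side, the two conventions might be written as in this sketch:

```rst
The acceleration is 9.8 m s\ :sup:`--2` (using :sup: markup),
or equivalently :math:`9.8\ \text{m s}^{-2}` (using a :math: directive).
```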
 Todo: determine how to break up sections into smaller files
 discuss  lines
Building the manual
Once you've made your changes to the manual, you should build it locally to verify that it works as expected.
To do this you will need a working python installation with the following modules installed (use pip install «MODULE»
in the terminal):
 sphinx
 sphinxcontrib-bibtex
 sphinx_rtd_theme
Once these modules are installed you can build the html version of the manual by running make html
in the doc
directory.
To build the pdf version of the manual you will also need a working version of LaTeX that includes
several packages that are
not always found in minimal LaTeX installations. The command to build the pdf version is make latexpdf
, which should also be run in the doc
directory.
Reviewing pull requests
The only people with write access to the main repository are a small number of core MITgcm developers. They are the people who will eventually merge your pull requests. However, before your PR gets merged, it will undergo automated testing on Travis-CI, and it will be assessed by the MITgcm community.
Everyone can review and comment on pull requests. Even if you are not one of the core developers you can still comment on a pull request.
To test pull requests locally you should download the pull request branch. You can do this either by cloning the branch from the pull request:
git clone -b «THEIR_DEVELOPMENT_BRANCHNAME» https://github.com/«THEIR_GITHUB_USERNAME»/MITgcm.git
where «THEIR_GITHUB_USERNAME» is replaced by the username of the person proposing the pull request, and «THEIR_DEVELOPMENT_BRANCHNAME» is the branch from the pull request.
Alternatively, you can add the repository of the user proposing the pull request as a remote to your existing local repository. Navigate to your local repository and type
git remote add «THEIR_GITHUB_USERNAME» https://github.com/«THEIR_GITHUB_USERNAME»/MITgcm.git
where «THEIR_GITHUB_USERNAME» is replaced by the username of the person who has made the pull request. Then download their pull request changes
git fetch «THEIR_GITHUB_USERNAME»
and switch to the desired branch
git checkout --track «THEIR_GITHUB_USERNAME»/«THEIR_DEVELOPMENT_BRANCHNAME»
You now have a local copy of the code from the pull request and can run tests locally. If you have write access to the main repository you can push fixes or changes directly to the pull request.
None of these steps, apart from pushing fixes back to the pull request, require write access to either the main repository or the repository of the person proposing the pull request. This means that anyone can review pull requests. However, unless you are one of the core developers you won't be able to directly push changes. You will instead have to make a comment describing any problems you find.
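The remote-based review flow above can be exercised entirely offline. The sketch below stands up two throwaway local repositories in place of the GitHub URLs; all paths and names (pr_demo, contributor, their_branch) are invented for the demonstration:

```shell
set -e
rm -rf /tmp/pr_demo && mkdir -p /tmp/pr_demo && cd /tmp/pr_demo
# "contributor" stands in for the proposer's GitHub fork
git init -q contributor
cd contributor
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m "initial"
git checkout -q -b their_branch          # the pull request branch
git -c user.name=a -c user.email=a@b commit -q --allow-empty -m "proposed fix"
git checkout -q -                        # back to the default branch
cd ..
git clone -q contributor mine            # your local repository
cd mine
git remote add contributor ../contributor          # add the proposer's repo as a remote
git fetch -q contributor                           # download their changes
git checkout -q --track contributor/their_branch   # switch to the PR branch
git rev-parse --abbrev-ref HEAD                    # prints: their_branch
```

From here you can build and test the proposed changes locally exactly as you would your own branch.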
Software Architecture
This chapter focuses on describing the WRAPPER environment within which both the core numerics and the pluggable packages operate. The description presented here is intended to be a detailed exposition and contains significant background material, as well as advanced details on working with the WRAPPER. The tutorial examples in this manual (see Section 4) contain more succinct, step-by-step instructions on running basic numerical experiments, of various types, both sequentially and in parallel. For many projects, simply starting from an example code and adapting it to suit a particular situation will be all that is required. The first part of this chapter discusses the MITgcm architecture at an abstract level. In the second part of the chapter we describe practical details of the MITgcm implementation and the current tools and operating system features that are employed.
Overall architectural goals
Broadly, the goals of the software architecture employed in MITgcm are threefold:
 To be able to study a very broad range of interesting and challenging rotating fluids problems;
 The model code should be readily targeted to a wide range of platforms; and
 On any given platform, performance should be comparable to an implementation developed and specialized specifically for that platform.
These points are summarized in Figure 6.1, which conveys the goals of the MITgcm design. The goals lead to a software architecture which at the broadest level can be viewed as consisting of:
 A core set of numerical and support code. This is discussed in detail in Section 2.
 A scheme for supporting optional 'pluggable' packages (containing for example mixed-layer schemes, biogeochemical schemes, atmospheric physics). These packages are used both to overlay alternate dynamics and to introduce specialized physical content onto the core numerical code. An overview of the package scheme is given at the start of Section 8.
 A support framework called WRAPPER (Wrappable Application Parallel Programming Environment Resource), within which the core numerics and pluggable packages operate.
This chapter focuses on describing the WRAPPER environment under which both the core numerics and the pluggable packages function. The description presented here is intended to be a detailed exposition and contains significant background material, as well as advanced details on working with the WRAPPER. The 'Getting Started' chapter of this manual (Section 3) contains more succinct, step-by-step instructions on running basic numerical experiments both sequentially and in parallel. For many projects simply starting from an example code and adapting it to suit a particular situation will be all that is required.
WRAPPER
A significant element of the software architecture utilized in MITgcm is a software superstructure and substructure collectively called the WRAPPER (Wrappable Application Parallel Programming Environment Resource). All numerical and support code in MITgcm is written to 'fit' within the WRAPPER infrastructure. Writing code to fit within the WRAPPER means that coding has to follow certain, relatively straightforward, rules and conventions (these are discussed further in Section 6.3.1).
The approach taken by the WRAPPER is illustrated in Figure 6.2, which shows how the WRAPPER serves to insulate code that fits within it from architectural differences between hardware platforms and operating systems. This allows numerical code to be easily retargeted.
Target hardware
The WRAPPER is designed to target as broad as possible a range of computer systems. The original development of the WRAPPER took place on a multiprocessor, CRAY Y-MP system. On that system, numerical code performance and scaling under the WRAPPER was in excess of that of an implementation that was tightly bound to the CRAY system's proprietary multitasking and microtasking approach. Later developments have been carried out on uniprocessor and multiprocessor Sun systems with both uniform memory access (UMA) and non-uniform memory access (NUMA) designs. Significant work has also been undertaken on x86 cluster systems, Alpha processor based clustered SMP systems, and on cache-coherent NUMA (CC-NUMA) systems such as Silicon Graphics Altix systems. The MITgcm code, operating within the WRAPPER, is also routinely used on large-scale MPP systems (for example, Cray T3E and IBM SP systems). In all cases, numerical code, operating within the WRAPPER, performs and scales very competitively with equivalent numerical code that has been modified to contain native optimizations for a particular system (see Hoe et al. 1999) [HHA99].
Supporting hardware neutrality
The different systems mentioned in Section 6.2.1 can be categorized in many different ways. For example, one common distinction is between shared-memory parallel systems (SMP and PVP) and distributed-memory parallel systems (for example x86 clusters and large MPP systems). This is one example of a difference between compute platforms that can impact an application. Another common distinction is between vector processing systems with highly specialized CPUs and memory subsystems and commodity microprocessor-based systems. There are numerous other differences, especially in relation to how parallel execution is supported. To capture the essential differences between different platforms the WRAPPER uses a machine model.
WRAPPER machine model
Applications using the WRAPPER are not written to target just one particular machine (for example an IBM SP2) or just one particular family or class of machines (for example Parallel Vector Processor Systems). Instead the WRAPPER provides applications with an abstract machine model. The machine model is very general; however, it can easily be specialized to fit, in a computationally efficient manner, any computer architecture currently available to the scientific computing community.
Machine model parallelism
Codes operating under the WRAPPER target an abstract machine that is assumed to consist of one or more logical processors that can compute concurrently. Computational work is divided among the logical processors by allocating 'ownership' to each processor of a certain set (or sets) of calculations. Each set of calculations owned by a particular processor is associated with a specific region of the physical space that is being simulated, and only one processor will be associated with each such region (domain decomposition).
In a strict sense the logical processors over which work is divided do not need to correspond to physical processors. It is perfectly possible to execute a configuration decomposed for multiple logical processors on a single physical processor. This helps ensure that numerical code that is written to fit within the WRAPPER will parallelize with no additional effort. It is also useful for debugging purposes. Generally, however, the computational domain will be subdivided over multiple logical processors in order to then bind those logical processors to physical processor resources that can compute in parallel.
Tiles
Computationally, the data structures (e.g., arrays, scalar variables, etc.) that hold the simulated state are associated with each region of physical space and are allocated to a particular logical processor. We refer to these data structures as being owned by the processor to which their associated region of physical space has been allocated. Individual regions that are allocated to processors are called tiles. A processor can own more than one tile. Figure 6.3 shows a physical domain being mapped to a set of logical processors, with each processor owning a single region of the domain (a single tile). Except for periods of communication and coordination, each processor computes autonomously, working only with data from the tile that the processor owns. If instead multiple tiles were allotted to a single processor, each of these tiles would be computed on independently of the other allotted tiles, in a sequential fashion.
Tile layout
Tiles consist of an interior region and an overlap region. The overlap region of a tile corresponds to the interior region of an adjacent tile. In Figure 6.4 each tile would own the region within the black square and hold duplicate information for overlap regions extending into the tiles to the north, south, east and west. During computational phases a processor will reference data in an overlap region whenever it requires values that lie outside the domain it owns. Periodically processors will make calls to WRAPPER functions to communicate data between tiles, in order to keep the overlap regions up to date (see Section 6.2.6). The WRAPPER functions can use a variety of different mechanisms to communicate data between tiles.
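As a toy illustration of this overlap update (plain Python standing in for the WRAPPER exchange functions; the array layout, sizes, and values are invented for the sketch), consider two one-dimensional tiles refreshing each other's halo points:

```python
sN, OL = 4, 1                          # interior points and overlap width per tile
tile0 = [0.0] * (sN + 2 * OL)          # 1-D tile padded with halo points at each end
tile1 = [0.0] * (sN + 2 * OL)
for i in range(OL, OL + sN):
    tile0[i] = 1.0                     # tile 0 owns interior values 1.0
    tile1[i] = 2.0                     # tile 1 owns interior values 2.0

# "Exchange": each tile's halo is refreshed from the neighboring tile's interior,
# mimicking what a WRAPPER exchange call does between adjacent tiles.
tile0[OL + sN] = tile1[OL]             # tile 0 east halo <- tile 1 west interior edge
tile1[OL - 1] = tile0[OL + sN - 1]     # tile 1 west halo <- tile 0 east interior edge

print(tile0[-1], tile1[0])             # 2.0 1.0
```

After the exchange, each tile can apply its stencil right up to its interior boundary using only locally held data, until the halos next go stale.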
Communication mechanisms
Logical processors are assumed to be able to exchange information between tiles (and between each other) using at least one of two possible mechanisms, shared memory or distributed memory communication. The WRAPPER assumes that communication will use one of these two styles. The underlying hardware and operating system support for the style used is not specified and can vary from system to system.
Distributed memory communication
Under this mode of communication there is no mechanism, at the application code level, for directly addressing regions of memory owned and visible to another CPU. Instead a communication library must be used, as illustrated below. If one CPU (here, CPU1) writes the value 8 to element 3 of array a, then at least one of CPU1 and/or CPU2 will need to call a function in the API of the communication library to communicate data from a tile that it owns to a tile that another CPU owns. By default the WRAPPER binds to the MPI communication library for this style of communication (see https://computing.llnl.gov/tutorials/mpi/ for more information about the MPI Standard).
CPU1                        |  CPU2
====                        |  ====
                            |
a(3) = 8                    |  WHILE ( a(3) .NE. 8 )
CALL SEND( CPU2, a(3) )     |  CALL RECV( CPU1, a(3) )
                            |  END WHILE
                            |
Many parallel systems are not constructed in a way where it is possible or practical for an application to use shared memory for communication. For cluster systems consisting of individual computers connected by a fast network, there is no notion of shared memory at the system level. For this sort of system the WRAPPER provides support for communication based on a bespoke communication library. The default communication library used is MPI. It is relatively straightforward to implement bindings to optimized platform specific communication libraries. For example the work described in Hoe et al. (1999) [HHA99] substituted standard MPI communication for a highly optimized library.
Communication primitives
Optimized communication support is assumed to be potentially available for a small number of communication operations. It is also assumed that communication performance optimizations can be achieved by optimizing a small number of communication primitives. Three optimizable primitives are provided by the WRAPPER.
 EXCHANGE This operation is used to transfer data between interior
and overlap regions of neighboring tiles. A number of different forms
of this operation are supported. These different forms handle:
 Data type differences. Sixty-four bit and thirty-two bit fields may be handled separately.
 Bindings to different communication methods. Exchange primitives select between using shared memory or distributed memory communication.
 Transformation operations required when transporting data between different grid regions. Transferring data between faces of a cube-sphere grid, for example, involves a rotation of vector components.
 Forward and reverse mode computations. Derivative calculations require tangent linear and adjoint forms of the exchange primitives.
 GLOBAL SUM The global sum operation is a central arithmetic
operation for the pressure inversion phase of the MITgcm algorithm.
For certain configurations, scaling can be highly sensitive to the
performance of the global sum primitive. This operation is a
collective operation involving all tiles of the simulated domain.
Different forms of the global sum primitive exist for handling:
 Data type differences. Sixty-four bit and thirty-two bit fields may be handled separately.
 Bindings to different communication methods. Exchange primitives select between using shared memory or distributed memory communication.
 Forward and reverse mode computations. Derivative calculations require tangent linear and adjoint forms of the exchange primitives.
 BARRIER The WRAPPER provides a global synchronization function called barrier. This is used to synchronize computations over all tiles. The BARRIER and GLOBAL SUM primitives have much in common and in some cases use the same underlying code.
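As a schematic of the GLOBAL SUM behavior described above (plain Python standing in for the distributed primitive; the tile data are invented), each tile first reduces locally, then the partial results are combined collectively:

```python
# Each tile computes a partial sum over the cells it owns; the GLOBAL SUM
# primitive then combines the partial sums across all tiles of the domain.
tiles = {0: [1.0, 2.0], 1: [3.0, 4.0], 2: [5.0, 6.0]}   # invented per-tile data
partial = {t: sum(vals) for t, vals in tiles.items()}    # local reductions
global_sum = sum(partial.values())                       # collective reduction
print(global_sum)   # 21.0
```

In a real run the collective step is the part bound to shared-memory or MPI machinery, which is why its performance can dominate scaling for the pressure inversion.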
Memory architecture
The WRAPPER machine model is aimed at targeting efficiently both systems with highly pipelined memory architectures and systems with deep memory hierarchies that favor memory reuse. This is achieved by supporting a flexible tiling strategy as shown in Figure 6.6. Within a CPU, computations are carried out sequentially on each tile in turn. By reshaping tiles according to the target platform it is possible to automatically tune code to improve memory performance. On a vector machine a given domain might be subdivided into a few long, thin regions. On a commodity microprocessor-based system, however, the same region could be simulated using many more, smaller subdomains.
Summary
Following the discussion above, the machine model that the WRAPPER presents to an application has the following characteristics:
 The machine consists of one or more logical processors.
 Each processor operates on tiles that it owns.
 A processor may own more than one tile.
 Processors may compute concurrently.
 Exchange of information between tiles is handled by the machine (WRAPPER) not by the application.
Behind the scenes this allows the WRAPPER to adapt the machine model functions to exploit hardware on which:
 Processors may be able to communicate very efficiently with each other using shared memory.
 An alternative communication mechanism based on a relatively simple interprocess communication API may be required.
 Shared memory may not necessarily obey sequential consistency, however some mechanism will exist for enforcing memory consistency.
 Memory consistency that is enforced at the hardware level may be expensive. Unnecessary triggering of consistency protocols should be avoided.
 Memory access patterns may need to be either repetitive or highly pipelined for optimum hardware performance.
This generic model, summarized in Figure 6.7, captures the essential hardware ingredients of almost all successful scientific computer systems designed in the last 50 years.
Using the WRAPPER
In order to support maximum portability the WRAPPER is implemented primarily in sequential Fortran 77. At a practical level the key steps provided by the WRAPPER are:
 specifying how a domain will be decomposed
 starting a code in either sequential or parallel modes of operations
 controlling communication between tiles and between concurrently computing CPUs.
This section describes the details of each of these operations. Section 6.3.1 explains how the way in which a domain is decomposed (or composed) is expressed. Section 6.3.2 describes practical details of running codes in various different parallel modes on contemporary computer systems. Section 6.3.3 explains the internal information that the WRAPPER uses to control how information is communicated between tiles.
Specifying a domain decomposition
At its heart, much of the WRAPPER works only in terms of a collection of tiles which are interconnected to each other. This is also true of application code operating within the WRAPPER. Application code is written as a series of compute operations, each of which operates on a single tile. If application code needs to perform operations involving data associated with another tile, it uses a WRAPPER function to obtain that data. The specification of how a global domain is constructed from tiles or alternatively how a global domain is decomposed into tiles is made in the file SIZE.h. This file defines the following parameters:
File: model/inc/SIZE.h
Together these parameters define a tiling decomposition of the style
shown in Figure 6.8. The parameters sNx
and sNy
define the size of an individual tile. The parameters OLx
and OLy
define the maximum size of the overlap extent. This must be set to the
maximum width of the computation stencil that the numerical code
finite-difference operations require between overlap region updates.
The maximum overlap required by any of the operations in the MITgcm
code distributed at this time is four grid points (some of the higher-order advection schemes
require a large overlap region). Code modifications and enhancements that involve adding wide
finite-difference stencils may require increasing OLx
and OLy
.
Setting OLx
and OLy
to too large a value will decrease code
performance (because redundant computations will be performed);
however, it will not cause any other problems.
The parameters nSx
and nSy
specify the number of tiles that will be
created within a single process. Each of these tiles will have internal
dimensions of sNx
and sNy
. If, when the code is executed, these
tiles are allocated to different threads of a process that are then
bound to different physical processors (see the multithreaded
execution discussion in Section 6.3.2), then
computation will be performed concurrently on each tile. However, it is
also possible to run the same decomposition within a process running a
single thread on a single processor. In this case the tiles will be
computed over sequentially. If the decomposition is run in a single
process running multiple threads but attached to a single physical
processor, then, in general, the computation for different tiles will be
interleaved by system level software. This too is a valid mode of
operation.
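The arithmetic implied by these parameters can be sketched in a few lines (Python used purely for illustration; the particular values, including nPx and nPy for the number of processes, are hypothetical, and in the real SIZE.h these are Fortran parameters):

```python
# Illustrative only: how SIZE.h-style parameters determine array and domain sizes.
sNx, sNy = 30, 20   # interior size of one tile
OLx, OLy = 4, 4     # overlap width (must cover the widest stencil used)
nSx, nSy = 2, 1     # tiles per process
nPx, nPy = 1, 2     # processes in x and y (hypothetical values)

# Each per-tile array spans the tile interior plus an overlap on every side:
tile_array_shape = (sNx + 2 * OLx, sNy + 2 * OLy)

# The global domain is tiled exactly by all tiles of all processes:
Nx, Ny = sNx * nSx * nPx, sNy * nSy * nPy

print(tile_array_shape, (Nx, Ny))   # (38, 28) (60, 40)
```

The same relations explain the performance note above: enlarging OLx and OLy inflates every tile array and the redundant halo computations, without changing the global domain.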
The parameters sNx
, sNy
, OLx
, OLy
,
nSx
and nSy
are used extensively
by numerical code. The settings of sNx
, sNy
, OLx
, and OLy
are used to
form the loop ranges for many numerical calculations and to provide
dimensions for arrays holding numerical state. The nSx
and nSy
are
used in conjunction with the thread number parameter myThid
. Much of
the numerical code operating within the WRAPPER takes the form:
DO bj=myByLo(myThid),myByHi(myThid)
DO bi=myBxLo(myThid),myBxHi(myThid)
:
a block of computations ranging
over 1,sNx +/