Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Published: https://doi.org/10.1098/rsta.2013.0340

    Abstract

    Here, I examine some of the many varied ways in which sustained global ocean observations are used in numerical modelling activities. In particular, I focus on the use of ocean observations to initialize predictions in ocean and climate models. Examples are also shown of how models can be used both to assess the impact of current ocean observations and to simulate that of potential new ocean observing platforms. The ocean has never been better observed than it is today and, similarly, ocean models have never been as capable of representing the real ocean as they are now. However, there remain important unanswered questions that can likely only be addressed via future improvements in ocean observations. In particular, ocean observing systems need to respond to the needs of the burgeoning field of near-term climate predictions. Although new ocean observing platforms promise exciting new discoveries, there is a delicate balance to be struck between their funding and that of the current ocean observing system. Here, I identify the need to secure long-term funding for ocean observing platforms as they mature from a mainly research exercise to an operational system for sustained observation over climate change time scales. At the same time, considerable progress continues to be made via ship-based observing campaigns and I highlight some that are dedicated to addressing uncertainties in key ocean model parametrizations. The use of ocean observations to understand the prominent long time scale changes observed in the North Atlantic is another focus of this paper. The exciting first decade of monitoring of the Atlantic meridional overturning circulation by the RAPID-MOCHA array is highlighted. The use of ocean and climate models as tools to further probe the drivers of variability seen in such time series is another exciting development.
I also discuss the need for a concerted combined effort from climate models and ocean observations in order to understand the current slow-down in surface global warming.

    1. Introduction

    The Challenger Society for Marine Science and the UK Scientific Committee on Oceanic Research asked me to consider the role of sustained marine observations in climate modelling and to present a perspective on the future. With such a broad brief, it would be possible to write several very different contributions each focused on a particular facet of the interaction between ocean observations and numerical models. In this personal perspective, I focus on the use of ocean observations for ocean forecasting and climate prediction, across time scales from days to decades. All these activities are being carried out at UK research institutions and I will use examples throughout this paper, with an inevitable bias towards work that my colleagues at the Met Office are involved in. I focus on the modelling and prediction of physical variables in the ocean and surface climate impacts, but I point out that excellent progress is being made by the integration of marine biological components into the latest Earth system climate models (touched on by Mieszkowska et al. [1] and Henson [2] in this Theme Issue).

    The ocean component of climate models is resolving ever finer spatial scales as supercomputing power increases and this presents new opportunities and challenges for comparison with ocean observations. The equivalent contribution focusing on climate modelling, at the 2011 Challenger Society Prospectus meeting, was provided by Shuckburgh [3] and it is to this that I refer the reader for a comprehensive overview of the key developments in ocean modelling over the past 20 years. In this article, I focus primarily on describing what is ‘state-of-the-art’ in terms of the interaction between ocean observations and climate models. While studying the global oceans is becoming an increasingly collaborative international effort, both in terms of observations and modelling, I will focus on the strong contributions made by UK-based scientists.

    The article is structured to step through progressively longer time scales of variability and prediction. I shall show that the physical processes governing variability in the coupled ocean–atmosphere system change and that this places different requirements on our observation of the ocean. In §2, I examine how ocean observations are used in the prediction of ocean and atmospheric weather and in doing so introduce many of the key ocean observation platforms. Observation of longer modes of ocean variability are considered in §3 where I discuss their impact on monthly to seasonal climate prediction and the need to better understand ocean–atmosphere coupling. Understanding the strong multi-decadal variability observed in the North Atlantic is the focus of §4, using evidence from models, observations and decadal climate predictions. In §5, I briefly discuss the role of the ocean in modulating global climate change and what new demands this places on ocean observing systems. Finally, I summarize this perspective in §6.

    2. Operational ocean forecasting and coupled ocean–atmosphere numerical weather prediction

    Forecasts of the state of the ocean (particularly the sea surface) are made operationally every day in an analogous way to numerical weather forecasts. These short-term (typically out to one week ahead) ocean forecasts support a number of commercial activities (e.g. international ship routing), modelling of environmental monitoring (e.g. during oil spills) and naval applications. To make useful forecasts for many user applications, the ocean model needs to be run at a sufficiently high spatial resolution (fractions of a degree latitude/longitude) to start to resolve the ocean mesoscale. This normally requires running at least an eddy-permitting resolution for a global model, with the possibility of higher resolution nested regional models. In order to initialize such a model, a combination of in situ and space-based observations is used.

    (a) Ocean observing system experiments

    To compare the relative contribution of each type of observation, and hence try to quantify their value, ocean observing system experiments (OSEs) can be used. These typically involve sequentially withholding one type of ocean data at a time and assessing the impact this has on either the model ocean state (background) or on forecast skill. Unfortunately, these types of experiments are very expensive to run and so are seldom undertaken. To address this on the short-term ocean forecasting time scales, the follow-on to the Global Ocean Data Assimilation Experiment (GODAE) project, GODAE-OceanView, proposed that multiple centres run near-real-time OSEs. These would assess the relative contribution of each of the current ocean observational platforms to initializing ocean forecasts over a period of about a month. As a first step towards community-wide OSEs, a study was led by Lea [4], using the Met Office Forecasting Ocean Assimilation Model (FOAM), from which I will use some examples in the following sections. Here, I shall highlight some of the tentative findings of an OSE study and so explore possible impacts the different observing platforms have on initializing an ocean model. It is, however, important to highlight that OSEs implicitly assume that the state estimate using all observations is the perfect truth. As this cannot be the case, these OSE results can only give the difference relative to the imperfect state estimation obtained using all observations (which will depend upon the details of the forecasting system used and the quality of the observational data).
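    The OSE bookkeeping described above (difference the all-observations control analysis against the denial run, then summarize) can be sketched in a few lines of Python. This is a toy illustration with invented numbers, not a FOAM diagnostic; the one-dimensional 'section' stands in for a full model field.

```python
import numpy as np

def ose_impact(control_analysis, denial_analysis):
    """Impact of one observation type: the all-observations control
    analysis minus the analysis from the run that withheld that type.

    Note the implicit OSE caveat: the control stands in for the unknown
    true state, so 'impact' is the difference between two imperfect
    state estimates, not an error against truth.
    """
    impact = control_analysis - denial_analysis
    rms = np.sqrt(np.nanmean(impact ** 2))
    return impact, rms

# Toy 'temperature section' (K) analysed with and without one platform;
# the numbers are illustrative, not FOAM output.
control = np.array([20.0, 19.5, 18.0, 16.0, 14.0])  # all observations
denial = np.array([20.1, 19.0, 16.5, 15.5, 14.2])   # one platform withheld

impact, rms = ose_impact(control, denial)
print(impact)         # point-by-point impact of the withheld platform
print(round(rms, 3))  # a single summary statistic of that impact
```

Note how the caveat in the text appears directly in the arithmetic: the 'impact' is a difference between two imperfect analyses, since the true ocean state is unavailable.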

    In each month, for six months, a different ocean observation type was withheld from the assimilation. Here, I shall focus on the impact of the Argo array, the TAO/TRITON array and altimeter data. The full set of ocean observation platforms tested also included: expendable bathythermographs, the JASON-2 altimeter alone and AVHRR sea surface temperature (SST) data. Each experiment was run in parallel with the operational FOAM system that included all observations. At the heart of FOAM is the Nucleus for European Modelling of the Ocean (NEMO) model, run at 0.25° resolution (ORCA025). Assessing the impact of one ocean observation platform each month (see [4] for more details) successfully spreads the computational load of the OSEs over a longer period of time. In doing so however, one has to assume that the impact of each observation type is insensitive to the month in which it is undertaken. Another potential difficulty with these experiments is that they can quickly become out of date as the operational system is improved. For example, these OSEs were performed using the version of FOAM prior to the implementation of the NEMOVAR [5,6] data assimilation scheme.

    (i) The Argo array

    It is no exaggeration to say that the establishment of the Argo array [7] over the past decade has revolutionized sub-surface global observations of temperature and salinity in the top 2000 m of the ocean. In 2007, the freely drifting Argo floats reached the original array design intention of 3000 floats distributed approximately uniformly throughout the world's open oceans. Many countries have contributed by funding and deploying floats, as shown in figure 1. The UK contribution (black dots in figure 1) is currently focused in the Atlantic and Southern Ocean. As I shall discuss throughout this paper, Argo data are used for many applications including climate monitoring and to initialize seasonal and decadal climate predictions. However, I shall first examine the value of the Argo data in an operational ocean forecasting context.

    Figure 1.

    Figure 1. The distribution of the Argo array floats, colour-coded by the country that deployed them, in July 2013. Note that the UK funded floats are black. From http://argo.jcommops.org/maps.html. (Online version in colour.)

    Argo data provide a strong constraint on ocean temperatures beneath the mixed layer at depths not constrained by satellite surface temperature observations. Without the assimilation of the Argo data (figure 2a), differences of order ±2 K develop in the Northern Hemisphere summer when the mixed layer is shallower than 30 m. By contrast, the Southern Hemisphere winter has a relatively deeper mixed layer, largely extending down below 30 m, and hence the Argo data and surface temperature observations both constrain temperatures, and the impact of the Argo data is reduced. However, for deeper layers, such as 100 m (figure 2b), the impacts are near global with biases in excess of 2 K without the Argo array.

    Figure 2.

    Figure 2. The impact of the assimilation of Argo data (full assimilation minus ‘no Argo’) on the state of the FOAM system as assessed after withholding Argo during July 2011. Shown are: (a,b) temperature (K) differences at 30 m and 100 m depth; (c) the difference in salinity (psu) at 100 m; and (d) the difference in SSH (m). Adapted from [4], published by John Wiley and Sons. (Online version in colour.)

    Even more striking however, is the impact on model salinity as shown in figure 2c at 100 m depth. The Argo array is currently the prime source of salinity measurements in the FOAM system and hence its removal leads to considerable differences even in surface salinity [4]. This situation is likely to change in the near future as surface salinity data products from satellite instruments, such as the European Space Agency's Soil Moisture and Ocean Salinity (SMOS [8]) mission, become more widely used. These will provide higher resolution (both spatial and temporal) information about ocean surface salinity; however in situ observations of near surface salinity from Argo floats will still likely be essential for calibration of such satellite products.

    Perhaps more surprising is the impact that Argo has on global sea-surface height (SSH) in the model, especially given that altimeter data are being explicitly assimilated in this system. Figure 2d shows that the impact of removing the Argo data is to produce large-scale regions of bias in the SSH. These likely arise because the altimeter data cannot fully adjust the large-scale (basin) density-driven circulations. Here, Argo clearly works in concert with the altimeter data to provide accurate large-scale initialization of the model dynamic SSH.

    (ii) The Tropical Atmosphere Ocean/Triangle Trans-Ocean Buoy Network array

    The TAO (Tropical Atmosphere Ocean) array [9] was completed in 1994 to monitor conditions in the tropical Pacific with the aim of improving the detection and prediction of the El Niño-Southern Oscillation (ENSO). It was renamed the TAO/TRITON (Tropical Atmosphere Ocean/Triangle Trans-Ocean Buoy Network) array in 2000. It consists of approximately 70 moorings spread in longitude across the Pacific within approximately 5° of the equator. It returns near real-time measurements of temperature and salinity (T & S) in the top 500 m of the ocean, as well as information on winds and surface fluxes.

    The impact of the T & S data provided by the TAO/TRITON array in the FOAM assimilation is illustrated in figure 3. This cross section shows how the temperature biases propagate away from the TAO/TRITON mooring locations (black vertical lines) as the time without the TAO/TRITON increases from 1 March when the data were withheld. By the end of the month, a bias in the thermocline depth can be seen developing in the tropical Pacific. Although some of these changes are likely to be chaotic, there is a slight tendency for the TAO/TRITON data to sharpen the thermocline, which is likely beneficial as models tend to have too diffuse a thermocline [4]. Such biases in the tropical Pacific could be crucial, as the correct initialization of ENSO dynamics is very important for accurate predictions of climate variability in many regions on longer time scales (as discussed in §3). One may have naively anticipated a rather modest impact of the TAO/TRITON array given that Argo floats now provide T & S data at the same depths. However, at any given time, there are only a limited number of Argo floats within ±5° of the equator in the Pacific and each of these only takes profiles every 10 days. The intensive and real-time monitoring of the tropical Pacific provided by the TAO/TRITON array appears to make significant differences (at least in the FOAM system) to the initialization of this crucial region.

    Figure 3.

    Figure 3. The impact of the TAO/TRITON array (full assimilation minus ‘no TAO/TRITON’) on the sub-surface temperatures (K) in the FOAM system in March 2011. A cross section in the Pacific within ±5° from the equator is plotted for three different days through the month. Black vertical bars show the mooring locations. Adapted from [4], published by John Wiley and Sons. (Online version in colour.)

    It is worth mentioning that more tropical moored buoys are being deployed outside of the Pacific. The Prediction and Research Moored Array in the Atlantic now consists of 17 buoys in the tropical Atlantic [10] and is aimed at improving understanding of the modes of tropical Atlantic climate variability. The Research Moored Array for African–Asian–Australian Monsoon Analysis and Prediction (RAMA) array is planned to consist of 43 buoys in the Indian Ocean [11] and at the time of writing (November 2013) it is 70% complete. As its acronym suggests, RAMA will provide important additional data to better initialize model monthly and seasonal predictions of the Indian and African monsoons.

    (iii) Altimeter data

    In the satellite era, space-based observations are playing an increasingly important role in ocean observations. Examples of satellite-derived observational products include: SST, sea ice concentration, surface winds, ocean colour and, more recently, surface salinity. I will focus briefly on the impact of satellite altimeter data. Altimeters use radar to infer the SSH and this provides information about surface ocean currents and sea-level change. It also provides information about the vertically integrated sub-surface ocean; for example, anomalously warm sub-surface conditions will lead to high dynamic topography. Routine altimeter observations started in 1992 with TOPEX/Poseidon and continued with the launch of the JASON-1 mission in 2001. Both of these were joint missions between the USA and France. In 2008, JASON-2 was launched as a joint Eumetsat mission (with significant UK contribution) and similarly JASON-3 is scheduled to launch in 2015.

    In respect of operational ocean forecasting, the ocean topography revealed by the altimeter SSH data provides essential information regarding the ocean mesoscale. When the altimeter data are withheld in the FOAM OSEs, the model eddy-field making up the ocean mesoscale is no longer sufficiently constrained by the remaining observations and hence the information about ocean currents is in error [4]. However, the impact is not limited to the surface as can be seen in figure 4 where significant local temperature differences of ± 2 K or more (at 100 m depth) are observed.

    Figure 4.

    Figure 4. The impact of assimilating satellite altimeter data (full assimilation minus ‘no altimeter’) on the temperatures (K) at 100 m depth in the FOAM system for the last day of May 2011. Note that the high-latitude regions show limited impact as the altimeter data are not used in regions of weak stratification. Adapted from [4], published by John Wiley and Sons. (Online version in colour.)

    (b) Surface drifting buoys for model validation

    So far, I have examined how different observational platforms provide data on the state of the ocean which can then be assimilated into an ocean model. The aim is to provide an accurate initialization of the ocean dynamics and hence produce useful forecast products. However, assessing how well the model ocean has been initialized can be a non-trivial and often degenerate problem if the same observations are used for both assimilation and validation. It can therefore be useful to have an additional type of ocean observation that is not assimilated by the model and so can be used for independent validation of the model state. On the operational ocean forecasting time scales of the FOAM system, surface drifting buoys are one such source. These buoys drift freely at the surface of the ocean and hence directly experience the full complexity of the ocean mesoscale currents and eddies. The positions reported by surface buoys are compared to the positions calculated by the model ocean dynamics. This is shown graphically in figure 5, where the evolution of the surface currents in the Kuroshio current region is examined over two weeks and qualitatively agrees with the tracks of surface drifting buoys.
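    The buoy-versus-model comparison can be mimicked with a toy advection scheme: step a virtual drifter through a model velocity field and measure its separation from the reported buoy positions. Everything below (the Gaussian jet, the 'reported' track, the flat-plane geometry) is an invented stand-in for the FOAM fields and real buoy data.

```python
import numpy as np

def advect(start, velocity_field, dt, steps):
    """Euler-advect a virtual drifter through a (u, v) velocity field (m/s).
    Positions are metres on a flat local plane, a simplification of the
    lat/lon model grid used in practice."""
    track = [np.asarray(start, dtype=float)]
    for _ in range(steps):
        u, v = velocity_field(track[-1])
        track.append(track[-1] + dt * np.array([u, v]))
    return np.array(track)

# Hypothetical zonal jet: 1 m/s eastward at its core, decaying away from y=0
jet = lambda p: (np.exp(-(p[1] / 50e3) ** 2), 0.0)

model_track = advect([0.0, 10e3], jet, dt=6 * 3600, steps=4)  # one day
buoy_track = model_track + np.array([5e3, 2e3])  # invented 'reported' fixes

# Validation metric: mean model-buoy separation (km); small values indicate
# that the analysed currents reproduce the buoy's drift
sep_km = float(np.mean(np.linalg.norm(model_track - buoy_track, axis=1))) / 1e3
print(round(sep_km, 2))
```

A real validation would of course use many buoys and account for the Earth's curvature, but the metric is the same: how far the model currents let a virtual particle stray from the observed track.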

    Figure 5.

    Figure 5. Surface drifting buoys are used for independent model validation in the FOAM system, as graphically illustrated here. Each panel shows the model SSH field (m) for a particular day in November 2011. The black dots mark the reported positions of the drifting buoys for the period 3 days either side of this date. The buoy marked ‘1’ is directly in the Kuroshio current, as can be seen by the rapid progress of this buoy during the 7-day window. A fortnight later, the right-hand panel shows the same ocean buoy has progressed rapidly eastwards in the current. It has also been advected around a strong eddy which appears to be well represented by the model dynamics. Farther from the boundary current, the buoy labelled ‘2’ can be seen circulating around a strong warm-core eddy which is also initialized by the FOAM system (adapted by the author from animations kindly produced by Jennie Waters and Ed Blockley, Crown Copyright). (Online version in colour.)

    (c) Coupled ocean–atmosphere modelling

    So far, I have discussed the forecasting of the ocean. While this is valuable to many users of the sea (e.g. ship routing, naval applications and the offshore oil and gas industry), of more general utility to society are forecasts of the atmosphere, particularly over land. Typically, as the forecast lead time gets longer, the ocean plays a greater role among the factors that drive skilful forecasts of the atmosphere. For example, for the typical 5-day forecast window of numerical weather prediction, the initial conditions of the atmosphere are by far the most important. This is true to the extent that currently most weather forecast systems do not include a dynamical ocean component and instead persist the observed SST anomalies. However, for some regions of the planet, for some weather phenomena, and some modes of ocean–atmosphere variability, coupling to a dynamical ocean model would likely have a positive impact on forecast skill even for short forecast lead times. For example, the UK Met Office is currently working towards coupling a dynamical ocean model into its numerical weather prediction system (T. C. Johns 2013, personal communication).

    Hurricanes, the most intense of Atlantic tropical storms, are one atmospheric phenomenon that can strongly interact with the surface of the ocean, and even the ocean sub-surface, on short (daily) time scales. Altimeter observations of SSH can be used to identify areas of anomalously high dynamic topography and hence diagnose the heat potential available to drive tropical storm intensification. A good example is hurricane Katrina, which hit the continental USA, near New Orleans, in August 2005. Katrina resulted in nearly 2000 fatalities and damage to buildings and infrastructure costing over $100 billion [12]. The path and wind speeds for hurricane Katrina are plotted in figure 6a, along with the map of tropical cyclone heat potential diagnosed from altimeter observations. When the hurricane passes over the high dynamic topography, caused by a warm Gulf of Mexico loop current eddy, it is clearly seen to intensify. How this is modelled/forecast depends crucially on whether the atmospheric model is coupled to a dynamic ocean or not. If the atmosphere is using fixed SST boundary conditions, then the SSTs will not be modified when the hurricane takes energy out of the ocean; hence the ocean is essentially acting as an infinite heat source. However, if the model is coupled to a dynamic ocean, then the upper ocean will cool as the hurricane passes over, which can then potentially feed back on the hurricane and on forecasts after its passage. This is illustrated in figure 6b, which shows a cold wake in SSTs in the days after the passage of hurricane Katrina. It is also worth noting that, at least in the case of hurricane Katrina, altimeter instruments were able to observe the build-up of a storm surge [15]. Such observations could in the future potentially provide valuable information to model the local impact of the surge on coastal buildings and infrastructure.

    Figure 6.

    Figure 6. Extreme ocean–atmosphere coupling. (a) Use of altimeter products to calculate tropical cyclone heat potential (TCHP) and hence help predict the intensification of hurricanes, as was the case for hurricane Katrina in 2005. Highest wind speeds are shown by the largest (red) circles along the hurricane path. Figure adapted from [13]. (b) Two days after the passage of hurricane Katrina, regions of anomalously low SST (K) can be seen along the path of the hurricane. Plotted by the author from Operational SST and Sea Ice Analysis SST analysis [14]. (Online version in colour.)
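    The tropical cyclone heat potential mapped in figure 6a can be illustrated with its standard definition: the heat stored in the water column warmer than 26°C, integrated over depth. The sketch below assumes round-number constants and a simple trapezoidal integral; the temperature profile is invented, and this is not the altimeter-based product of [13].

```python
import numpy as np

RHO = 1025.0  # seawater density (kg m^-3)
CP = 4000.0   # specific heat capacity of seawater (J kg^-1 K^-1), approximate

def tchp(depth_m, temp_c, t_ref=26.0):
    """Tropical cyclone heat potential (kJ cm^-2): the heat stored in
    water warmer than the 26 C reference, integrated over depth with
    the trapezoidal rule."""
    excess = np.clip(np.asarray(temp_c, dtype=float) - t_ref, 0.0, None)
    depth = np.asarray(depth_m, dtype=float)
    joules_per_m2 = RHO * CP * np.sum(
        (excess[1:] + excess[:-1]) / 2.0 * np.diff(depth)
    )
    return float(joules_per_m2) / 1e7  # J m^-2 -> kJ cm^-2

# Illustrative profile: a 29 C warm layer over a thermocline (invented values)
depth = np.array([0, 25, 50, 75, 100, 150])
temp = np.array([29.0, 29.0, 28.0, 26.5, 24.0, 20.0])
print(round(tchp(depth, temp), 2))  # kJ cm^-2
```

Anomalously high dynamic topography, such as a warm loop current eddy, corresponds to a deeper warm layer and hence a larger value of this integral, which is why altimetry can serve as a proxy for intensification potential.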

    On longer forecast time scales of weeks to a month or more ahead (so-called intraseasonal variability), the ocean has a greater potential role to play in modes of weather variability. The Madden–Julian Oscillation (MJO) is the prime mode of variability in the tropics on this time scale. Recent evidence suggests that using a dynamically coupled ocean–atmosphere model can improve the forecast of the MJO via a better representation of the phase relationship between SST and convection, along with the ability to model the propagation of oceanic waves during the forecast period [16].

    3. Development of monthly to seasonal climate prediction and the challenges of ocean–atmosphere coupling

    (a) Ocean resolution in the latest climate models

    Global climate models used to simulate longer term climate variability and change, from seasons to centuries, are computationally expensive to run. The climate models assessed in the Intergovernmental Panel on Climate Change fifth assessment report (IPCC AR5) typically used a 1° resolution ocean model—although often with higher resolution in the tropics. At these resolutions, the representation of the ocean mesoscale is very limited. However, with the ever increasing capacity of supercomputers, the latest state-of-the-art climate models are now progressing to sufficient ocean resolutions to be eddy permitting. Typically, this is a resolution of approximately 0.25°.

    In figure 7, I compare monthly fields of sub-surface ocean temperature, using a development version of the HadGEM3 climate model, at two different resolutions. Clearly, the 0.25° ocean fields show more complexity than the equivalent 1° fields. The Gulf Stream front is significantly sharper and contains more eddy activity, the Agulhas rings in the South Atlantic are explicitly simulated, as are eddies shed by the Gulf of Mexico loop current. While the simulation of these observed features is promising and surely desirable, what benefit could this provide for our simulation and understanding of climate variability? This is a question that I shall investigate in the coming sections. The explicit representation of the Gulf of Mexico loop current eddies is an interesting example given the discussion in the previous section regarding their interaction with Atlantic hurricanes. Spatially resolving these features has the potential to change the simulated frequency and intensity of US land-falling hurricanes.

    Figure 7.

    Figure 7. Improvements in the resolution of the ocean component of global climate models. The finer resolution of the ORCA025 grid (a) is eddy permitting and allows for sharper ocean fronts than the lower ORCA1 resolution (b). In particular, note the features highlighted by the red boxes: Gulf Stream eddies, the Agulhas rings and the Gulf of Mexico loop current eddies simulated by the ORCA025 model. A single monthly mean temperature field at 370 m depth is shown from control run simulations of the two models. Note the figure is only meant to illustrate the additional ocean structures that the current generation of ocean models can simulate and not differences in model climatology. Created by the author, Crown Copyright. (Online version in colour.)

    (b) Seasonal climate prediction

    The strongest contributor to forecast skill on the seasonal forecasting time scale, in most of the tropics (and parts of the extra-tropics), is the ENSO phenomenon. Thanks to both the good spatial and temporal coverage provided by the TAO/TRITON array in the equatorial Pacific, and developments in the model representation of ENSO dynamics, skill in predicting ENSO is very high out to about six months ahead [17]. Translating skill in predicting the phase of ENSO behaviour into skill in the atmosphere over land is a non-trivial task. However, continued work on improving the remote teleconnections between ENSO and regional climate impacts has made considerable progress and now forecasts of rainfall in Africa, India and tropical storm activity are all showing skill [18].

    Predictability in mid-latitude climate is of particular interest to Europe and North America and has been more elusive. The prime driver of wintertime climate in these regions is the North Atlantic Oscillation (NAO). The NAO is typically measured by the difference in pressure between the Azores and Iceland and is a metric for the strength of the mid-latitude atmospheric jet. A negative NAO leads to a weaker jet and anomalously easterly flow which in turn impacts on surface climate, for example leading to lower than normal temperatures in northern Europe. This is illustrated in figure 8. The NAO is notoriously hard to predict, however, with little or no skill seen in the hindcasts (forecasts made in the past) of current operational seasonal forecasting systems (e.g. [19]). However, new evidence from the Met Office GloSea5 seasonal prediction system [18] suggests that the NAO may indeed be highly predictable. GloSea5 obtains a correlation coefficient of 0.6 [20] between the forecast and observed wintertime (December to February) NAO over the 20-year hindcast (forecast period 1992–2011, with hindcasts started from the beginning of November; figure 9). This level of skill in the NAO is very encouraging and will hopefully lead to many useful forecast products, e.g. for the energy and transport sectors. One of the main improvements in GloSea5, over previous versions, is the upgrade to a higher resolution ocean model. The GloSea5 system uses the relatively high-resolution 0.25° NEMO ocean (as in figure 7). This removed some of the bias in North Atlantic SST, leading to a positive impact on the simulation of winter-time atmospheric blocking events [21].
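    The two quantitative ingredients of this paragraph, the station-based NAO index and the correlation skill of an ensemble-mean hindcast, can be sketched as follows. The numbers are invented; this is neither GloSea5 data nor the exact Met Office index definition.

```python
import numpy as np

def nao_index(p_azores_hpa, p_iceland_hpa):
    """Winter NAO index: standardized Azores-minus-Iceland pressure difference."""
    diff = np.asarray(p_azores_hpa, float) - np.asarray(p_iceland_hpa, float)
    return (diff - diff.mean()) / diff.std()

def hindcast_skill(ensemble, observed):
    """Correlation of the ensemble-mean hindcast with the observed index,
    the type of score quoted for GloSea5. `ensemble` is (members, years)."""
    return float(np.corrcoef(ensemble.mean(axis=0), observed)[0, 1])

# Invented station pressures (hPa) for four winters
azores = np.array([1024.0, 1018.0, 1022.0, 1015.0])
iceland = np.array([1000.0, 1008.0, 998.0, 1010.0])
print(np.round(nao_index(azores, iceland), 2))

# Synthetic 20-year, 24-member hindcast sharing a predictable signal
rng = np.random.default_rng(1)
obs_nao = rng.standard_normal(20)
ensemble = 0.7 * obs_nao + rng.standard_normal((24, 20))
print(round(hindcast_skill(ensemble, obs_nao), 2))
```

Averaging over the members suppresses the unpredictable noise each member carries, which is why the ensemble mean, not any single member, is correlated against the observed index.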

    Figure 8.

    Figure 8. Schematic illustration of the winter NAO, with the associated changes in the strength of the sub-polar jet and the consequent surface climate impacts. Crown Copyright. (Online version in colour.)

    Figure 9.

    Figure 9. The skill in predicting the winter (December to February) NAO from hindcasts using the Met Office GloSea5 seasonal prediction system. The black line is the observed NAO and the red (grey) line is the ensemble mean prediction with individual members also plotted. After [20]. Crown Copyright. (Online version in colour.)

    In figure 10, the relative strength of the contributions of different drivers to the NAO signal is shown. Of relevance to this paper are the oceanic drivers: tropical ENSO variability, the North Atlantic upper ocean heat content and the Arctic sea-ice extent (here assessed in the Kara Sea region). The model produces a similar relationship to the observations for the connection with ENSO. However, the connection with the surface temperatures in both the Atlantic and Arctic regions appears to be much weaker than observed. This suggests that much of the model skill originates from the remote teleconnection with ENSO. This likely occurs via recently discovered global atmospheric teleconnection pathways in which the stratosphere plays a key role [22]. This is interesting as it requires both the accurate initialization of the tropical Pacific ocean and the development of climate models that resolve the stratosphere in order to obtain skilful predictions of extra-tropical Northern Hemisphere winter climate.

    Figure 10.

    Figure 10. Examining the different potential drivers of the NAO, by compositing the (December to February) mean sea-level pressure field, based on different oceanic and atmospheric indices from the preceding November. The mean sea-level pressure (hPa) field is shown for composites of the strongest third minus the weakest third of each index. A similar analysis is carried out for the observations (right). In all panels, the model appears to have the right sign of response to these drivers; however, only the ENSO connection is approaching the right signal strength; the other three indices show too weak a model response. After [20]. Crown Copyright. (Online version in colour.)
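    The compositing construction used in figure 10 (the mean field over the strongest third of a driver index minus the mean over the weakest third) reduces to a few lines. The toy field and index below are random stand-ins for the mean sea-level pressure field and the November driver indices.

```python
import numpy as np

def tercile_composite(field, index):
    """Composite difference: mean of `field` over the years with the
    strongest third of `index`, minus the mean over the weakest third.
    `field` has shape (time, ...); `index` has shape (time,)."""
    order = np.argsort(index)
    n = len(index) // 3
    weak, strong = order[:n], order[-n:]
    return field[strong].mean(axis=0) - field[weak].mean(axis=0)

# Toy demonstration: a two-point 'pressure field' that responds with
# opposite signs to the driver index (all values invented).
rng = np.random.default_rng(0)
index = rng.standard_normal(21)
field = index[:, None] * np.array([1.0, -1.0]) + 0.1 * rng.standard_normal((21, 2))
comp = tercile_composite(field, index)
print(np.round(comp, 2))  # first point positive, second negative
```

The sign and amplitude of the composite difference are what figure 10 compares between model and observations: the model recovers the sign of each driver's imprint but, except for ENSO, with too weak an amplitude.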

    There is a strong dependence of seasonal prediction skill, both in the tropics and increasingly in the extratropics, on correctly initializing ENSO in the tropical Pacific. As was discussed in §2, the TAO/TRITON array is the key component of the ocean observing system for ENSO. It should be a concern then that the TAO/TRITON array has suffered considerable degradation recently and many of the 70 moored buoys are no longer returning data. In fact, as of the start of October 2013, I believe the array has not been serviced for 18 months. This is linked to budget cuts facing NOAA that are placing harsh constraints on the ship-time needed for servicing the array. This is a stark illustration that sustaining even the most demonstrably useful ocean observations is not immune to political and financial pressures.

    (c) Challenges for improving the representation of ocean–atmosphere coupling and the need for better estimates of air–sea fluxes

    An interesting result from the GloSea5 seasonal hindcast experiments is that the model ocean does not seem to drive the model atmosphere with the same strength as that in the real world. Many tens of ensemble members (versions of the model started from virtually the same initial conditions but with some random perturbation) are needed in order to achieve the skilful NAO forecast discussed above (individual members are red crosses in figure 9). The fact that so many ensemble members are required to capture the signal in the model is indicative of a very low signal-to-noise ratio and hence suggestive of weak ocean–atmosphere coupling in the model. Theoretical considerations suggest that increasing the ensemble size still further would raise the skill towards a level that asymptotes at a correlation score of approximately 0.8 [20].
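    This signal-to-noise argument can be made concrete with a toy signal-plus-noise model (a sketch under assumed variances, not the analysis of [20]): each member is predictable signal plus independent noise, observations are signal plus unpredictable noise, and averaging N members divides the member-noise variance by N while the observational noise sets the asymptote.

```python
import math

def ensemble_mean_correlation(n_members, var_signal, var_member_noise, var_obs_noise):
    """Expected correlation between an N-member ensemble mean and observations
    under a toy signal-plus-noise model: member_i = signal + noise_i,
    obs = signal + unpredictable noise. The covariance between ensemble mean
    and obs is the signal variance alone."""
    var_ens_mean = var_signal + var_member_noise / n_members
    var_obs = var_signal + var_obs_noise
    return var_signal / math.sqrt(var_ens_mean * var_obs)

# Low per-member signal-to-noise (variance ratio 1:9); the unpredictable
# variance is chosen so that skill asymptotes at a correlation of 0.8
skill = {n: ensemble_mean_correlation(n, 1.0, 9.0, 0.5625) for n in (1, 4, 24, 10**6)}
```

    With these assumed variances a single member correlates at only about 0.25, while a 24-member mean approaches 0.7, illustrating why large ensembles are needed when the per-member signal-to-noise ratio is low.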

    Modelling studies that prescribe the ocean SST do demonstrate an impact on the atmosphere [23–25]. Further support for the notion that ocean–atmosphere coupling is too weak in fully coupled models comes from the large (time-lagged) ensembles required to produce skilful predictions of Atlantic hurricane frequency in decadal predictions [26]. Additionally, the weak lagged response of models to solar forcing changes [27], relative to the observed relationship, could be interpreted as a sign of too weak an ocean–atmosphere coupling. Of course, one way around this problem is to run large forecast ensembles (as is done in seasonal forecasting [20]).

    The sharpness of oceanic fronts may be key to propagating signals into the atmosphere. Minobe et al. [28] found that significantly degrading the spatial resolution of the driving SST forcing fields led to a sharp decrease in the amount of precipitation over the Gulf Stream region (figure 11). This is because sharp SST gradients in the resolved Gulf Stream region lead to anomalous adjustments of the local pressure field, which in turn cause much stronger wind convergence in this region and hence the narrow band of strong precipitation. Even more interestingly, this response may not be restricted to the marine boundary layer, but instead may extend up into the free troposphere (figure 11d) and impact the large-scale atmospheric circulation. This work illustrates how, as we continue to increase the spatial resolution of ocean models, hence resolving ever smaller structures and gradients in SST, we may expect to see an increase in ocean–atmosphere coupling strength. For example, the Met Office, in collaboration with the National Oceanography Centre Southampton, is currently developing a 1/12th of a degree ocean model for use in the next generation of coupled climate models.

    Figure 11.

    Figure 11. The impact of the spatial resolution of SST field on the atmosphere. (a) The observed relationship between the strong gradient in the SST found in the Gulf Stream (shown by the black contours) and the observed precipitation rate (shaded). (b) Models driven with high-resolution SST data show a similar strength of precipitation. (c) If the SST field is low resolution, the SST gradients are weaker and this strongly damps the precipitation rate. (d) This response is not restricted to the atmospheric boundary layer but extends into the free troposphere, potentially driving larger scale atmospheric anomalies. Adapted from [28], permission from Macmillan Publishers Ltd, copyright (2008). (Online version in colour.)

    Increases in model resolution alone may not be sufficient to address the question of ocean–atmosphere coupling, however, and more work is likely needed to carefully compare observed and modelled heat fluxes. The ocean surface boundary layer is key in controlling the exchange of heat, water, carbon and nutrients between the deep ocean and the atmosphere. The Natural Environment Research Council (NERC) funded Ocean Surface Mixing, Ocean Sub-mesoscale Interaction Study (OSMOSIS; http://www.bodc.ac.uk/projects/uk/osmosis/) aims to develop new, more realistic model parametrizations. This will be achieved by taking high vertical resolution measurements sustained over an annual cycle, using continuous mooring and ocean glider measurements. The ultimate aim of OSMOSIS is to improve weather and climate predictions.

    One region which is particularly lacking in ocean flux estimates is the Southern Ocean. This is unfortunate as many of the latest generation of climate models appear to have a strong warm bias in the Southern Ocean; this can reach more than 5 K locally in some models. Rectifying SST biases in a coupled model is unfortunately non-trivial, as the root cause may lie in the representation of atmospheric processes (e.g. clouds or winds), ocean processes (e.g. vertical mixing) and/or a combination of nonlinear ocean–atmosphere feedbacks. Furthermore, the scarcity of in situ flux observations leads to uncertainty in Southern Ocean flux estimates, which in turn makes closing the global heat budget difficult. Many different flux products have been developed using in situ data, satellite data, numerical models and combinations of these. However, there are significant differences between these products [29]. Coverage of in situ observations of fluxes in the Southern Ocean is generally very poor and particularly so in austral winter. In fact, the first successful air–sea flux mooring measurements of the Southern Ocean were only carried out in 2010 using the Southern Ocean Flux Station (SOFS) [30]. This gave us the first estimate of the annual air–sea flux climatology (a small net ocean heat loss of 10 W m−2) and the seasonal cycle, including several extreme turbulent heat loss events. That SOFS deployment was extremely valuable but it lasted only 1 year and sampled a single location in a very large ocean. Sustained air–sea flux observations at more sites in the Southern Ocean are needed in order to better estimate the climatological heat fluxes and help to resolve the causes of model biases in this region.

    4. Towards understanding and predicting decadal variability in the North Atlantic

    Beyond the seasonal horizon, the ocean becomes the dominant source of memory in the climate system. Because the dynamics of the ocean evolve more slowly than those of the atmosphere, modes of variability are possible on decadal to centennial time scales. Such long time scale modes of variability are thought to exist in the Atlantic and the Pacific basins, referred to as the Atlantic Multi-decadal Variability (AMV, sometimes referred to as the Atlantic Multi-decadal Oscillation (AMO)) and the Pacific Decadal Oscillation, respectively. While both of these modes are identified primarily by their expression in SST observations, they are often thought to be mechanistically driven by sub-surface ocean circulations. Here, I shall focus on discussing the better studied AMV.

    In the Atlantic basin, sufficient observations of SST exist back into the nineteenth century to be able to characterize multi-decadal variability. This is illustrated in figure 12 where clearly multi-decadal variability is observed in North Atlantic temperatures. Regressing the AMV index against maps of surface temperature reveals that the North Atlantic is almost wholly impacted. The AMV has been associated with many climate impacts, including drought in the African Sahel region (e.g. [32]) and the frequency of Atlantic hurricanes [33]. There are also believed to be extra-tropical climate impacts, such as influence on European summer temperatures and precipitation [34], as shown in figure 13.

    Figure 12.

    Figure 12. (a) The AMV index, as calculated from the detrended area-weighted SST in the North Atlantic. The time series is smoothed by a Chebyshev filter with a half-power at 13.3 years. (b) The regression of the AMV index onto the annual surface temperatures from HadCRUTv. Crown Copyright, also published in [31]. (Online version in colour.)
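    A minimal sketch of such an index calculation is given below. The filter order and ripple are assumptions, and the passband edge is simply placed at 1/13.3 yr−1 rather than matching the exact half-power specification quoted in the caption:

```python
import numpy as np
from scipy.signal import cheby1, filtfilt, detrend

def amv_index(annual_sst, cutoff_years=13.3, order=4, ripple_db=0.5):
    """Detrend an area-weighted annual-mean North Atlantic SST series and
    low-pass it with a Chebyshev type-I filter (applied forwards and
    backwards for zero phase shift). The passband edge approximates the
    half-power point; the exact half-power frequency depends on the ripple."""
    x = detrend(np.asarray(annual_sst, dtype=float))
    b, a = cheby1(order, ripple_db, 1.0 / cutoff_years, btype="low", fs=1.0)
    return filtfilt(b, a, x)

# Synthetic SST: a multi-decadal oscillation, a warming trend and weather noise
rng = np.random.default_rng(0)
years = np.arange(1870, 2014)
sst = (0.2 * np.sin(2 * np.pi * (years - 1870) / 60.0)
       + 0.005 * (years - 1870)
       + 0.1 * rng.standard_normal(years.size))
amv = amv_index(sst)
```

    The detrending removes the secular warming signal and the low-pass filter suppresses interannual noise, leaving the multi-decadal oscillation.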

    Figure 13.

    Figure 13. Composite differences between years of positive AMV (1931–1960, 1996–2009) and negative AMV (1964–1993) are shown for precipitation over the Europe region. The change is expressed as a percentage of the 1901–2009 climatological mean. The warm North Atlantic temperatures (positive AMV) are associated with wet summers over northern Europe. Adapted from [34], permission from Macmillan Publishers Ltd, copyright (2012). (Online version in colour.)

    (a) What can we learn from free-running climate models about the drivers of Atlantic multi-decadal variability?

    Free-running climate models, those run with no interannually varying external forcings, can be used to explore whether AMV is an emergent feature of the Atlantic and, if so, how it may operate. The majority of climate models do indeed exhibit some kind of AMV and this is found to be driven by changes in the Atlantic meridional overturning circulation (AMOC) [31,35]. This is illustrated in figure 14 for the HadCM3 climate model, where a warm (positive) phase of the AMV is linked with an increase in the strength of the AMOC. However, models disagree over the period of AMV [36]. Detailed analyses of some models have revealed mechanisms to explain the characteristic time scales of variability. For example, the studies in [37,38] find a tropical precipitation–salinity advection feedback leading to characteristic variability on near-centennial time scales. Some agreement between models is also shown in a recent multi-model study [39], where the AMOC appears to lag (by 2–6 years) positive density anomalies in the Labrador Sea, and to lead (by 1–5 years) an increase in North Atlantic SST.
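    Lead–lag relationships of this kind are usually diagnosed with lagged correlations between annual-mean series. A minimal sketch, using synthetic data in which an assumed 'AMOC' series follows a 'density' series after 4 years:

```python
import numpy as np

def lagged_correlation(x, y, max_lag):
    """Correlate x with y over lags from -max_lag to +max_lag (time steps).
    A positive lag means x leads y by that many steps."""
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag > 0:          # x leads y
            a, b = x[:-lag], y[lag:]
        elif lag < 0:        # y leads x
            a, b = x[-lag:], y[:lag]
        else:
            a, b = x, y
        corrs.append(np.corrcoef(a, b)[0, 1])
    return lags, np.array(corrs)

# Synthetic example: the 'AMOC' lags 'Labrador Sea density' by 4 years
rng = np.random.default_rng(1)
base = rng.standard_normal(204)
density = base[4:]                                   # leading series
amoc = base[:-4] + 0.1 * rng.standard_normal(200)    # lagging series + noise
lags, corrs = lagged_correlation(density, amoc, 10)
```

    The correlation peaks at the built-in lag of +4 years, which is how lag relationships such as those in [39] are typically read off.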

    Figure 14.

    Figure 14. Long model simulations (from the HadCM3 climate model) with no external forcings show decadal time scale fluctuations in surface temperature (a) which are associated with anomalous AMOC strength (b). Anomalously strong AMOC leads to warm conditions in the North Atlantic—positive AMV. Crown Copyright, also published in a modified form in [31]. (Online version in colour.)

    Given the presence of AMV in model control runs, and the suggestion that it is driven by the ocean dynamics of the AMOC, we can ask ourselves whether this Atlantic variability is predictable. Furthermore, what ocean observations would be required to initialize it? Owing to the relative paucity of ocean sub-surface observations, we again turn to models to try and answer these questions. Early studies used the so-called ‘perfect model’ approach. Here, a large ensemble is created by adding very small random perturbations onto the base model initial conditions and then assessing how well the ensemble members (and/or the ensemble mean) track the original, unperturbed, model evolution. Such a multi-model study found that the AMOC was essentially predictable several years to decades ahead [40]; an example is shown in figure 15a for the HadCM3 model [41]. Such experiments are highly idealized however, as the model has instantaneous and spatially complete information about the model state in all variables. For example, the model has full knowledge of ocean currents and winds, as well as full-depth information about the state of the ocean.

    Figure 15.

    Figure 15. Idealized model experiments to show the predictability of the AMOC when an ensemble is created with knowledge of: (a) all variables instantaneously, (b) monthly mean ocean T & S data globally in the top 2000 m and atmospheric information (6 hourly surface pressure, three-dimensional u and v winds and potential temperature) and finally (c) is the same as (b) but without any atmospheric initialization. The black line shows the evolution of the original control run (the ‘truth’) and different coloured thick lines represent the ensemble mean of the different hindcast start dates and the thin lines show the 90% confidence interval assessed from the spread of the ensemble members. Note that skill in predicting the AMOC evolution, as assessed by the RMSE statistic, is similar between all three different experiments. Adapted from [41]. (Online version in colour.)

    In order to better simulate the observational data that are currently available to initialize decadal climate predictions, more realistic model simulations have been performed [41]. For example, taking monthly mean observations of only temperature and salinity in the top 2000 m of the ocean (similar to the Argo array sampling frequency and depth range) was found to be sufficient to produce skilful forecasts of the AMOC (figure 15b). Initial conditions in the atmosphere were found to be less important on the decadal forecast time scale, at least in that model (figure 15c).

    The experiments examined so far do not tell us which observation locations are key to realizing predictability of the AMOC, and hence of the associated climate impacts. This question was the subject of further idealized model experiments in [42], where the impact of removing data in each of the following three regions was assessed in turn: (i) the North Atlantic subpolar gyre (SPG), (ii) the tropical Atlantic, and (iii) the tropical Pacific (figure 16a). In these experiments, the AMOC was assessed at a latitude of 26° N (the latitude of the RAPID array, discussed later), which is significantly south of the North Atlantic SPG region and in fact on the edge of the tropical Atlantic region. Figure 16b shows that sustaining AMOC forecast skill is dependent on initializing the North Atlantic SPG region. This is a nice illustration of the widely held conceptual model of the AMOC (e.g. [43]) as being driven by density changes in the high-latitude North Atlantic SPG region. In terms of surface climate impacts, this paper also identified the North Atlantic SPG region as being key to initialize in order to obtain skill in predicting the multi-annual frequency of Atlantic tropical storms.

    Figure 16.

    Figure 16. Testing the impact of initializing different ocean regions on predictability. (a) The three regions consecutively withheld, the tropical Pacific (NoTROPPAC), the tropical Atlantic (NoTROPAT) and the North Atlantic sub-polar gyre region (NoNAT). (b) The skill in predicting the AMOC drops to the level of mere persistence (the skill obtained by using the previous 5-year mean as a forecast) when the North Atlantic sub-polar gyre is not initialized (adapted from [42]). (Online version in colour.)

    (b) Initialized predictions

    Idealized model experiments are useful tools for understanding mechanisms of variability, assessing potential predictability and suggesting which are the key ocean observations. However, such studies are always limited by the fidelity of the climate model(s) being used and also by the fact that the real world is subject to external forcings (e.g. volcanoes or anthropogenic industrial emissions) in addition to internal variability.

    Real-world decadal predictions have emerged in the last few years, with one of the first being the Met Office Decadal Prediction System (DePreSys) [44]. Such predictions use a climate model that has been initialized from a best estimate of the state of the ocean and atmosphere and then run forwards in time with projected changes in the anthropogenic emissions (e.g. greenhouse gases (GHGs) and aerosols) included. Given that the prediction time scale of interest is generally 2–20 years, hindcasts need to be made over half a century or more in order to assess their skill (typically 1960–present). This places very tough demands on the historical ocean observing systems.

    As discussed above, the Argo observations of T & S down to 2000 m are thought to be well suited for use in initialized predictions and hence it is essential that the Argo array (or equivalent) be sustained into the future. Prior to the Argo array, regular global observations of sub-surface T & S (and of salinity in particular) were much sparser. This is shown in figure 17, where the profiles used in the EN4 dataset [45] are plotted as a function of time. The majority of observations prior to the Argo array consisted of temperature alone. This simple figure shows only the number of global profiles per year and belies the strong spatial inhomogeneity of those observations that are available. For example, there is a very strong bias towards the Northern Hemisphere for most of the observational record.

    Figure 17.

    Figure 17. The number of ocean profiles available in the EN4 dataset [45] as a function of year. Note they are split into those profiles that were just temperature data (green), just salinity (blue; practically zero) and then those that were both temperature and salinity are stacked on top (red). Since the deployment of the Argo array, the number of joint T and S profiles has begun to exceed that of temperature alone. Kindly provided by Simon Good, Crown Copyright. (Online version in colour.)

    Despite the paucity of data prior to the Argo era, and particularly in the 1960s and 1970s, decadal prediction systems need initialization strategies to make the best of the observations available. DePreSys uses an objective ocean analysis that is constructed initially using covariances from a global climate model [46] and then improved via an iterative approach that effectively creates hybrid observed–model covariances. These covariances are used to infill the regions of little or no data and also allow for the reconstruction of the sparsely observed salinity field via the covariance with the better observed temperatures. These covariances can potentially be of global reach, allowing the estimation of conditions in regions quite remote from observation locations. Of course, the drawbacks of this technique are the initial reliance on the model covariances and the assumption of stationarity in covariance between points over the observed period. Many other decadal prediction systems use ocean analysis products created via the use of some kind of local covariance scheme. Typically, the length scales used for the influence of any particular observation are quite small in local covariance schemes. In the recent, data-rich period, this is not likely a problem. However, in data-sparse periods (e.g. the 1960s), the model is too often relaxed to some background climatological state. Other systems using just SST and salinity have also been investigated [47], further motivating the sustained observations of surface salinity.
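    The reconstruction of a sparsely observed salinity field via its covariance with temperature can be illustrated with a toy linear T–S regression. The numbers and the linear form are assumptions for illustration only; the operational scheme uses full three-dimensional hybrid observed–model covariances:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy co-located observations with an underlying linear T-S relationship
t_obs = rng.uniform(2.0, 20.0, 500)                              # deg C
s_obs = 34.0 + 0.08 * t_obs + 0.05 * rng.standard_normal(500)    # psu

# Learn the relationship where both variables were observed (a stand-in for
# the hybrid observed-model covariances used in the analysis)
A = np.vstack([t_obs, np.ones_like(t_obs)]).T
slope, intercept = np.linalg.lstsq(A, s_obs, rcond=None)[0]

# Infill salinity for temperature-only profiles
t_only = np.array([5.0, 10.0, 15.0])
s_est = slope * t_only + intercept
```

    The regression recovers the built-in T–S slope and so provides salinity estimates for profiles that measured temperature alone, which is the essence of covariance-based infilling.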

    Still other systems do away with ocean observations altogether and just force the ocean with observed atmospheric fluxes [48]. The essential idea behind this technique is that the ocean is ultimately a slave to the atmosphere and that the ocean circulation can be quickly established using surface fluxes alone. This novel approach circumvents the changes in ocean observing systems, although of course it relies heavily on accurate atmospheric reanalyses and a faithful response by the model ocean. However, it might be expected that the ocean is still responding to atmospheric anomalies from many decades, or even centuries, earlier and hence the use of actual ocean observations is likely essential. Fully coupled data assimilation systems are also being developed that attempt to provide a single-state estimate for the ocean–atmosphere coupled system. One such system [49], tested in a perfect model scenario, raised the possibility that an observing system such as the Argo array is capable of reconstructing the AMOC (similar to the results shown in figure 15).

    (c) Understanding North Atlantic variability

    Since 2007, the field of decadal prediction (sometimes referred to as ‘near-term prediction’) has grown rapidly and chapter 11 of IPCC AR5 is dedicated to it [50]. In agreement with many previous studies, IPCC AR5 finds SST in the North Atlantic to be one of the more skilfully predicted regions in the hindcasts. However, the source of this predictability is not entirely obvious.

    Before I examine the progress in understanding the role of North Atlantic ocean dynamics, it is worth briefly mentioning another potential source of predictability for initialized decadal predictions: external forcings. There is mounting evidence that external forcings may strongly modulate the AMV and possibly the AMOC. For example, it has been shown in long millennial model simulations that volcanic forcings may strongly modulate the AMOC [51]. In historical climate simulations of the twentieth century (without assimilation of observations), anthropogenic aerosols have also been implicated as a prime driver of AMV [52] using the Met Office HadGEM2-ES climate model. These simulations included a more complete representation of aerosol–cloud physics and simulate a strong impact of changes in cloud brightness over the North Atlantic as a result of time-evolving anthropogenic aerosol emissions from North America and Europe. The same model simulations also show that anthropogenic aerosols may have driven multi-decadal variability in the frequency of Atlantic tropical storms [53].

    The finding from initialized decadal climate predictions (such as those presented in IPCC AR5) that the North Atlantic is a key region benefiting from initialization was obtained with models that have a relatively poor representation of aerosol–cloud interactions. It is possible therefore that a good proportion of the apparent benefit due to initialization has actually come about by correcting the models' erroneously weak response to the aerosol forcing, rather than by correctly initializing natural internal variability. On the other hand, the sub-surface ocean in HadGEM2-ES does not match the warming seen in the North Atlantic [54], implying that either the model response to aerosols is too strong, or perhaps that the model mixes too much heat into the ocean interior. Furthermore, some tropical Atlantic proxy records for the last 500 years do not seem to support a strong internal AMO mode [55]. Hence, the fraction of Atlantic variability that is externally forced versus driven by internal ocean dynamics is still an important research topic. Nevertheless, sustained ocean observations are key to improving our understanding of these competing mechanisms.

    (d) Predictions of ocean dynamics

    The North Atlantic SPG region was observed to warm strongly in the mid-1990s and in doing so transitioned to a positive phase of AMV. The cause of the warming in the Atlantic SPG, and the skill in predicting this warming, have been the subject of a number of studies. They suggest that during this period, the AMOC was initialized with anomalously strong northward ocean heat transport [56]. Furthermore, the initialized predictions were able to propagate this signal into the forecast period, i.e. the SPG rapid warming event was at least partly dynamically predictable [57,58].

    On longer time scales (from the 1960s to the present), a common signal for the AMOC evolution emerges from ocean analyses (climate models driven with observed ocean data), as highlighted by Pohlmann et al. [59] and reproduced as the cyan line in figure 18a. Also plotted in figure 18a, as the green line, is the average AMOC analysis produced by three different versions of the Met Office DePreSys. Together, these analyses suggest a similar evolution for the AMOC: flat or declining in the 1960s, increasing in strength up to the mid-1990s and falling since then. This evolution fits well with the detailed analysis of the 1990s warming event [57,58]. A more observation-based metric for the AMOC evolution can be constructed by calculating the density gradient across the Atlantic basin (shading in figure 18a), here calculated using the boxes in figure 18b. This confirms that the model AMOC is basically reproducing a geostrophic flow responding to the horizontal density (or pressure) gradient. The map of density trends since the 1990s AMOC peak (i.e. from 1995 to 2013) is plotted in figure 18b. It reveals a strong reduction in density in the deep western boundary current and an increase in density on the east side of the basin, consistent with a weakening AMOC. A similar density fingerprint is also seen in multi-model simulations when density is regressed against the AMOC on decadal time scales [39].

    Figure 18.

    Figure 18. (a) An east–west index for density (calculated between 1000 and 3000 m depths, for the boxes overlaid in (b)) using data from the Met Office DePreSys ocean analysis [46]. Also plotted in (a) is the AMOC (green) produced by the DePreSys model when initialized with this analysis, together with a multi-model AMOC produced by Pohlmann et al. [59] (cyan). The inset box in (a) shows data from the RAPID-MOCHA array calculated annually from 2004 to 2012. (b) The trend in the 1000–3000 m density field between 1995 and 2013 from the DePreSys ocean analysis [46]. Adapted from [60]. Crown Copyright. (Online version in colour.)
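    The geostrophic reasoning behind this east–west density index can be made explicit with the thermal-wind relation. In sketch form (with ρ0 a reference density, f the Coriolis parameter and a level of no motion assumed at the lower depth z1):

```latex
% Thermal wind: the vertical shear of the zonally integrated meridional flow
% responds to the east--west density difference across the basin,
\frac{\partial V}{\partial z} = -\frac{g}{\rho_0 f}\left[\rho_{\mathrm{E}}(z) - \rho_{\mathrm{W}}(z)\right],
\qquad V(z) = \int_{x_{\mathrm{W}}}^{x_{\mathrm{E}}} v\,\mathrm{d}x .
% Integrating upwards from an assumed level of no motion at z_1, and then over
% the layer z_1 to z_2 (e.g. 1000--3000 m), gives a transport proxy that
% scales with the east--west density contrast:
V(z) = -\frac{g}{\rho_0 f}\int_{z_1}^{z}\left[\rho_{\mathrm{E}}(z') - \rho_{\mathrm{W}}(z')\right]\mathrm{d}z',
\qquad
T = \int_{z_1}^{z_2} V(z)\,\mathrm{d}z .
```

    A denser eastern boundary relative to the western boundary thus corresponds to a stronger northward upper-layer transport, which is why the density index in figure 18a tracks the model AMOC.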

    (e) The first continuous observation-based estimates of the Atlantic meridional overturning circulation

    Using the above density index as a proxy for the AMOC is potentially a good way of checking the realism of the model-driven AMOC reanalysis. However, we are still relying on the model dynamics to respond to the density gradients in a realistic manner. Continuous direct measurements of the AMOC strength became available in 2004 thanks to the RAPID-MOCHA array [61,62]. This provides us with the first continuous estimate of the strength of the AMOC and of its seasonal cycle. In the first 5 years of operation (2004–2008), the AMOC was fairly constant at 18.7±2.7 Sv, with a seasonal cycle strength estimated to be 6.7 Sv [63]. The amplitude of the seasonal cycle, and short time scale variability more generally, was significantly larger than previously anticipated. Aliasing of this seasonal cycle may explain why previous ship-based snapshots of the AMOC [64] could give such large apparent decadal trends [63]. The AMOC mean and seasonal values provided by the RAPID-MOCHA array have given the climate modelling community a target to aim for when developing the current generation of climate models.
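    The aliasing effect is easy to demonstrate: sampling a purely seasonal, trend-free AMOC with a handful of snapshots taken in different months can manufacture an apparent decadal trend. All numbers below are illustrative, not the actual hydrographic section dates:

```python
import numpy as np

# A stationary AMOC with a seasonal cycle and no trend at all (illustrative
# numbers: 18.7 Sv mean, 6.7 Sv peak-to-peak cycle), monthly for 50 years
months = np.arange(600)
amoc = 18.7 + (6.7 / 2.0) * np.sin(2 * np.pi * months / 12.0)

# Five hypothetical ship-based snapshots, decades apart, in different months
snapshot_idx = np.array([3, 140, 287, 430, 585])
t_years = snapshot_idx / 12.0
snapshots = amoc[snapshot_idx]

# A straight-line fit through the snapshots yields a spurious decadal trend,
# even though the underlying series is trend-free by construction
trend_per_year = np.polyfit(t_years, snapshots, 1)[0]
```

    Here the snapshots alone imply a decline of order 1 Sv per decade from a series with no trend whatsoever, illustrating how sparse sections can mislead.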

    After 2008, the RAPID-MOCHA observations started to become even more exciting as additional interannual variability, beyond the seasonal cycle, was seen. In 2009–2010, there was a 5.7 Sv decrease in the annual mean AMOC (figure 19, black line) with December 2009 actually recording a negative AMOC transport in the monthly mean [66]. The Ekman and Upper Mid-Ocean components of the AMOC show significant reductions. This event was associated with a strong negative phase of the NAO, which clearly explains the reduction in the Ekman component of the AMOC (1.7 Sv) [66]. However, the 2.7 Sv decrease in the Upper Mid-Ocean transport is harder to explain. This reduction in the northward heat transport persisted through to the next year and likely drove the cooling of the sub-tropical Atlantic [67]. This may have contributed to the anomalously cold northern European winter the following year [68].
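    The Ekman contribution to such changes follows directly from the zonal wind stress. A sketch with illustrative numbers (the stress value and basin width are assumptions, not the observed 2009–2010 anomalies):

```python
import math

def ekman_transport_sv(tau_x, basin_width_m, lat_deg, rho0=1025.0):
    """Zonally integrated meridional Ekman transport (in Sv) for a uniform
    zonal wind stress tau_x (N m^-2) across a basin of the given width:
    T_Ek = -tau_x * L / (rho0 * f), with 1 Sv = 1e6 m^3 s^-1."""
    omega = 7.2921e-5                                 # Earth rotation (rad s^-1)
    f = 2.0 * omega * math.sin(math.radians(lat_deg)) # Coriolis parameter
    return -tau_x * basin_width_m / (rho0 * f) / 1e6

# Illustrative: an easterly trade-wind stress of -0.05 N m^-2 across a
# ~6300 km wide basin at 26 N gives a northward Ekman transport of ~4.8 Sv
t_ek = ekman_transport_sv(-0.05, 6.3e6, 26.0)
```

    A weakening of the easterlies (less negative tau_x), as in a strongly negative NAO winter, therefore directly reduces this northward Ekman component.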

    Figure 19.

    Figure 19. The FOAM ocean forecasting model (driven by ocean data and atmospheric forcings, red) is able to reproduce the RAPID-MOCHA observations of the AMOC at 26° N (black). When the model is driven by atmospheric fluxes alone (blue, no ocean data assimilation), it is still able to reproduce much of the time variability of the AMOC including the 2009–2010 event. Adapted with permission from [65]. (Online version in colour.)

    Data assimilating model simulations, such as the Met Office FOAM system, that are driven by the observed ocean data can reproduce the mean strength and depth of the AMOC, including the reduction associated with the 2009 event [65]. This is reproduced in figure 19. As RAPID observations are not currently assimilated by the FOAM system (although studies have considered this [69]), they can be used for independent evaluation of the initialized model state—analogous to the use of surface drifters to assess the initialization of the ocean mesoscale, as discussed in §2.

    More surprising is the fact that when only atmospheric boundary conditions are prescribed (i.e. no ocean data are assimilated), the FOAM system still captures the majority of the 30% reduction in the 2009–2010 event (blue lines in figure 19). This result does not preclude an important role for the ocean in constraining the evolution of the atmosphere during this time. However, it does indicate that constraining surface momentum and buoyancy fluxes is sufficient to explain the seasonal and interannual variability of the AMOC at 26° N and indicates a limited role for initial-condition-dependent mesoscale activity in driving the observed event during 2009–2010 [65,70]. How representative this behaviour is of other large interannual AMOC variability will need to be examined using further observations (along with further modelling studies). It may, however, be consistent with studies that suggested tropical Pacific SSTs as the source of the strong negative NAO anomaly in 2009–2010 [71].

    In addition to the 2009–2010 event, the extension of the RAPID time series appears to show a negative trend in the AMOC (as can be seen in the inset in figure 18a). The strength of the weakening is in approximate agreement with the model-driven AMOC (also plotted in figure 18a), at about 0.5 Sv per year. The strength and significance of the observed trends are somewhat sensitive to the analysis technique used; however, they appear robust [72] even when the exceptionally negative year 2009–2010 is removed. Given the strength of the trend, approximately five times that simulated by models in response to GHGs, it is likely to be associated with multi-decadal variability rather than a forced long-term trend. This is supported by a recent examination of decadal AMOC trends in free-running model simulations [73]. The observed rate of decline was found to lie inside the range of simulated model internal variability. However, compared with RAPID, climate models underestimate interannual AMOC variability, and when this is accounted for, the observed decline is even less significant. Given this, the observed AMOC trend would need to continue for at least another 8–10 years in order to robustly detect any influence of man-made climate change [73]. Thanks to newly secured funding from NERC, the RAPID array will be sustained until at least 2020 and it will be most interesting to see how this negative trend develops.
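    The detection argument can be sketched by comparing an observed trend against the null distribution of same-length trends in a long unforced control series. Here an AR(1) red-noise series stands in for model internal variability; its parameters are assumptions, not fitted to any real control run:

```python
import numpy as np

rng = np.random.default_rng(2)

# Red-noise (AR(1)) stand-in for internal AMOC variability in a control run
n = 2000
x = np.empty(n)
x[0] = 0.0
for t in range(1, n):
    x[t] = 0.7 * x[t - 1] + rng.standard_normal()
control = 18.0 + x   # Sv, about a constant mean: no forced trend by design

# Null distribution of 9-year linear trends from internal variability alone
window = 9
yrs = np.arange(window)
trends = np.array([np.polyfit(yrs, control[i:i + window], 1)[0]
                   for i in range(n - window + 1)])

# Fraction of internally generated trends at least as steep as an observed
# decline of 0.5 Sv per year: if this fraction is not small, the observed
# trend cannot yet be distinguished from internal variability
frac_as_steep = float(np.mean(np.abs(trends) >= 0.5))
```

    If the models under-represent interannual variability relative to RAPID, the true null distribution is wider than theirs, which is why accounting for this makes the observed decline even less significant.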

    (f) Overturning in the Subpolar North Atlantic Program: understanding the sub-polar gyre and mechanisms of Atlantic meridional overturning circulation variability

    While there is no doubt that RAPID-MOCHA broke new ground in 2004 when it started continuously monitoring the 26° N line, it is only one latitude in the Atlantic. There are many unanswered questions surrounding how representative 26° N is of the circulation of the entire North Atlantic. It is difficult to estimate the AMOC without full-depth profiles (including those near to the continental slopes), although attempts have been made to reconstruct the upper limb of the AMOC from Argo and altimeter data [74] at 41° N. A comparison of the 41° N time series with that from the RAPID array revealed that the non-Ekman components of the AMOC seasonal cycle were 180° out of phase between the two sections in the observations, but a similar result was not found in the model used in that study [75]. The extent to which the local gyre dynamics at different latitudes contribute to the AMOC variability is also an open question [76].

    To address these and other important questions, the Overturning in the Subpolar North Atlantic Program (OSNAP) has just been funded as a joint venture with strong UK involvement. The array will connect Canada to Scotland via Greenland and hence measure transports from the Labrador Sea and the Greenland, Iceland and Norwegian seas (figure 20). This separation is of particular interest for ascertaining the relative contributions of these regions to deep water formation, which are known to vary strongly among models. Also of interest will be the direct comparison with the RAPID array, and establishing on what time scale variability becomes coherent between the two arrays. Ultimately, modelling studies will be needed to probe what external forcing factors are driving the observed variability (e.g. [65]) and to form a picture of the whole Atlantic AMOC constrained by observations from mooring arrays.

    Figure 20. A schematic of the AMOC is shown. Overlaid are the positions of three ocean monitoring arrays: the RAPID-MOCHA array at 26° N, the OSNAP array in the North Atlantic SPG and SAMOC at 30° S. Adapted from [77], permission from Macmillan Publishers Ltd, copyright (2013). (Online version in colour.)

    (g) South Atlantic meridional overturning circulation: monitoring for abrupt change?

    A third array is currently under development in the South Atlantic, the South Atlantic MOC array (SAMOC; http://www.aoml.noaa.gov/phod/SAMOC_international/), and should come into operation in 2013–2014. SAMOC is focused on understanding the AMOC in the South Atlantic and, in particular, the transformation of water masses as they are exchanged with the other global oceans. Like the other two arrays, SAMOC will measure to the full depth of the ocean along a line near 34.5° S, using geostrophic-style mooring arrays and current meters on the continental slopes. SAMOC will hopefully provide new insight into the freshwater budget of the Atlantic and answer profound questions, such as whether the Atlantic is really net evaporative.

    A further use of the data obtained from the SAMOC array, particularly if the observations are sustained for several years or decades, may be to inform about the stability characteristics of the AMOC. Theory suggests that, under the same forcing, the AMOC can possess two stable states: one with a vigorous overturning akin to the present day, and one with little or no circulation [78]. The results of several studies using simple and intermediate-complexity numerical models support this behaviour (e.g. [79]). This bistability allows the possibility of abrupt transitions between the steady states. A lightening of surface waters in the high-latitude North Atlantic (through warming, increased precipitation and melting of land ice) associated with increased GHG emissions is likely to inhibit deep convection and weaken the AMOC over the twenty-first century. This is indeed seen in most climate models: the IPCC AR4 models produced an average 25% weakening of the AMOC by 2100 [80], albeit with very large differences between models, and a collapse of the AMOC was considered very unlikely.

    Given an overturning in the ‘on’ state, it has been suggested that a simple indicator may exist that could show whether or not a stable ‘off’ state exists for the same external forcing [81]. This is important because it would tell us whether an anthropogenically forced AMOC collapse could be remedied by a reduction in GHG concentrations, or whether it would be effectively irreversible. The proposed indicator is the sign of the meridional freshwater transport by the overturning circulation itself across the southern boundary of the Atlantic, Fov [81]. The sign of Fov tells us whether the AMOC imports salt into, or exports it out of, the Atlantic basin. If the AMOC imports salt into the Atlantic, then it acts to promote the high-salinity conditions necessary for its existence, and is therefore self-sustaining. If it were to collapse, then its absence would prevent the conditions for its recovery, and the ‘off’ state would be stable. The potential of this type of indicator (or of a similar one based on the freshwater divergence between the northern and southern limits of the Atlantic) has been supported by other theoretical studies and numerical modelling experiments [82,83]. Results using a fully coupled atmosphere–ocean general circulation model suggest that this type of indicator may show some potential [84], but also that in a coupled system other feedbacks exist that complicate the picture.
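    To make the diagnostic concrete, the sketch below illustrates one common way of estimating Fov from a trans-basin hydrographic section: zonally integrate the meridional velocity, remove the section-mean (net) flow, and vertically integrate the residual overturning transport against the zonal-mean salinity. The grid, reference salinity and two-layer test values are illustrative assumptions, not the published methodology of [81].

```python
import numpy as np

def f_ov(v, s, dx, dz, s0=35.0):
    """Overturning freshwater transport across a zonal section (m^3 s^-1).

    v  : meridional velocity (m s^-1), shape (nz, nx)
    s  : salinity (psu), shape (nz, nx)
    dx : zonal cell widths (m), shape (nx,)
    dz : layer thicknesses (m), shape (nz,)
    """
    V = (v * dx).sum(axis=1)                  # zonal integral of v at each depth
    V = V - (V * dz).sum() / dz.sum()         # remove section-mean (net) transport
    s_mean = (s * dx).sum(axis=1) / dx.sum()  # zonal-mean salinity profile
    return -((V * (s_mean - s0) * dz).sum()) / s0

# Illustrative two-layer section: salty northward flow over a fresher
# southward return flow. The overturning then exports freshwater
# (F_ov < 0), the configuration associated with a potentially bistable AMOC.
dz = np.array([500.0, 500.0])
dx = np.array([1e5, 1e5])
v = np.array([[0.1, 0.1], [-0.1, -0.1]])
s = np.array([[36.0, 36.0], [34.0, 34.0]])
fov_value = f_ov(v, s, dx, dz)  # about -5.7e5 m^3 s^-1, i.e. roughly -0.57 Sv
```

    A positive result from the same calculation would instead indicate salt export by the overturning and a monostable Atlantic.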

    Present estimates for the real world, both from observations [82] and from model ocean reanalyses driven by observations [84], suggest that Fov is negative, i.e. that the Atlantic may be bistable. However, most current climate models have a positive Fov and hence a stable Atlantic [85]. This is likely linked, in some models at least, to biases in the surface freshwater flux over the South Atlantic [86]. Using SAMOC array transports (along with salinity data supplemented by Argo), a robust long-term estimate (free from seasonal or interannual variability) of the sign of Fov might be established. This information would be valuable for climate model development, because an accurate representation of the freshwater transports across the southern limit of the Atlantic is likely to be crucial for reliable simulations of AMOC behaviour under climate forcing. Additionally, and although requiring further research, there is a tantalizing suggestion that the transient behaviour of Fov might inform us about the proximity of the system to a collapse of the AMOC [84], prompting the suggestion that data from the SAMOC array could be used as a potential ‘health check’ on the stability of the AMOC.

    5. The role of the ocean in the current global warming ‘hiatus’

    Explaining the observed slowdown of global warming over the last 15 years, the so-called ‘hiatus’, is increasing in priority as public interest grows and the pause continues. For example, chapter 9 of IPCC AR5 [87] provided a box (Box 9.2) dedicated to discussion of the hiatus. After the peak in global temperatures following the particularly strong 1998 El Niño event, the rise in global temperatures appears to have stalled, as shown in the top panel of figure 21. The decadal trends in global temperatures, plotted in the bottom panel of figure 21, fall to zero for the latest decade. This hiatus comes in spite of the continual increase in global atmospheric CO2 levels as a result of further global industrialization. Of course, global surface temperature is not the only metric of climate change. One of the more persuasive is the observational record of sea-level rise [88], shown in the third panel of figure 21. From 1993 onwards, space-based altimetry observations of sea level are available, with the accuracy of such data increasing thanks to new satellites and instruments (e.g. JASON-2). There is no evidence for a hiatus in sea-level rise, which has continued over the last 15 years.
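    The decadal trends shown in the bottom panel of figure 21 are, in essence, least-squares linear fits expressed per decade. A minimal sketch with synthetic, illustrative data (not the HadCRUT4 values themselves):

```python
import numpy as np

def decadal_trend(years, values):
    """Least-squares linear trend of a time series, expressed per decade."""
    slope_per_year = np.polyfit(years, values, 1)[0]
    return 10.0 * slope_per_year

years = np.arange(1998, 2013)
warming = 0.02 * (years - 1998)    # steady warming of 0.2 degC per decade
hiatus = np.full(years.size, 0.5)  # flat temperatures, as in the pause

trend_warming = decadal_trend(years, warming)  # recovers 0.2 degC per decade
trend_hiatus = decadal_trend(years, hiatus)    # zero trend
```

    Applied in a sliding window over the observed record, this is how a warming series and a stalled series separate cleanly in trend space even when the underlying temperatures differ only subtly.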

    Figure 21. Observations relevant to the hiatus period. The top panel shows the temperatures over the entire globe (black, HadCRUT4), over land only (red, CRUTEM4) and over the ocean (blue, HadSST3). The next panel shows the upper 800 m ocean heat content anomaly from [46]. The third panel shows the sea-level estimates from [88]. The fourth panel shows the decadal trends in the quantities in the top two panels. The dark grey vertical bar shows the onset of the hiatus period and the light grey bar shows its continuation. Vertical lines show the times of large volcanic eruptions. Credit Doug Smith, from Met Office Hadley Centre report ‘Paper 2: recent pause in global warming’ (http://www.metoffice.gov.uk/media/pdf/q/0/Paper2_recent_pause_in_global_warming.PDF), Crown Copyright. (Online version in colour.)

    To explain the hiatus in global surface temperatures requires either: (i) a change in the total energy received by the Earth or (ii) an internal rearrangement of that energy. Examining the first hypothesis, the energy received at the surface of the Earth would need to be reduced by an estimated 0.6 W m−2 [89,90]. This is in order both to completely offset the calculated 0.35 W m−2 per decade [91] rate of increase in energy of the Earth system implied by the observed increase in GHG concentrations and to offset the slow oceanic adjustment to the previous twentieth century global warming. This forcing could in theory be provided by a reduction in GHGs (which has not been observed), in the energy received from the Sun (estimated to be less than 0.2 W m−2) or in stratospheric water vapour (about 0.1 W m−2 [92]). Alternatively, the forcing may be provided by an increase in volcanic stratospheric aerosol (only small volcanic eruptions occurred [93]) or anthropogenic aerosol emissions (not thought to have increased significantly during the last 15 years [94]). These have all been investigated (see the 2013 Met Office paper on the pause: http://www.metoffice.gov.uk/media/pdf/q/0/Paper2_recent_pause_in_global_warming.PDF for an informal review of our current understanding), but even if all these factors worked in concert, and to their full potential, they would only account for approximately 0.3 W m−2. This is half of the 0.6 W m−2 thought to be needed to abruptly stop current global warming, and so suggests that a rearrangement of heat in the Earth system is also needed to explain the hiatus period.
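    The bookkeeping in this paragraph can be laid out explicitly. The numbers below are simply the upper-bound estimates quoted above, with the volcanic and aerosol contributions taken as negligible over the period for simplicity; they are illustrative, not new results.

```python
# Forcing reduction needed to fully offset warming (W m^-2), as quoted above
required = 0.6

# Upper-bound reductions from the candidate external factors (W m^-2)
candidates = {
    "solar": 0.2,                       # estimated to be less than 0.2
    "stratospheric_water_vapour": 0.1,  # about 0.1
    "volcanic_aerosol": 0.0,            # only small eruptions: taken as ~0 here
    "anthropogenic_aerosol": 0.0,       # no significant increase: taken as ~0 here
}

total = sum(candidates.values())  # ~0.3 W m^-2: roughly half of what is required
unexplained = required - total    # ~0.3 W m^-2 left to internal heat rearrangement
```

    Even with every candidate at full strength, roughly half the required forcing reduction is unaccounted for, which is the paragraph's motivation for looking to ocean heat uptake.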

    On the interannual to decadal time scale, the ocean is the primary component of the Earth system able to store and redistribute heat. Hence, an increase in sub-surface ocean heat uptake over the last 15 years would be expected. The second panel in figure 21 shows the time series of global upper 800 m ocean heat content anomaly. There is a sharp increase in heat content over a few years, starting in the late 1990s (dark grey region in figure 21). However, since about 2004, the top 800 m ocean heat content has been relatively flat—mirroring the surface temperatures (light shading in figure 21). It is worth noting, however, that assessments of ocean heat uptake over the last 15 years are still subject to considerable uncertainty and are likely to be sensitive to the techniques used to infill the data (where no observations exist).
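    For reference, an upper-ocean heat content anomaly like the one in the second panel of figure 21 is, per unit area, just the density-weighted vertical integral of the temperature anomaly. A minimal sketch with nominal seawater constants (illustrative values, not those used in [46]):

```python
import numpy as np

RHO = 1025.0  # nominal seawater density (kg m^-3)
CP = 3985.0   # nominal specific heat capacity of seawater (J kg^-1 K^-1)

def ohc_anomaly(t_anom, dz):
    """Column heat content anomaly (J m^-2) from a temperature anomaly profile.

    t_anom : temperature anomaly per layer (K)
    dz     : layer thicknesses (m)
    """
    return RHO * CP * np.sum(t_anom * dz)

# A uniform 0.1 K warming of the top 800 m, split into 8 layers of 100 m
dz = np.full(8, 100.0)
t_anom = np.full(8, 0.1)
ohc = ohc_anomaly(t_anom, dz)  # ~3.3e8 J m^-2
```

    The same depth integral, truncated at 800 m, is why a flat upper-ocean heat content series cannot by itself rule out continued warming at greater depths.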

    If the top 800 m of the ocean has not continued to warm, then this suggests heat may have been sequestered into the deeper ocean. There is some evidence for this from ocean reanalyses (models driven by observed ocean data), in which an estimated 30% of the warming has occurred beneath 700 m [90,95]. However, the mechanisms governing the relatively sudden uptake of heat by the deep ocean remain largely unexplained. Below the 2000 m sampling depth of the Argo array, our knowledge of the state of the ocean is limited mainly to the data returned by specific research cruises. Therefore, below 2000 m, ocean reanalyses rely largely on the model dynamics to propagate the sparse observational data throughout the global oceans.

    We can use free-running climate models to estimate the expected frequency of decades with little or no surface warming due to internal variability. These generally indicate that internal variability is sufficiently strong to cause a decade, or possibly longer, of no surface warming [96,97]. It is possible to go further than this and use models to see how representative trends in surface temperature are of trends in the total energy in the climate system. The advantage of using such model simulations is that they can be run for many centuries, and even many millennia, allowing robust statistics to be derived. Using simulations from three Met Office Hadley Centre climate models, decadal trends in SST were found to be a poor constraint on the total planetary energy imbalance [98,99]. Put another way, a flat trend in SST can easily mask the signature of global warming for at least a decade. By contrast, ocean heat content is a much better constraint on the true planetary energy imbalance; to achieve greater accuracy, it is necessary either to observe deeper in the ocean or to observe for a longer period of time. Even the relatively deep layers of the ocean (those between 2000 and 4000 m) are found to contribute significantly to knowledge of the total energy imbalance [98], motivating the need for deeper ocean observations. This indicates that the models include mechanisms for exchanging heat from the surface to these deep layers. Layers below 4000 m were not found to contribute significantly in these models. However, recent observational estimates from sparse ship observations suggest that even the heat content change below 4000 m could be significant [100]. This probably points to shortcomings in the model representation of bottom waters, which is an ongoing problem in global ocean models [101], linked to the difficulties of representing deep water formation in the Southern Ocean.
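    The link between ocean heat content trends and the planetary energy imbalance used here is a simple unit conversion: a global heat uptake rate divided by the elapsed time and the Earth's surface area gives W m−2. A sketch with standard approximate constants:

```python
SECONDS_PER_YEAR = 3.156e7    # approximate
EARTH_SURFACE_AREA = 5.10e14  # m^2, approximate

def imbalance_w_per_m2(heat_uptake_j_per_yr):
    """Planetary energy imbalance (W m^-2) implied by a global heat uptake rate."""
    return heat_uptake_j_per_yr / (SECONDS_PER_YEAR * EARTH_SURFACE_AREA)

# Round trip: a 0.6 W m^-2 imbalance corresponds to roughly 1e22 J of heat
# accumulating each year, the vast majority of which must reside in the ocean
uptake = 0.6 * SECONDS_PER_YEAR * EARTH_SURFACE_AREA  # ~9.7e21 J per year
imbalance = imbalance_w_per_m2(uptake)
```

    The same conversion explains why even small unsampled heat content changes below 4000 m can matter: integrated over the global ocean, they translate into non-negligible fractions of a W m−2.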

    In light of this evidence, there appears to be a compelling need for observations of the deep ocean below the current 2000 m operating depth of the Argo floats (so-called ‘Deep-Argo’). Such observations are likely to be particularly beneficial in those regions where waters are exchanged between the deep ocean and the near surface, such as the North Atlantic SPG and the Southern Ocean. Given such observations, we can hope to better monitor the flow and storage of heat in the Earth system and so better understand surface climate change. Deep-Argo has been much discussed; however, the manufacture of floats capable of withstanding the pressures at such depths is an engineering challenge. Some prototype floats are being deployed by participants in the UK NERC-funded DEEP-C project (http://www.met.reading.ac.uk/∼sgs02rpa/research/DEEP-C.html). The DEEP-C project aims, through further work using models and novel ways of analysing ocean observations, to make progress in explaining the mechanisms that have driven the hiatus period and to assess where the excess energy is accumulating in the climate system.

    The pause in surface warming is a focus of much current research and, since I gave my Challenger Society Prospectus 2013 presentation (in September 2013), a couple of papers have been published that highlight the role of cooling in the tropical Pacific ocean as a driver of the hiatus in global warming. Model simulations with prescribed observed surface temperature variability over part of the east tropical Pacific, an area covering only 8% of the Earth's surface, were found to reproduce much of the 1970–2012 observed global surface temperature trends (including the hiatus) and many recent climate impacts [102]. Further work has shown that an anomalous intensification of the Pacific trade winds appears to be responsible for maintaining the cool east tropical Pacific temperatures and increasing sub-surface ocean heat uptake in this region [103]. Interestingly, the observed trade wind intensification over the past two decades is both unprecedented in the observational record and not simulated by the current generation of climate models. This raises the question of what is driving this anomalous tropical Pacific atmospheric circulation. Is it an unusual phase of internal variability or, perhaps more likely, a response to external forcings? Either way, further work is needed to understand why climate model simulations do not appear able to reproduce the magnitude of the recent observations. Clearly, there is a need for sustained tropical Pacific observations of surface and sub-surface variables to improve our understanding of the physical mechanisms.

    6. Summary

    I have briefly outlined some of the ways in which ocean observations are used in climate modelling and prediction. It is interesting that most types of ocean observing platform are useful across all prediction time scales, even though the observations may be used in quite different ways. However, different time scales can potentially exert opposing pressures on the future priorities of the ocean observing system, and this needs to be carefully managed. For example, the operational ocean forecasting community would generally prefer higher temporal sampling near the ocean surface, while climate monitoring and prediction activities place more emphasis on the long-term accuracy of the measurements and on the deeper ocean layers.

    Some dedicated ocean observing campaigns, e.g. the OSMOSIS project, are key to improving the fidelity of ocean models. Such intense campaigns are by their very nature of relatively short duration and hence would not be classed as ‘sustained observations’. However, it is important that the effort and resources for such campaigns are sustained into the future so that we can continue to improve ocean models and ocean–atmosphere interactions, with the aim of making more reliable climate predictions across time scales.

    The methods and techniques used to initialize near-term climate predictions are likely to continue to improve. However, the changing nature of the ocean observing networks is also going to prove a challenge. We need to ensure that in the decades that follow, we maintain a consistent backbone of nearly homogeneous global temperature and salinity sub-surface ocean observations. The Argo array is already well designed to do this task, especially if it is logically extended to sample the full ocean depth. In this way, future generations of climate models will have a relatively rich dataset with which to analyse climate variability and predictability.

    Future changes in the ocean observing system need to be strongly scientifically motivated and ideally well linked to wider benefits to society. Assessing the value of different ocean observing platforms is a highly subjective task and depends upon the variables and time scales of interest. However, OSEs can at least be used to objectively assess the impact of different observing systems on the skill of model predictions for a particular time scale and/or region. The results of the model OSE exercise on ocean forecasting time scales [4] highlight how complementary the present observing systems are. More OSEs on seasonal and decadal time scales would be useful to similarly assess the relative value of different ocean observing platforms. This could potentially help the community to find the necessary language to express the value of ocean observations to society. Transitioning from the research funding used to develop new ocean observing capability to a more sustained and secure funding source for the continued operational phase is a particular challenge. This is certainly true in the UK, with two recent examples being the UK contribution to the Argo array and the extension of funding for the RAPID-MOCHA array. Funding issues are not restricted to the UK, however, as the example of the recent degradation of the TAO/TRITON array highlights.

    Thanks to the pioneering work of the RAPID-MOCHA array, we are obtaining the first long-term time series of observations of ocean transports, and these are being used to inform model development. The OSNAP and SAMOC arrays are being deployed with the lessons learned from the RAPID array, for example the need for a high data sampling rate to avoid aliasing high-frequency variability. Both of these arrays will test the latitudinal coherence of the AMOC, which will again be used to test model fidelity. While OSNAP will hopefully help to better determine the location and variability of deep water formation, SAMOC will explore the exchange of Atlantic waters with the other global basins and perhaps yield information about the stability of the AMOC. However, I suspect that the unexpected results these new arrays are likely to bring will do just as much to advance our understanding of the real ocean. Together, these will inform new developments in climate models that will ultimately lead to more accurate and reliable climate predictions of societal value.

    Thanks to the ever-increasing power of modern computers, ocean models are becoming capable of representing a wider range of the physical processes observed in the real world. As such, they are becoming better able to fill in the gaps between ocean observation locations in model-driven ocean reanalyses. We have also seen that models can be used to help inform ocean observing strategies, such as the potential benefit of a Deep-Argo observing array to help constrain the total planetary energy budget.

    These points demonstrate the symbiosis that currently exists between ocean models and observations. In the discussion session following my presentation, someone asked whether, if pushed to choose one, I would invest more money in ocean observations or ocean modelling. I hope that in this paper I have already conveyed my opinion that this is a false choice. We need to sustain progress in both our ocean observational and modelling capabilities and I would argue that this is the most efficient way of making progress in both fields.

    Acknowledgements

    I thank the Challenger Society and the UK Scientific Committee on Oceanic Research for giving me the opportunity to contribute to their 2013 Prospectus. I thank Harry Bryden (guest editor), Phil Woodworth and two anonymous reviewers for providing valuable comments on the manuscript. I also thank the following for providing advice, comments, plots, proof reading or encouragement: Dan Lea, Chris Roberts, Doug Smith, Lesley Allison, Leon Hermanson, Niall Robinson, Jeff Knight, Jennie Waters, Pat Hyder, Matt Martin, Matt Palmer, Tim Johns, Sarah Ineson and Adam Scaife. Any remaining errors, or holes in understanding, are solely my own fault.

    Funding statement

    The author was supported by the Joint DECC/Defra Met Office Hadley Centre Climate Programme (GA01101) and the EU FP7 SPECS project.

    Footnotes

    One contribution of 8 to a Theme Issue ‘A prospectus for UK marine sustained observations’.

    References