COSMO goes hybrid: Interview with Oliver Fuhrer of MeteoSwiss
“This decision is a huge success for the HP2C projects”
The COSMO model is used by the German Meteorological Service, MeteoSwiss and other institutions as the basis for their daily weather forecasts. Over the past three years, researchers from the Center for Climate Systems Modeling and MeteoSwiss have worked within the High Performance and High Productivity Computing (HP2C) initiative to revise and refine the model’s code and the algorithms used in it. Their goal was to make the software more efficient for regional weather and climate simulations and to adapt it to new computer architectures based on graphics processors (GPUs). The new software was tested successfully on a GPU-equipped computer and on the newly extended “Piz Daint”, enabling the simulations to be performed more efficiently and with lower energy consumption (see article of 11 September 2013). Convinced by these encouraging results, the COSMO Consortium decided at its annual meeting in early September to support the newly developed software. Oliver Fuhrer, a senior scientist at MeteoSwiss who was part of the team that developed the new climate and weather code, tells us more.
by Simone Ulmer
The Consortium for Small-scale Modeling (COSMO) comprises seven national weather services and has the goal of developing, improving and maintaining the weather and climate model COSMO. COSMO is the model used operationally for everyday weather forecasting at MeteoSwiss and for climate research by several groups at the Institute for Atmospheric and Climate Research (IAC ETH). The Steering Committee of the COSMO Consortium has decided to integrate all the developments made within the two projects of the Swiss platform for High Performance and High Productivity Computing (HP2C) into the official version. Specifically, this means that a version that can be run on graphics processing units will be distributed to all users of the COSMO model.
What is special about graphics processing units?
Graphics processing units (GPUs) are specialised integrated circuits designed to efficiently generate computer graphics. Their highly parallel design makes them more efficient than general-purpose CPUs for certain algorithms, such as the algorithms typically used in weather and climate models for solving the equations governing the evolution of atmospheric motion.
What consequences does the Consortium’s decision have for your work and the HP2C projects?
This decision is a huge success for the HP2C projects and the whole team which has formed over the past three years. At the same time, it also means there is a lot of work ahead of us! The official model version has evolved considerably since the start of the HP2C projects and we now have to upgrade our developments to the most recent model version. Also, the procedure for introducing code into the official version is very strict and will require some code refactoring and a lot of testing. This work will be embedded in the so-called POMPA priority project of the COSMO Consortium.
Why is the Consortium adopting another new code?
The two HP2C projects have demonstrated three important points. Firstly, it is feasible to target GPU-based hardware while retaining a single source code for almost all of the COSMO code. Secondly, using GPU hardware is very attractive both for accelerating simulation time and for reducing the electric power required by the computer running the simulation. Thirdly, it is possible for domain scientists to develop and work with this new version of the COSMO model. These results convinced the Consortium of the importance and the potential of the developments.
Will the new code replace the old one?
The porting to GPUs has been done using two fundamentally different approaches. In the most performance-critical part (the dynamical core), we have completely re-written the existing Fortran code in C++ using a newly developed library. As a first step, this rewrite will be introduced as a new option, so that key developers can learn how to develop in the new framework. In the rest of the code, we have employed a technique based on compiler directives to port the code to GPUs. This requires the insertion of additional statements into the code which control the execution on the GPU. These directives are simple additions, and the traditional Fortran code can be retained.
What was the motivation behind all this work? After all, we already have a range of different codes worldwide that calculate the weather and climate for us.
Reliable weather predictions and climate projections are more important than ever for governmental bodies, the economic sector and the general public. The associated simulations rely critically on high-performance computing resources and on being able to leverage these resources efficiently. At the same time, power constraints are leading to fundamental architectural changes in current and future high-performance computing systems. Accelerators such as GPUs are becoming commonplace. For example, in November the new Piz Daint supercomputer at CSCS will come online and have the same number of CPUs as GPUs. Applications that can also be run on GPUs will have unprecedented compute power available.
What will happen next with the new developments?
The next steps are very clear. On the one hand, we have to update and consolidate the developments in order to integrate them into the official version of the COSMO model. On the other hand, we want to continue to develop and generalise the library which has emerged from the HP2C projects. A version of this library which is less COSMO-specific can potentially also be used by several other applications, such as other climate models or even codes used for seismic wave propagation in the Earth sciences. The latter developments have been submitted in the form of a project proposal to the Platform for Advanced Scientific Computing (PASC).
Will the next MeteoSwiss computer be a supercomputer with graphics processors?
MeteoSwiss is currently developing the next generation of forecasting models, which will significantly increase the resolution and introduce ensemble forecasting, where many slightly perturbed forecasts are used to assess the uncertainty associated with the forecast. In terms of the total size of the computational problem, this is an increase of one order of magnitude. To make this a reality within the electrical power limits, we will have to rely both on algorithmic improvements in the code and on improvements on the hardware side. For the latter, GPUs are certainly a very attractive alternative to CPUs.
The two COSMO projects in High Performance and High Productivity Computing (HP2C)
How does climate change affect the Alps?
As our knowledge grows, climate models are becoming increasingly complex. At the same time, high-resolution models are important locally to improve our understanding of the effects of climate change on the Alpine regions. The aim of the project was to make the regional weather and climate model COSMO-CCLM, used by ETH Zurich and MeteoSwiss, more efficient and to adapt it to new computer architectures by revising or replacing the existing codes and algorithms.
COSMO-CCLM – Regional Climate and Weather Modelling on the Next Generation’s High-Performance Computers: Towards Cloud-Resolving Simulations; Isabelle Bey, Executive Director of C2SM (Center for Climate Systems Modeling) at ETH Zurich.
More accurate weather forecasts
The OPCODE project was closely related to the COSMO-CCLM project and was primarily geared toward leveraging the results of the COSMO-CCLM project and applying them to operational weather forecasting. As a result of this project, the full operational forecasting suite of MeteoSwiss has been ported to a demonstration system based on graphics processing units (GPUs).
OPCODE – Operational COSMO Demonstrator; Oliver Fuhrer, MeteoSwiss
About C2SM
The Center for Climate Systems Modeling is a research centre based at ETH Zurich. It is a joint initiative between ETH Zurich, MeteoSwiss, Empa, and Agroscope Reckenholz-Tänikon (ART), with the main objective of improving our understanding of the Earth’s climate system and our capability to predict weather and climate. C2SM was founded in November 2008 and has been operational since March 2009.