Exascale computing refers to computing systems capable of calculating at least "10^18 IEEE 754 Double Precision (64-bit) operations (multiplications and/or additions) per second (exaFLOPS)"; it is a measure of supercomputer performance.
Exascale computing is a significant achievement in computer engineering: primarily, it allows improved scientific applications and better prediction accuracy in domains such as weather forecasting, climate modeling and personalised medicine. Exascale also reaches the estimated processing power of the human brain at the neural level, a target of the Human Brain Project. There has been a race to be the first country to build an exascale computer, typically ranked in the TOP500 list.
Floating point operations per second (FLOPS) are one measure of computer performance. FLOPS can be recorded in different measures of precision; however, the standard measure (used by the TOP500 supercomputer list) counts 64-bit (double-precision floating-point format) operations per second using the High Performance LINPACK (HPLinpack) benchmark.
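The exascale threshold described above is simple arithmetic: a sustained rate of 10^18 double-precision operations per second on the HPLinpack benchmark. A minimal sketch, using Fugaku's 0.442 exaFLOPS HPL result mentioned later in this article as an example figure:

```python
# Illustrative arithmetic only: the exascale threshold is a sustained rate of
# 10^18 double-precision (64-bit) operations per second on HPLinpack.

EXAFLOPS = 10**18  # 1 exaFLOPS = 10^18 FLOPS = 1000 petaFLOPS

def is_exascale(rmax_flops: float) -> bool:
    """True if a machine's HPL Rmax reaches the exascale threshold."""
    return rmax_flops >= EXAFLOPS

# Fugaku's 0.442 exaFLOPS HPL result falls short of the threshold;
# a hypothetical 1.1 exaFLOPS result clears it.
print(is_exascale(0.442e18))  # False
print(is_exascale(1.1e18))    # True
```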
Whilst a distributed computing system had broken the 1 exaFLOPS barrier before Frontier, the metric typically refers to single computing systems. Supercomputers had also previously broken the 1 exaFLOPS barrier using alternative precision measures; again, these do not meet the criteria for exascale computing under the standard metric. It has been recognised that HPLinpack may not be a good general measure of supercomputer utility in real-world applications; however, it is the common standard for performance measurement.
It has been recognised that enabling applications to fully exploit the capabilities of exascale computing systems is not straightforward. Developing data-intensive applications for exascale platforms requires the availability of new and effective programming paradigms and runtime systems. The Folding@home distributed computing project, the first to break the 1 exaFLOPS barrier, relied on a network of servers sending pieces of work to hundreds of thousands of clients using a client–server network architecture.
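The client–server work-distribution pattern described above can be sketched in miniature. This is not Folding@home's actual protocol; it only illustrates the idea of a server splitting a large computation into independent work units that many clients process and return:

```python
# A minimal sketch of client–server work distribution (NOT Folding@home's
# real protocol): the server splits a job into independent work units;
# clients each compute one unit; the server aggregates the results.

from queue import Queue

def make_work_units(data, chunk_size):
    """Server side: split a large job into independent work units."""
    q =Ueue = Queue()
    for i in range(0, len(data), chunk_size):
        q.put(data[i:i + chunk_size])
    return q

def client_process(unit):
    """Client side: compute a result for one work unit (here, a toy sum)."""
    return sum(unit)

# The server hands out units and collects results as "clients" finish them.
work = make_work_units(list(range(100)), chunk_size=10)
results = []
while not work.empty():
    results.append(client_process(work.get()))

print(sum(results))  # equals the sum computed in one piece: 4950
```

In the real system the queue is replaced by network servers, the clients are volunteer machines, and units are redistributed when a client fails to return a result.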
The first petascale (10^15 FLOPS) computer entered operation in 2008. At a supercomputing conference in 2009, Computerworld projected exascale implementation by 2018. In June 2014, the stagnation of the TOP500 supercomputer list led observers to question the possibility of exascale systems by 2020.
Although exascale computing was not achieved by 2018, in the same year the Summit OLCF-4 supercomputer performed 1.8×10^18 calculations per second using an alternative metric whilst analysing genomic information. The team performing the work won the Gordon Bell Prize at the 2018 ACM/IEEE Supercomputing Conference.
In June 2020 the Japanese supercomputer Fugaku achieved 1.42 exaFLOPS using the alternative HPL-AI benchmark.
In 2008, two organisations within the US Department of Energy, the Office of Science and the National Nuclear Security Administration, provided funding to the Institute for Advanced Architectures for the development of an exascale supercomputer; Sandia National Laboratories and Oak Ridge National Laboratory were also to collaborate on exascale designs. The technology was expected to be applied in various computation-intensive research areas, including basic research, engineering, earth science, biology, materials science, energy issues, and national security.
By 2012, the United States had allotted $126 million for exascale computing development.
In February 2013, the Intelligence Advanced Research Projects Activity started the Cryogenic Computer Complexity (C3) program, which envisions a new generation of superconducting supercomputers that operate at exascale speeds based on superconducting logic. In December 2014 it announced a multi-year contract with IBM, Raytheon BBN Technologies and Northrop Grumman to develop the technologies for the C3 program.
On 29 July 2015, Barack Obama signed an executive order creating a National Strategic Computing Initiative calling for the accelerated development of an exascale system and funding research into post-semiconductor computing. The Exascale Computing Project (ECP) aimed to build an exascale computer by 2021.
On 18 March 2019, the United States Department of Energy and Intel announced that the first exaFLOPS supercomputer would be operational at Argonne National Laboratory by late 2022. The computer, named Aurora, is to be delivered to Argonne by Intel and Cray (now Hewlett Packard Enterprise), and is expected to use Intel Xe GPGPUs alongside a future Xeon Scalable CPU, at a cost of US$600 million.
On 7 May 2019, the U.S. Department of Energy announced a contract with Cray (now Hewlett Packard Enterprise) to build the Frontier supercomputer at Oak Ridge National Laboratory. Frontier is anticipated to be fully operational in 2022 and, with a performance of greater than 1.5 exaFLOPS, should then be the world's most powerful computer.
On 4 March 2020, the U.S. Department of Energy announced a contract with Hewlett Packard Enterprise and AMD to build the El Capitan supercomputer at a cost of US$600 million, to be installed at the Lawrence Livermore National Laboratory (LLNL). It is expected to be used primarily (but not exclusively) for nuclear weapons modeling. El Capitan was first announced in August 2019, when the DOE and LLNL revealed the purchase of a Shasta supercomputer from Cray. El Capitan will be operational in early 2023 and have a performance of 2 exaFLOPS. It will use AMD CPUs and GPUs, with 4 Radeon Instinct GPUs per EPYC Zen 4 CPU, to speed up artificial intelligence tasks. El Capitan should consume around 40 MW of electric power.
As of November 2021, the United States has three of the five fastest supercomputers in the world.
In Japan, in 2013, the RIKEN Advanced Institute for Computational Science began planning an exascale system for 2020, intended to consume less than 30 megawatts. In 2014, Fujitsu was awarded a contract by RIKEN to develop a next-generation supercomputer to succeed the K computer. The successor, called Fugaku, aimed to have a performance of at least 1 exaFLOPS and to be fully operational in 2021. In 2015, Fujitsu announced at the International Supercomputing Conference that this supercomputer would use processors implementing the ARMv8 architecture with extensions it was co-designing with ARM Limited. Fugaku was partially put into operation in June 2020 and achieved 1.42 exaFLOPS (using fp16 arithmetic with fp64-equivalent accuracy) on the HPL-AI benchmark, making it the first supercomputer to achieve 1 exaFLOPS on any benchmark. Named after Mount Fuji, Japan's tallest peak, Fugaku retained the No. 1 ranking on the TOP500 supercomputer list announced on 17 November 2020, reaching a calculation speed of 442 quadrillion calculations per second, or 0.442 exaFLOPS.
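The mixed-precision idea behind benchmarks like HPL-AI is to do the expensive factorization in low precision and then recover double-precision accuracy through iterative refinement. A minimal sketch under simplifying assumptions: real HPL-AI implementations use fp16 tensor-core LU factorization with GMRES-based refinement, whereas NumPy has no fp16 solver, so float32 stands in for the low-precision stage here.

```python
# Sketch of mixed-precision solve with iterative refinement: a cheap
# low-precision factorization (float32 standing in for fp16) is steered to
# full double-precision accuracy by residuals computed in fp64.

import numpy as np

rng = np.random.default_rng(0)
n = 200
A = rng.standard_normal((n, n)) + n * np.eye(n)  # well-conditioned system
b = rng.standard_normal(n)

# "Fast" low-precision solve (stand-in for an fp16 LU factorization).
A32 = A.astype(np.float32)
x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)

# Iterative refinement: fp64 residuals correct the low-precision answer.
for _ in range(5):
    r = b - A @ x                                          # residual in fp64
    dx = np.linalg.solve(A32, r.astype(np.float32))        # cheap correction
    x += dx.astype(np.float64)

print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))  # tiny relative residual
```

This is why a machine's HPL-AI score can far exceed its double-precision HPL score: most of the arithmetic runs at the fast low precision, while the refinement loop restores fp64-equivalent accuracy.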
As of June 2022, China had two of the top ten fastest supercomputers in the world. According to the national plan for the next generation of high-performance computers and the head of the school of computing at the National University of Defense Technology (NUDT), China was to develop an exascale computer during the 13th Five-Year Plan period (2016–2020), which would enter service in the latter half of 2020. The government of Tianjin Binhai New Area, NUDT and the National Supercomputing Center in Tianjin are working on the project. After Tianhe-1 and Tianhe-2, the exascale successor is planned to be named Tianhe-3. As of 2023, China is reported to have two operational exascale computers, Tianhe-3 and Sunway OceanLight, with a third being built; neither appears on the TOP500 list.
In 2011, several projects aiming to develop technologies and software for exascale computing were started in the European Union, including the CRESTA project (Collaborative Research into Exascale Systemware, Tools and Applications), the DEEP project (Dynamical ExaScale Entry Platform), and the Mont-Blanc project. A major European project based on the exascale transition is the MaX (Materials at the Exascale) project. The Energy oriented Centre of Excellence (EoCoE) exploits exascale technologies to support carbon-free energy research and applications.
In 2015, the Scalable, Energy-Efficient, Resilient and Transparent Software Adaptation (SERT) project, a major research collaboration between the University of Manchester and the STFC Daresbury Laboratory in Cheshire, was awarded approximately £1 million from the United Kingdom's Engineering and Physical Sciences Research Council (EPSRC). The SERT project, due to start in March 2015, was funded by EPSRC under the Software for the Future II programme, partnering with the Numerical Analysis Group (NAG), Cluster Vision and the Science and Technology Facilities Council (STFC).
On 28 September 2018, the European High-Performance Computing Joint Undertaking (EuroHPC JU) was formally established by the EU. The EuroHPC JU aims to build an exascale supercomputer by 2022/2023. The EuroHPC JU will be jointly funded by its public members with a budget of around €1 billion. The EU's financial contribution is €486 million.
In March 2023 the government of the United Kingdom announced it would invest £900 million in the development of an exascale computer.
In June 2017, Taiwan's National Center for High-Performance Computing initiated an effort towards designing and building the first Taiwanese exascale supercomputer, funding construction of a new intermediary supercomputer based on a full technology transfer from Japan's Fujitsu, which was at the time building Japan's fastest AI supercomputer. Numerous other independent efforts in Taiwan focus on the rapid development of exascale supercomputing technology; Foxconn, for example, recently designed and built the largest and fastest supercomputer in Taiwan, intended as a stepping stone in research and development towards the design and construction of an exascale supercomputer.
In 2012, the Indian Government proposed to commit US$2.5 billion to supercomputing research during the 12th five-year plan period (2012–2017). The project was to be handled by the Indian Institute of Science (IISc), Bangalore. It was later revealed that India plans to develop a supercomputer with processing power in the exaFLOPS range, to be developed by C-DAC within five years of approval. These supercomputers will use microprocessors developed indigenously by C-DAC in India. In a 2023 presentation, C-DAC said it plans an indigenously developed exascale supercomputer named Param Shankh, powered by an indigenous 96-core, ARM architecture-based processor nicknamed AUM (ॐ).