Do you ever wonder what a supercomputer is? Is it the number of processors, or the amount of RAM? Does a supercomputer have to occupy a certain amount of space? Read this article about the world's ten fastest supercomputers.
The Control Data Corporation (CDC) 6600 is widely considered the first supercomputer. Released in 1964, it had a single CPU and was about the size of four filing cabinets.
It cost around $8 million, ran at up to 40 MHz, and squeezed out a peak output of 3 million flops (3 megaflops).
Fujitsu and Riken designed Fugaku, the world's top-ranked supercomputer. It delivers high performance efficiently across a wide variety of software applications and will begin full service in fiscal year 2021.
Fugaku, an alternate name for Mount Fuji, is built on the Fujitsu A64FX microprocessor. It can perform more than 442 quadrillion computations per second, making it about three times faster than the U.S.-developed Summit supercomputer.
Fujitsu Ltd built the Japanese supercomputer at Riken's Kobe facility. It forms a critical foundation for powerful simulations and will support scientific research as well as industrial and military technological development.
For the first time in history, the same supercomputer took the No. 1 spot on the Top500, HPCG, and Graph500 lists at once. The results were announced on June 22 at ISC High Performance 2020 Digital, an international high-performance computing conference.
It achieved a LINPACK score of 415.53 petaflops on the Top500 using 152,064 of its total 158,976 nodes, far ahead of its nearest rival, the U.S. Summit system, which performed at 148.6 petaflops.
On HPCG, it scored 13,400 teraflops using 138,240 nodes, and on HPL-AI it obtained a score of 1.421 exaflops using 126,720 nodes, the first time a machine has reached an exascale result on any list.
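The three benchmark scores above come in three different units (petaflops, teraflops, exaflops). A quick sketch, using only the figures quoted above, converts them to a common unit for comparison:

```python
# Convert Fugaku's quoted benchmark scores to a common unit (petaflops).
TERA, PETA, EXA = 1e12, 1e15, 1e18

scores_flops = {
    "Top500 (LINPACK)": 415.53 * PETA,  # 415.53 petaflops
    "HPCG": 13_400 * TERA,              # 13,400 teraflops
    "HPL-AI": 1.421 * EXA,              # 1.421 exaflops
}

for name, flops in scores_flops.items():
    print(f"{name}: {flops / PETA:,.2f} petaflops")
```

Seen this way, it is the HPL-AI run (1,421 petaflops) that crosses the exascale threshold of 1,000 petaflops.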
Fugaku was a part of Japan’s initiative to build the next-gen flagship supercomputer.
The machine will operate at the RIKEN Center for Computational Science (R-CCS) in Kobe, where it will support a wide range of applications addressing high-priority social and scientific issues.
The Sierra supercomputer serves the NNSA's three nuclear security laboratories: Lawrence Livermore (LLNL), Sandia, and Los Alamos.
It provides high-fidelity simulations in support of NNSA's core mission of ensuring the safety, security, and effectiveness of the nuclear stockpile, a capability that improved markedly with Sierra's arrival. Sierra is the result of years of procurement, development, code implementation, and deployment, carried out in close collaboration with IBM, NVIDIA, and Mellanox and involving hundreds of computer scientists, developers, and operations staff.
Sierra has a peak performance of 125 petaflops, that is, 125 quadrillion floating-point operations per second. Depending on the application, it can run six to ten times faster than its predecessor.
That predecessor, Sequoia, also at LLNL, peaks at 20 petaflops. The IBM-built Sierra, which remains among the fastest supercomputers in the world, offers four to six times Sequoia's sustained performance and five to seven times its workload performance.
Sierra is also about five times more energy-efficient than Sequoia, saving about 11 megawatts of power. It integrates two kinds of processor chips: IBM POWER9 CPUs and NVIDIA Volta graphics processing units.
Sunway TaihuLight topped the Top500 list of the world's 500 most powerful supercomputers, performing 93 quadrillion calculations per second and dethroning China's own Tianhe-2.
Sunway TaihuLight is nearly three times as fast as Tianhe-2, which performs 33.86 quadrillion calculations per second. TaihuLight achieves this using 10,649,600 computing cores spread across 40,960 nodes.
The Chinese National Research Center developed the new system and installed it at the National Supercomputing Center in Wuxi.
It has also proved more energy-efficient than Tianhe-2, which had been the world's fastest supercomputer for the three previous years.
Sunway TaihuLight can perform about 6 billion calculations per second for every watt of power, roughly three times the energy efficiency of Tianhe-2.
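Those two figures, 93 quadrillion calculations per second and 6 billion calculations per watt, imply a total power draw that a back-of-the-envelope sketch can check, using only the numbers quoted above:

```python
# Back-of-the-envelope: power draw implied by the quoted figures.
linpack_flops = 93e15    # 93 quadrillion calculations per second
flops_per_watt = 6e9     # ~6 billion calculations per watt

implied_power_mw = linpack_flops / flops_per_watt / 1e6  # watts -> megawatts
print(f"Implied power draw: about {implied_power_mw:.1f} MW")
```

That lands close to TaihuLight's reported draw of roughly 15 megawatts, so the two quoted figures are consistent with each other.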
China channeled 1.8 billion yuan into the development of Sunway TaihuLight. About one-third of that funding came from the central government; the local governments of Jiangsu province and the city of Wuxi provided the other two-thirds.
The Sunway TaihuLight uses a homegrown processor system. This shows China’s progress in developing and producing large-scale computer systems.
Each compute node in the system is built around the SW26010, a many-core processor chip. Each processor consists of four Management Processing Elements (MPEs), four clusters of 64 Computing Processing Elements (CPEs) for 260 cores in total, four memory controllers (MCs), and a network on chip connected to the system interface.
Each of the four MPE, CPE-cluster, and MC groups has access to 8 GB of DDR3 memory. The complete system comprises 40,960 such nodes.
HPC5 is a cluster, that is, a set of many computers working together to increase total output. Its capacity is three times that of its predecessor, HPC4.
Both HPC5 and HPC4 use a hybrid architecture designed to maximize total performance: computation is offloaded to GPUs, which keeps energy consumption to a minimum. This way, the system can process large volumes of data with much greater efficiency.
Indeed, these GPUs deliver an enormous number of operations for every watt of electricity consumed. Every node comes with two CPU sockets, and HPC5 has four sockets for graphics accelerators per node, while HPC4 had two. In total, the system has access to more than 3,400 computing processors and 10,000 graphics cards.
This computing power lets HPC5 process subsoil data using highly sophisticated in-house algorithms, from which the system creates in-depth subsoil models. HPC5 processes geophysical and seismic data from all over the world.
With these models, geologists can determine what lies several kilometers below the surface. Indeed, that is how Zohr, the largest gas field in the Mediterranean, was found.
The National University of Defense Technology developed Tianhe-2 in the city of Changsha in central China. It can sustain computation at 33.86 petaflops, the equivalent of 33,860 trillion operations per second.
Tianhe-2 has 3,120,000 processor cores powered by Intel's Ivy Bridge Xeon and Xeon Phi chips.
Moreover, it received funding from the Chinese government's 863 high-tech program, whose goals were to make the country less dependent on overseas rivals and to make its hi-tech industry more competitive.
Many of its features are unique to China. They include:
- A custom-designed network of interconnections that routes data across the device.
- 4,096 Galaxy FT-1500 CPUs, developed by the National University of Defense Technology, which handle particular weather-forecasting and national-defense applications.
- The Kylin operating system, which the university developed as a high-security alternative for users in government, defense, electricity, aerospace, and other critical industries.
On paper, the output of Tianhe-2 is double that of the next machines on this list.
According to the National University of Defense Technology, Tianhe-2 will handle simulation, research, and government security applications.
But it carries a $385 million price tag and occupies 7,750 square feet of space, which some consider overkill for the work it will perform.
The new MARCONI 100 cluster runs on the IBM POWER9 architecture with NVIDIA Volta GPUs. Cineca acquired it under the European PPI4HPC initiative.
This system paves the way for Leonardo, a pre-exascale supercomputer due for completion in 2021.
It has been open to Italian public and industrial researchers since April 2020, with a computational capacity of around 32 petaflops. Based on measurements performed by CINECA, Marconi 100 delivers almost 32 quadrillion calculations per second.
Marconi 100, through the PRACE project, will assist European and Italian researchers.
Oak Ridge National Laboratory unveiled Summit as the world's smartest and most powerful supercomputer.
Summit pumps out 200 million billion floating-point operations per second (200 petaflops) for high-performance computing.
For AI applications, it performs over 3 billion billion (3 × 10^18) mixed-precision operations per second (3 exaops), using approximately 28,000 NVIDIA Volta Tensor Core GPUs.
Summit combines HPC and AI techniques, giving researchers the ability to automate and accelerate developments in areas such as health, energy, and engineering.
Summit runs eight times faster than Titan, the previous supercomputer, and will help scientists and researchers explore new technologies.
Using machine learning and deep learning at scale can improve our economy, as well as health care and energy development.
Piz Daint supercomputer
Piz Daint, at the Swiss National Supercomputing Centre, delivers 7.8 petaflops, meaning it can perform 7.8 quadrillion mathematical operations per second.
In a single day, it can complete computations that would take a typical laptop about 900 years.
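The 900-year claim is easy to sanity-check. The sketch below assumes a laptop sustaining 25 gigaflops; that laptop figure is a round number of our own, not one from the article:

```python
# Rough check of the laptop comparison above.
piz_daint_flops = 7.8e15   # 7.8 petaflops, quoted above
laptop_flops = 25e9        # assumed laptop sustained rate (not from the article)

speedup = piz_daint_flops / laptop_flops
laptop_years_per_day = speedup / 365   # one Piz Daint day in laptop-years
print(f"Speedup: ~{speedup:,.0f}x")
print(f"Laptop-years of work per Piz Daint day: ~{laptop_years_per_day:.0f}")
```

Under this assumption the machine does roughly 855 laptop-years of work per day, in the same ballpark as the article's 900-year figure.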
Piz Daint is a 28-cabinet Cray XC30 system with 5,272 compute nodes. Each node runs a 64-bit 8-core Intel Sandy Bridge CPU (Intel® Xeon® E5-2670) alongside an NVIDIA® Tesla® K20X with 6 GB of GDDR5 memory and 32 GB of host memory.
The nodes connect with a dragonfly network topology through Cray’s proprietary “Aries” interconnect.
The Trinity supercomputer provides the NNSA Nuclear Security Enterprise with enhanced computing capability and supports increasingly demanding workloads, e.g., increased geometric and physics fidelity, while maintaining total time-to-solution requirements.
Trinity supports NNSA's Stockpile Stewardship program certification and assessments, ensuring that the nuclear stockpile is safe, stable, and secure.
Both Los Alamos National Laboratory and Sandia National Laboratories run the Trinity project. The Nicholas Metropolis Center for Modeling and Simulation hosts the computer.
Trinity was built in two phases. The first integrated Intel Xeon Haswell processors.
The second phase added Intel Xeon Phi Knights Landing processors, which brought a significant performance boost.
Combined, there are 301,952 Haswell and 678,912 Knights Landing cores, providing a cumulative peak output of over 40 petaflops.
In 2018, the National Science Foundation awarded the Texas Advanced Computing Center a $60 million grant to deploy a new petascale computing system, Frontera.
Thus, Frontera opened up new possibilities, offering computational capabilities that enable researchers to tackle even larger challenges.
Frontera comprises two computing subsystems: a primary subsystem focused on double-precision computing and a second subsystem focused on single-precision, streaming-memory computing.
Frontera also has several storage systems, interfaces to cloud and archive resources, and a set of application nodes for hosting virtual servers.
On the High-Performance LINPACK benchmark, Frontera achieved 23.5 petaflops; the estimated peak performance of the primary system is 38.7 petaflops.
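Comparing the measured LINPACK score with the estimated peak gives the benchmark efficiency. A quick sketch using the two figures quoted above:

```python
# LINPACK efficiency: measured score as a fraction of estimated peak.
measured_pf = 23.5   # petaflops achieved on HPL
peak_pf = 38.7       # estimated peak of the primary system, petaflops

efficiency = measured_pf / peak_pf
print(f"LINPACK efficiency: {efficiency:.0%}")
```

That works out to about 61 percent of peak on this benchmark.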