IBM Blue Gene

Blue Gene was an IBM project aimed at designing supercomputers that could reach operating speeds in the petaFLOPS (PFLOPS) range with low power consumption.

IBM Blue Gene
A Blue Gene/P supercomputer at Argonne National Laboratory
Developer: IBM
Type: Supercomputer platform
Release date: BG/L: Feb 1999; BG/P: June 2007; BG/Q: Nov 2011
Discontinued: 2015
CPU: BG/L: PowerPC 440; BG/P: PowerPC 450; BG/Q: PowerPC A2
Predecessor: IBM RS/6000 SP; QCDOC
Successor: IBM PERCS
Hierarchy of Blue Gene processing units

The project created three generations of supercomputers, Blue Gene/L, Blue Gene/P, and Blue Gene/Q. During their deployment, Blue Gene systems often led the TOP500[1] and Green500[2] rankings of the most powerful and most power-efficient supercomputers, respectively. Blue Gene systems have also consistently scored top positions in the Graph500 list.[3] The project was awarded the 2009 National Medal of Technology and Innovation.[4]

As of 2015, IBM appears to have ended development of the Blue Gene family, though no formal announcement has been made.[5] IBM has since focused its supercomputer efforts on the OpenPower platform, using accelerators such as FPGAs and GPUs to address the diminishing returns of Moore's law.[6]

History

In December 1999, IBM announced a US$100 million research initiative for a five-year effort to build a massively parallel computer, to be applied to the study of biomolecular phenomena such as protein folding.[7] The project had two main goals: to advance our understanding of the mechanisms behind protein folding via large-scale simulation, and to explore novel ideas in massively parallel machine architecture and software. Major areas of investigation included: how to use this novel platform to effectively meet its scientific goals, how to make such massively parallel machines more usable, and how to achieve performance targets at a reasonable cost, through novel machine architectures. The initial design for Blue Gene was based on an early version of the Cyclops64 architecture, designed by Monty Denneau. The initial research and development work was pursued at IBM T. J. Watson Research Center and led by William R. Pulleyblank.[8]

At IBM, Alan Gara started working on an extension of the QCDOC architecture into a more general-purpose supercomputer: The 4D nearest-neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other; and a parallel I/O subsystem was added. DOE started funding the development of this system and it became known as Blue Gene/L (L for Light); development of the original Blue Gene system continued under the name Blue Gene/C (C for Cyclops) and, later, Cyclops64.

In November 2004 a 16-rack system, with each rack holding 1,024 compute nodes, achieved first place in the TOP500 list, with a Linpack performance of 70.72 TFLOPS.[1] It thereby overtook NEC's Earth Simulator, which had held the title of the fastest computer in the world since 2002. From 2004 through 2007 the Blue Gene/L installation at LLNL[9] gradually expanded to 104 racks, achieving 478 TFLOPS Linpack and 596 TFLOPS peak. The LLNL BlueGene/L installation held the first position in the TOP500 list for 3.5 years, until it was overtaken in June 2008 by IBM's Cell-based Roadrunner system at Los Alamos National Laboratory, which was the first system to surpass the 1 PetaFLOPS mark. The system was built at IBM's plant in Rochester, Minnesota.

While the LLNL installation was the largest Blue Gene/L installation, many smaller installations followed. In November 2006, there were 27 computers on the TOP500 list using the Blue Gene/L architecture. All these computers were listed as having an architecture of eServer Blue Gene Solution. For example, three racks of Blue Gene/L were housed at the San Diego Supercomputer Center.

While the TOP500 measures performance on a single benchmark application, Linpack, Blue Gene/L also set records for performance on a wider set of applications. Blue Gene/L was the first supercomputer ever to run over 100 TFLOPS sustained on a real-world application, namely a three-dimensional molecular dynamics code (ddcMD), simulating solidification (nucleation and growth processes) of molten metal under high pressure and temperature conditions. This achievement won the 2005 Gordon Bell Prize.

In June 2006, NNSA and IBM announced that Blue Gene/L achieved 207.3 TFLOPS on a quantum chemical application (Qbox).[10] At Supercomputing 2006,[11] Blue Gene/L was awarded the winning prize in all HPC Challenge Classes of awards.[12] In 2007, a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second (the network was run at 1/10 of normal speed for 10 seconds).[13]

The name

The name Blue Gene comes from what it was originally designed to do: help biologists understand the processes of protein folding and gene development.[14] "Blue" is a traditional moniker that IBM uses for many of its products and for the company itself. The original Blue Gene design was renamed "Blue Gene/C" and eventually Cyclops64. The "L" in Blue Gene/L comes from "Light", as that design's original name was "Blue Light". The "P" version was designed to be a petascale design. "Q" is just the letter after "P". There is no Blue Gene/R.[15]

Major features

The Blue Gene/L supercomputer was unique in the following aspects:[16]

  • Trading the speed of processors for lower power consumption. Blue Gene/L used low frequency and low power embedded PowerPC cores with floating-point accelerators. While the performance of each chip was relatively low, the system could achieve better power efficiency for applications that could use large numbers of nodes.
  • Dual processors per node with two working modes: co-processor mode where one processor handles computation and the other handles communication; and virtual-node mode, where both processors are available to run user code, but the processors share both the computation and the communication load.
  • System-on-a-chip design. Components were embedded on a single chip for each node, with the exception of 512 MB external DRAM.
  • A large number of nodes (scalable in increments of 1024 up to at least 65,536).
  • Three-dimensional torus interconnect with auxiliary networks for global communications (broadcast and reductions), I/O, and management.
  • Lightweight OS per node for minimum system overhead (system noise).

Architecture

The Blue Gene/L architecture was an evolution of the QCDSP and QCDOC architectures. Each Blue Gene/L Compute or I/O node was a single ASIC with associated DRAM memory chips. The ASIC integrated two 700 MHz PowerPC 440 embedded processors, each with a double-pipeline-double-precision Floating-Point Unit (FPU), a cache sub-system with built-in DRAM controller and the logic to support multiple communication sub-systems. The dual FPUs gave each Blue Gene/L node a theoretical peak performance of 5.6 GFLOPS (gigaFLOPS). The two CPUs were not cache coherent with one another.

Compute nodes were packaged two per compute card, with 16 compute cards plus up to 2 I/O nodes per node board. There were 32 node boards per cabinet/rack.[17] Because all essential sub-systems were integrated on a single chip and low-power logic was used, each Compute or I/O node dissipated little power (about 17 watts, including DRAM). This allowed aggressive packaging of up to 1024 compute nodes, plus additional I/O nodes, in a standard 19-inch rack, within reasonable limits of electrical power supply and air cooling. The performance metrics, in terms of FLOPS per watt, FLOPS per m² of floorspace and FLOPS per unit cost, allowed scaling up to very high performance. With so many nodes, component failures were inevitable. The system was able to electrically isolate faulty components, down to a granularity of half a rack (512 compute nodes), to allow the machine to continue to run.
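The quoted figures compose by straightforward arithmetic. The C sketch below is a back-of-the-envelope check rather than IBM documentation: the 4 double-precision flops per core per cycle is the factor implied by the quoted 5.6 GFLOPS peak for two 700 MHz cores, and the per-rack power estimate counts compute nodes only.

    #include <stdio.h>

    int main(void) {
        /* Blue Gene/L node: two PowerPC 440 cores at 700 MHz.
           Assumes 4 double-precision flops per core per cycle
           (the factor implied by the quoted 5.6 GFLOPS node peak). */
        const double clock_hz = 700e6;
        const int cores_per_node = 2;
        const int flops_per_core_per_cycle = 4;

        double node_peak = clock_hz * cores_per_node * flops_per_core_per_cycle;

        /* Packaging: 2 nodes/compute card, 16 cards/node board, 32 boards/rack. */
        const int nodes_per_rack = 2 * 16 * 32;       /* 1024 */
        const double watts_per_node = 17.0;           /* approx., incl. DRAM */

        printf("node peak     : %.1f GFLOPS\n", node_peak / 1e9);                   /* 5.6  */
        printf("nodes per rack: %d\n", nodes_per_rack);                             /* 1024 */
        printf("rack peak     : %.1f TFLOPS\n", node_peak * nodes_per_rack / 1e12); /* ~5.7 */
        printf("rack power    : ~%.0f kW (compute nodes only)\n",
               watts_per_node * nodes_per_rack / 1e3);                              /* ~17  */
        printf("104 racks     : ~%.0f TFLOPS peak (the LLNL figure above)\n",
               104 * node_peak * nodes_per_rack / 1e12);                            /* ~596 */
        return 0;
    }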

Each Blue Gene/L node was attached to three parallel communications networks: a 3D toroidal network for peer-to-peer communication between compute nodes, a collective network for collective communication (broadcasts and reduce operations), and a global interrupt network for fast barriers. The I/O nodes, which ran the Linux operating system, provided communication to storage and external hosts via an Ethernet network. The I/O nodes handled filesystem operations on behalf of the compute nodes. Finally, a separate and private Ethernet network provided access to any node for configuration, booting and diagnostics. To allow multiple programs to run concurrently, a Blue Gene/L system could be partitioned into electronically isolated sets of nodes. The number of nodes in a partition had to be a positive integer power of 2, with at least 2⁵ = 32 nodes. To run a program on Blue Gene/L, a partition of the computer first had to be reserved. The program was then loaded and run on all the nodes within the partition, and no other program could access nodes within the partition while it was in use. Upon completion, the partition nodes were released for future programs to use.
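The power-of-two partition rule is easy to state in code. The following is a minimal illustrative check of that rule; valid_partition_size is a hypothetical helper written for this article, not part of IBM's control-system software.

    #include <stdbool.h>
    #include <stdio.h>

    /* Hypothetical helper: a Blue Gene/L partition size is valid only if
       it is a power of two with at least 32 (2^5) nodes. */
    static bool valid_partition_size(unsigned n) {
        return n >= 32 && (n & (n - 1)) == 0;
    }

    int main(void) {
        const unsigned sizes[] = { 16, 32, 48, 512, 1024, 65536 };
        for (size_t i = 0; i < sizeof sizes / sizeof sizes[0]; i++)
            printf("%6u nodes: %s\n", sizes[i],
                   valid_partition_size(sizes[i]) ? "valid partition" : "invalid");
        return 0;
    }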

Blue Gene/L compute nodes used a minimal operating system supporting a single user program. Only a subset of POSIX calls was supported, and only one process could run at a time per node in co-processor mode, or one process per CPU in virtual-node mode. Programmers needed to implement green threads in order to simulate local concurrency. Application development was usually performed in C, C++, or Fortran using MPI for communication. However, some scripting languages such as Ruby[18] and Python[19] were also ported to the compute nodes.
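A typical application was therefore an ordinary message-passing program. The minimal example below is generic C/MPI rather than anything Blue Gene-specific; on Blue Gene/L it would have been cross-compiled with the system toolchain and launched on a reserved partition, with one rank per node in co-processor mode or one rank per CPU in virtual-node mode.

    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's index within the partition */
        MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of MPI processes             */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();
        return 0;
    }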

IBM published BlueMatter, the application developed to exercise Blue Gene/L, as open source.[20] This serves to document how the torus and collective interfaces were used by applications, and may serve as a base for others to exercise the current generation of supercomputers.

Blue Gene/P

 
A Blue Gene/P node card
 
A schematic overview of a Blue Gene/P supercomputer

In June 2007, IBM unveiled Blue Gene/P, the second generation of the Blue Gene series of supercomputers and designed through a collaboration that included IBM, LLNL, and Argonne National Laboratory's Leadership Computing Facility.[21]

Design

The design of Blue Gene/P is a technology evolution from Blue Gene/L. Each Blue Gene/P Compute chip contains four PowerPC 450 processor cores, running at 850 MHz. The cores are cache coherent and the chip can operate as a 4-way symmetric multiprocessor (SMP). The memory subsystem on the chip consists of small private L2 caches, a central shared 8 MB L3 cache, and dual DDR2 memory controllers. The chip also integrates the logic for node-to-node communication, using the same network topologies as Blue Gene/L, but at more than twice the bandwidth. A compute card contains a Blue Gene/P chip with 2 or 4 GB DRAM, comprising a "compute node". A single compute node has a peak performance of 13.6 GFLOPS. 32 Compute cards are plugged into an air-cooled node board. A rack contains 32 node boards (thus 1024 nodes, 4096 processor cores).[22] By using many small, low-power, densely packaged chips, Blue Gene/P exceeded the power efficiency of other supercomputers of its generation, and at 371 MFLOPS/W Blue Gene/P installations ranked at or near the top of the Green500 lists in 2007-2008.[2]
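As a rough cross-check of these figures, the sketch below again assumes 4 double-precision flops per core per cycle, the factor implied by the quoted 13.6 GFLOPS node peak at 850 MHz.

    #include <stdio.h>

    int main(void) {
        /* Blue Gene/P node: four PowerPC 450 cores at 850 MHz,
           assumed 4 double-precision flops per core per cycle. */
        double node_peak = 850e6 * 4 * 4;               /* 13.6 GFLOPS */

        /* 32 compute cards per node board, 32 node boards per rack. */
        int nodes_per_rack = 32 * 32;                   /* 1024 nodes, 4096 cores */
        double rack_peak = node_peak * nodes_per_rack;  /* ~13.9 TFLOPS */

        printf("node peak: %.1f GFLOPS\n", node_peak / 1e9);
        printf("rack peak: %.1f TFLOPS\n", rack_peak / 1e12);
        /* 72 racks (the upgraded JUGENE) give about 1 PFLOPS peak,
           matching the figure quoted in the Installations list below. */
        printf("72 racks : %.2f PFLOPS\n", 72 * rack_peak / 1e15);
        return 0;
    }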

Installations

The following is an incomplete list of Blue Gene/P installations. As of November 2009, the TOP500 list contained 15 Blue Gene/P installations of 2 racks (2,048 nodes, 8,192 processor cores, 23.86 TFLOPS Linpack) or larger.[1]

  • On November 12, 2007, the first Blue Gene/P installation, JUGENE, with 16 racks (16,384 nodes, 65,536 processors) was running at Forschungszentrum Jülich in Germany with a performance of 167 TFLOPS.[23] When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world. In 2009, JUGENE was upgraded to 72 racks (73,728 nodes, 294,912 processor cores) with 144 terabytes of memory and 6 petabytes of storage, and achieved a peak performance of 1 PetaFLOPS. This configuration incorporated new air-to-water heat exchangers between the racks, reducing the cooling cost substantially.[24] JUGENE was shut down in July 2012 and replaced by the Blue Gene/Q system JUQUEEN.
  • The 40-rack (40960 nodes, 163840 processor cores) "Intrepid" system at Argonne National Laboratory was ranked #3 on the June 2008 Top 500 list.[25] The Intrepid system is one of the major resources of the INCITE program, in which processor hours are awarded to "grand challenge" science and engineering projects in a peer-reviewed competition.
  • Lawrence Livermore National Laboratory installed a 36-rack Blue Gene/P installation, "Dawn", in 2009.
  • The King Abdullah University of Science and Technology (KAUST) installed a 16-rack Blue Gene/P installation, "Shaheen", in 2009.
  • In 2012, a 6-rack Blue Gene/P was installed at Rice University and is jointly administered with the University of São Paulo.[26]
  • A 2.5 rack Blue Gene/P system is the central processor for the Low Frequency Array for Radio astronomy (LOFAR) project in the Netherlands and surrounding European countries. This application uses the streaming data capabilities of the machine.
  • A 2-rack Blue Gene/P was installed in September 2008 in Sofia, Bulgaria, and is operated by the Bulgarian Academy of Sciences and Sofia University.[27]
  • In 2010, a 2-rack (8192-core) Blue Gene/P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative.[28]
  • In 2011, a 2-rack Blue Gene/P was installed at University of Canterbury in Christchurch, New Zealand.
  • In 2012, a 2-rack Blue Gene/P was installed at Rutgers University in Piscataway, New Jersey. It was dubbed "Excalibur" as an homage to the Rutgers mascot, the Scarlet Knight.[29]
  • In 2008, a 1-rack (1024 nodes) Blue Gene/P with 180 TB of storage was installed at the University of Rochester in Rochester, New York.[30]
  • The first Blue Gene/P in the ASEAN region was installed in 2010 at Universiti Brunei Darussalam's research centre, the UBD-IBM Centre. The installation has prompted research collaboration between the university and IBM Research on climate modeling that will investigate the impact of climate change on flood forecasting, crop yields, renewable energy and the health of rainforests in the region, among other areas.[31]
  • In 2013, a 1-rack Blue Gene/P was donated to the Department of Science and Technology for weather forecasting, disaster management, precision agriculture, and health research. It is housed in the National Computer Center in Diliman, Quezon City, under the auspices of the Philippine Genome Center (PGC) Core Facility for Bioinformatics (CFB) at UP Diliman.[32]

Applications

  • Veselin Topalov, the challenger to the World Chess Champion title in 2010, confirmed in an interview that he had used a Blue Gene/P supercomputer during his preparation for the match.[33]
  • The Blue Gene/P computer has been used to simulate approximately one percent of a human cerebral cortex, containing 1.6 billion neurons with approximately 9 trillion connections.[34]
  • The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2.0 workloads running at scale on a Blue Gene/P. Their paper, published in the ACM Operating Systems Review, describes a kernel driver that tunnels Ethernet over the tree network, which results in all-to-all TCP/IP connectivity.[35][36] Running standard Linux software like MySQL, their performance results on SpecJBB rank among the highest on record.[citation needed]
  • In 2011, a Rutgers University / IBM / University of Texas team linked the KAUST Shaheen installation together with a Blue Gene/P installation at the IBM Watson Research Center into a "federated high performance computing cloud", winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application.[37]

Blue Gene/Q

 
The IBM Blue Gene/Q installed at Argonne National Laboratory, near Chicago, Illinois

The third supercomputer design in the Blue Gene series, Blue Gene/Q has a peak performance of 20 petaflops,[38] reaching a LINPACK benchmark performance of 17 petaflops. Blue Gene/Q continues to expand and enhance the Blue Gene/L and /P architectures.

Design

The Blue Gene/Q Compute chip is an 18-core chip. The 64-bit A2 processor cores are 4-way simultaneously multithreaded, and run at 1.6 GHz. Each processor core has a SIMD quad-vector double-precision floating-point unit (IBM QPX). 16 processor cores are used for computing, and a 17th core handles operating system assist functions such as interrupts, asynchronous I/O, MPI pacing and RAS. The 18th core is a redundant spare, included to increase manufacturing yield; the spared-out core is shut down in functional operation. The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache, operating at half core speed. The L2 cache is multi-versioned, supporting transactional memory and speculative execution, and has hardware support for atomic operations.[39] L2 cache misses are handled by two built-in DDR3 memory controllers running at 1.33 GHz. The chip also integrates logic for chip-to-chip communications in a 5D torus configuration, with 2 GB/s chip-to-chip links. The Blue Gene/Q chip is manufactured on IBM's copper SOI process at 45 nm. It delivers a peak performance of 204.8 GFLOPS at 1.6 GHz, drawing about 55 watts. The chip measures 19×19 mm (359.5 mm²) and comprises 1.47 billion transistors. The chip is mounted on a compute card along with 16 GB DDR3 DRAM (i.e., 1 GB for each user processor core).[40]

A Q32[41] compute drawer contains 32 compute cards, each water-cooled.[42] A "midplane" (crate) contains 16 Q32 compute drawers for a total of 512 compute nodes, electrically interconnected in a 5D torus configuration (4×4×4×4×2). Beyond the midplane level, all connections are optical. Racks have two midplanes, thus 32 compute drawers, for a total of 1024 compute nodes, 16,384 user cores and 16 TB RAM.[42]
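The chip and packaging figures above compose into per-rack totals. The sketch below assumes 8 double-precision flops per core per cycle, the factor implied by 204.8 GFLOPS at 1.6 GHz for 16 user cores (a 4-wide QPX fused multiply-add); the resulting rack peak of about 209.7 TFLOPS matches the single-rack systems listed under Installations below.

    #include <stdio.h>

    int main(void) {
        /* Blue Gene/Q chip: 16 user cores at 1.6 GHz, each with a 4-wide
           QPX FMA unit, i.e. 8 double-precision flops per core per cycle. */
        double chip_peak = 16 * 1.6e9 * 8;                        /* 204.8 GFLOPS */

        /* Packaging: 32 compute cards per drawer, 16 drawers per midplane,
           2 midplanes per rack. */
        int nodes_per_midplane  = 32 * 16;                        /*   512 */
        int nodes_per_rack      = nodes_per_midplane * 2;         /*  1024 */
        int user_cores_per_rack = nodes_per_rack * 16;            /* 16384 */
        double ram_per_rack_tb  = nodes_per_rack * 16 / 1024.0;   /* 16 TB */

        printf("chip peak: %.1f GFLOPS\n", chip_peak / 1e9);
        printf("rack     : %d nodes, %d user cores, %.0f TB RAM\n",
               nodes_per_rack, user_cores_per_rack, ram_per_rack_tb);
        printf("rack peak: %.1f TFLOPS\n", chip_peak * nodes_per_rack / 1e12);  /* ~209.7 */
        return 0;
    }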

Separate I/O drawers, placed at the top of a rack or in a separate rack, are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking.[42]

Performance

At the time of the Blue Gene/Q system announcement in November 2011, an initial 4-rack Blue Gene/Q system (4096 nodes, 65536 user processor cores) achieved #17 in the TOP500 list[1] with 677.1 TeraFLOPS Linpack, outperforming the original 2007 104-rack BlueGene/L installation described above. The same 4-rack system achieved the top position in the Graph500 list[3] with over 250 GTEPS (giga traversed edges per second). Blue Gene/Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2.1 GFLOPS/W.[2]
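Set against the rack-level peak derived above, the 677.1 TFLOPS Linpack result corresponds to roughly 80% of the 4-rack system's theoretical peak; a quick calculation:

    #include <stdio.h>

    int main(void) {
        /* The 4-rack Blue Gene/Q system from the November 2011 announcement. */
        double rack_peak_tflops   = 1024 * 204.8 / 1000.0;  /* ~209.7 TFLOPS per rack */
        double system_peak_tflops = 4 * rack_peak_tflops;   /* ~838.9 TFLOPS          */
        double linpack_tflops     = 677.1;                  /* quoted Linpack result  */

        printf("4-rack peak       : %.1f TFLOPS\n", system_peak_tflops);
        printf("Linpack efficiency: %.0f%%\n", 100.0 * linpack_tflops / system_peak_tflops);
        return 0;
    }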

In June 2012, Blue Gene/Q installations took the top positions in all three lists: TOP500,[1] Graph500[3] and Green500.[2]

Installations

The following is an incomplete list of Blue Gene/Q installations. As of June 2012, the TOP500 list contained 20 Blue Gene/Q installations of half a rack (512 nodes, 8,192 processor cores, 86.35 TFLOPS Linpack) or larger.[1] At a (size-independent) power efficiency of about 2.1 GFLOPS/W, all of these systems also populated the top of the June 2012 Green500 list.[2]

  • A Blue Gene/Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory (LLNL) beginning in 2011 and was fully deployed in June 2012. It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research. It consists of 96 racks (comprising 98,304 compute nodes with 1.6 million processor cores and 1.6 PB of memory; see the sketch after this list for how these figures follow from the per-rack numbers) covering an area of about 3,000 square feet (280 m²).[43] In June 2012, the system was ranked as the world's fastest supercomputer,[44][45] at 20.1 PFLOPS peak and 16.32 PFLOPS sustained (Linpack), drawing up to 7.9 megawatts of power.[1] In June 2013, its performance was listed at 17.17 PFLOPS sustained (Linpack).[1]
  • A 10 PFLOPS (peak) Blue Gene/Q system called Mira was installed at Argonne National Laboratory in the Argonne Leadership Computing Facility in 2012. It consists of 48 racks (49,152 compute nodes), with 70 PB of disk storage (470 GB/s I/O bandwidth).[46][47]
  • JUQUEEN at the Forschungszentrum Jülich is a 28-rack Blue Gene/Q system, and was from June 2013 to November 2015 the highest ranked machine in Europe in the Top500.[1]
  • Vulcan at Lawrence Livermore National Laboratory (LLNL) is a 24-rack, 5 PFLOPS (peak), Blue Gene/Q system that was commissioned in 2012 and decommissioned in 2019.[48] Vulcan served Lab-industry projects through Livermore's High Performance Computing (HPC) Innovation Center[49] as well as academic collaborations in support of DOE/National Nuclear Security Administration (NNSA) missions.[50]
  • Fermi at the CINECA Supercomputing facility, Bologna, Italy,[51] is a 10-rack, 2 PFLOPS (peak), Blue Gene/Q system.
  • As part of DiRAC, the EPCC hosts a 6-rack (6,144-node) Blue Gene/Q system at the University of Edinburgh.[52]
  • A five-rack Blue Gene/Q system with additional compute hardware, called AMOS, was installed at Rensselaer Polytechnic Institute in 2013.[53] The system was rated at 1,048.6 teraflops, making it the most powerful supercomputer at any private university, and the third most powerful supercomputer among all universities, in 2014.[54]
  • An 838 TFLOPS (peak) Blue Gene/Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012.[55] This system is part of a collaboration between IBM and VLSCI, with the aims of improving diagnostics, finding new drug targets, refining treatments and furthering our understanding of diseases.[56] The system consists of 4 racks, with 65,536 cores, 64 TB of RAM and 350 TB of storage.[57]
  • A 209 TFLOPS (peak) Blue Gene/Q system was installed at the University of Rochester in July 2012.[58] This system is part of the Health Sciences Center for Computational Innovation (archived 2012-10-19 at the Wayback Machine), which is dedicated to the application of high-performance computing to research programs in the health sciences. The system consists of a single rack (1,024 compute nodes) with 400 TB of high-performance storage.[59]
  • A 209 TFLOPS peak (172 TFLOPS LINPACK) Blue Gene/Q system called Lemanicus was installed at the EPFL in March 2013.[60] This system belongs to the Center for Advanced Modeling Science (CADMOS),[61] a collaboration between the three main research institutions on the shore of Lake Geneva in the French-speaking part of Switzerland: the University of Lausanne, the University of Geneva and EPFL. The system consists of a single rack (1,024 compute nodes) with 2.1 PB of IBM GPFS-GSS storage.
  • A half-rack Blue Gene/Q system, with about 100 TFLOPS (peak), called Cumulus, was installed at the A*STAR Computational Resource Centre, Singapore, in early 2011.[62]
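As noted in the Sequoia entry above, its headline figures follow directly from the per-rack numbers. The sketch below treats memory as 16 GB per node counted in decimal petabytes, so the total is approximate.

    #include <stdio.h>

    int main(void) {
        /* Sequoia: 96 Blue Gene/Q racks of 1024 compute nodes each. */
        int racks = 96;
        long nodes = racks * 1024L;                   /* 98,304                    */
        long user_cores = nodes * 16;                 /* 1,572,864 (~1.6 million)  */
        double memory_pb = nodes * 16.0 / 1e6;        /* 16 GB per node -> ~1.6 PB */
        double peak_pflops = nodes * 204.8e9 / 1e15;  /* ~20.1 PFLOPS              */

        printf("nodes : %ld\n", nodes);
        printf("cores : %ld\n", user_cores);
        printf("memory: ~%.1f PB\n", memory_pb);
        printf("peak  : ~%.1f PFLOPS\n", peak_pflops);
        return 0;
    }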

Applications

Record-breaking science applications have been run on the BG/Q, the first to cross 10 petaflops of sustained performance. The cosmology simulation framework HACC achieved almost 14 petaflops with a 3.6 trillion particle benchmark run,[63] while the Cardioid code,[64][65] which models the electrophysiology of the human heart, achieved nearly 12 petaflops with a near real-time simulation, both on Sequoia. A fully compressible flow solver has also achieved 14.4 PFLOP/s (originally 11 PFLOP/s) on Sequoia, 72% of the machine's nominal peak performance.[66]

See also

  • CNK operating system
  • INK operating system
  • Deep Blue (chess computer)

References

  1. ^ a b c d e f g h i "November 2004 - TOP500 Supercomputer Sites". Top500.org. Retrieved 13 December 2019.
  2. ^ a b c d e "Green500 - TOP500 Supercomputer Sites". Green500.org. Archived from the original on 26 August 2016. Retrieved 13 October 2017.
  3. ^ a b c "The Graph500 List". Archived from the original on 2011-12-27.
  4. ^ Harris, Mark (September 18, 2009). "Obama honours IBM supercomputer". Techradar.com. Retrieved 2009-09-18.
  5. ^ "Supercomputing Strategy Shifts in a World Without BlueGene". Nextplatform.com. 14 April 2015. Retrieved 13 October 2017.
  6. ^ "IBM to Build DoE's Next-Gen Coral Supercomputers". EE Times. Archived from the original on 30 April 2017. Retrieved 13 October 2017.
  7. ^ "Blue Gene: A Vision for Protein Science using a Petaflop Supercomputer" (PDF). IBM Systems Journal. 40 (2). 2017-10-23.
  8. ^ "A Talk with the Brain behind Blue Gene", BusinessWeek, November 6, 2001, archived from the original on December 11, 2014
  9. ^ "BlueGene/L". Archived from the original on 2011-07-18. Retrieved 2007-10-05.
  10. ^ HPCwire.com. Archived from the original on September 28, 2007.
  11. ^ "SC06". sc06.supercomputing.org. Retrieved 13 October 2017.
  12. ^ "HPC Challenge Award Competition". Archived from the original on 2006-12-11. Retrieved 2006-12-03.
  13. ^ "Mouse brain simulated on computer". BBC News. April 27, 2007. Archived from the original on 2007-05-25.
  14. ^ "IBM100 - Blue Gene". 03.ibm.com. 7 March 2012. Retrieved 13 October 2017.
  15. ^ Kunkel, Julian M.; Ludwig, Thomas; Meuer, Hans (12 June 2013). Supercomputing: 28th International Supercomputing Conference, ISC 2013, Leipzig, Germany, June 16-20, 2013. Proceedings. Springer. ISBN 9783642387500. Retrieved 13 October 2017 – via Google Books.
  16. ^ "Blue Gene". IBM Journal of Research and Development. 49 (2/3). 2005.
  17. ^ Kissel, Lynn. "BlueGene/L Configuration". asc.llnl.gov. Archived from the original on 17 February 2013. Retrieved 13 October 2017.
  18. ^ "Compute Node Ruby for Bluegene/L". www.ece.iastate.edu. Archived from the original on February 11, 2009.
  19. ^ William Scullin (March 12, 2011). Python for High Performance Computing. Atlanta, GA.
  20. ^ Blue Matter source code, retrieved February 28, 2020
  21. ^ "IBM Triples Performance of World's Fastest, Most Energy-Efficient Supercomputer". 2007-06-27. Retrieved 2011-12-24.
  22. ^ "Overview of the IBM Blue Gene/P project". IBM Journal of Research and Development. 52: 199–220. Jan 2008. doi:10.1147/rd.521.0199.
  23. ^ "Supercomputing: Jülich Amongst World Leaders Again". IDG News Service. 2007-11-12.
  24. ^ "IBM Press room - 2009-02-10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe's Most Powerful". 03.ibm.com. 2009-02-10. Retrieved 2011-03-11.
  25. ^ "Argonne's Supercomputer Named World's Fastest for Open Science, Third Overall". Mcs.anl.gov. Archived from the original on 8 February 2009. Retrieved 13 October 2017.
  26. ^ "Rice University, IBM partner to bring first Blue Gene supercomputer to Texas". news.rice.edu. Archived from the original on 2012-04-05. Retrieved 2012-04-01.
  27. ^ Вече си имаме и суперкомпютър [We now have a supercomputer of our own]. Archived 2009-12-23 at the Wayback Machine, Dir.bg, 9 September 2008.
  28. ^ "IBM Press room - 2010-02-11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research - Australia". 03.ibm.com. 2010-02-11. Retrieved 2011-03-11.
  29. ^ "Rutgers Gets Big Data Weapon in IBM Supercomputer - Hardware". Archived from the original on 2013-03-06. Retrieved 2013-09-07.
  30. ^ "University of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Health". University of Rochester Medical Center. May 11, 2012. Archived from the original on 2012-05-11.
  31. ^ "IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research". IBM News Room. 2010-10-13. Retrieved 18 October 2012.
  32. ^ Ronda, Rainier Allan. "DOST's supercomputer for scientists now operational". Philstar.com. Retrieved 13 October 2017.
  33. ^ "Topalov training with super computer Blue Gene P". Players.chessdo.com. Archived from the original on 19 May 2013. Retrieved 13 October 2017.
  34. ^ Kaku, Michio. Physics of the Future (New York: Doubleday, 2011), 91.
  35. ^ "Project Kittyhawk: A Global-Scale Computer". Research.ibm.com. Retrieved 13 October 2017.
  36. ^ Appavoo, Jonathan; Uhlig, Volkmar; Waterland, Amos. "Project Kittyhawk: Building a Global-Scale Computer" (PDF). Yorktown Heights, NY: IBM T.J. Watson Research Center. Archived from the original on 2008-10-31. Retrieved 2018-03-13.{{cite web}}: CS1 maint: bot: original URL status unknown (link)
  37. ^ "Rutgers-led Experts Assemble Globe-Spanning Supercomputer Cloud". News.rutgers.edu. 2011-07-06. Archived from the original on 2011-11-10. Retrieved 2011-12-24.
  38. ^ "IBM announces 20-petaflops supercomputer". Kurzweil. 18 November 2011. Retrieved 13 November 2012. IBM has announced the Blue Gene/Q supercomputer, with peak performance of 20 petaflops
  39. ^ "Memory Speculation of the Blue Gene/Q Compute Chip". Retrieved 2011-12-23.
  40. ^ "The Blue Gene/Q Compute chip" (PDF). Archived from the original (PDF) on 2015-04-29. Retrieved 2011-12-23.
  41. ^ "IBM Blue Gene/Q supercomputer delivers petascale computing for high-performance computing applications" (PDF). 01.ibm.com. Retrieved 13 October 2017.
  42. ^ a b c "IBM uncloaks 20 petaflops BlueGene/Q super". The Register. 2010-11-22. Retrieved 2010-11-25.
  43. ^ Feldman, Michael (2009-02-03). "Lawrence Livermore Prepares for 20 Petaflop Blue Gene/Q". HPCwire. Archived from the original on 2009-02-12. Retrieved 2011-03-11.
  44. ^ B Johnston, Donald (2012-06-18). "NNSA's Sequoia supercomputer ranked as world's fastest". Archived from the original on 2014-09-02. Retrieved 2012-06-23.
  45. ^ "TOP500 Press Release". Archived from the original on June 24, 2012.
  46. ^ "MIRA: World's fastest supercomputer - Argonne Leadership Computing Facility". Alcf.anl.gov. Retrieved 13 October 2017.
  47. ^ "Mira - Argonne Leadership Computing Facility". Alcf.anl.gov. Retrieved 13 October 2017.
  48. ^ "Vulcan—decommissioned". hpc.llnl.gov. Retrieved 10 April 2019.
  49. ^ "HPC Innovation Center". hpcinnovationcenter.llnl.gov. Retrieved 13 October 2017.
  50. ^ "Lawrence Livermore's Vulcan brings 5 petaflops computing power to collaborations with industry and academia to advance science and technology". Llnl.gov. 11 June 2013. Archived from the original on 9 December 2013. Retrieved 13 October 2017.
  51. ^ "Ibm Fermi". SCAI. Archived from the original on 2013-10-30. Retrieved 2013-05-13.
  52. ^ "DiRAC BlueGene/Q". epcc.ed.ac.uk.
  53. ^ "Rensselaer at Petascale: AMOS Among the World's Fastest and Most Powerful Supercomputers". News.rpi.edu. Retrieved 13 October 2017.
  54. ^ Michael Mullaney. "AMOS Ranks 1st Among Supercomputers at Private American Universities". News.rpi.edu. Retrieved 13 October 2017.
  55. ^ "World's greenest supercomputer comes to Melbourne". The Melbourne Engineer. themelbourneengineer.eng.unimelb.edu.au. 16 February 2012. Archived from the original on 2 October 2017. Retrieved 13 October 2017.
  56. ^ "Melbourne Bioinformatics - For all researchers and students based in Melbourne's biomedical and bioscience research precinct". Melbourne Bioinformatics. Retrieved 13 October 2017.
  57. ^ "Access to High-end Systems - Melbourne Bioinformatics". Vlsci.org.au. Retrieved 13 October 2017.
  58. ^ "University of Rochester Inaugurates New Era of Health Care Research". Rochester.edu. Retrieved 13 October 2017.
  59. ^ "Resources - Center for Integrated Research Computing". Circ.rochester.edu. Retrieved 13 October 2017.
  60. ^ "EPFL BlueGene/L Homepage". Archived from the original on 2007-12-10. Retrieved 2021-03-10.
  61. ^ Utilisateur, Super. "A propos" [About]. Cadmos.org. Archived from the original on 10 January 2016. Retrieved 13 October 2017.
  62. ^ "A*STAR Computational Resource Centre". Acrc.a-star.edu.sg. Archived from the original on 2016-12-20. Retrieved 2016-08-24.
  63. ^ S. Habib; V. Morozov; H. Finkel; A. Pope; K. Heitmann; K. Kumaran; T. Peterka; J. Insley; D. Daniel; P. Fasel; N. Frontiere & Z. Lukic (2012). "The Universe at Extreme Scale: Multi-Petaflop Sky Simulation on the BG/Q". arXiv:1211.4864 [cs.DC].
  64. ^ "Cardioid Cardiac Modeling Project". Researcher.watson.ibm.com. 25 July 2016. Retrieved 13 October 2017.
  65. ^ "Venturing into the Heart of High-Performance Computing Simulations". Str.llnl.gov. Archived from the original on 14 February 2013. Retrieved 13 October 2017.
  66. ^ Rossinelli, Diego; Hejazialhosseini, Babak; Hadjidoukas, Panagiotis; Bekas, Costas; Curioni, Alessandro; Bertsch, Adam; Futral, Scott; Schmidt, Steffen J.; Adams, Nikolaus A.; Koumoutsakos, Petros (17 November 2013). "11 PFLOP/S simulations of cloud cavitation collapse". Proceedings of the International Conference on High Performance Computing, Networking, Storage and Analysis. SC '13. pp. 1–13. doi:10.1145/2503210.2504565. ISBN 9781450323789. S2CID 12651650.

External links

  • IBM Research: Blue Gene
Records
World's most powerful supercomputer (Blue Gene/L), November 2004 – November 2007
Preceded by: NEC Earth Simulator (35.86 teraflops)
Succeeded by: IBM Roadrunner (1.026 petaflops)
blue, gene, this, article, about, supercomputer, musician, blue, gene, tyranny, albums, blue, gene, gene, ammons, album, blue, gene, gene, pitney, album, this, article, need, rewritten, comply, with, wikipedia, quality, standards, help, talk, page, contain, su. This article is about the supercomputer For the musician see Blue Gene Tyranny For the albums see Blue Gene Gene Ammons album and Blue Gene Gene Pitney album This article may need to be rewritten to comply with Wikipedia s quality standards You can help The talk page may contain suggestions December 2011 Blue Gene was an IBM project aimed at designing supercomputers that can reach operating speeds in the petaFLOPS PFLOPS range with low power consumption IBM Blue GeneA Blue Gene P supercomputer at Argonne National LaboratoryDeveloperIBMTypeSupercomputer platformRelease dateBG L Feb 1999 Feb 1999 BG P June 2007BG Q Nov 2011Discontinued2015 2015 CPUBG L PowerPC 440BG P PowerPC 450BG Q PowerPC A2PredecessorIBM RS 6000 SP QCDOCSuccessorIBM PERCSHierarchy of Blue Gene processing unitsThe project created three generations of supercomputers Blue Gene L Blue Gene P and Blue Gene Q During their deployment Blue Gene systems often led the TOP500 1 and Green500 2 rankings of the most powerful and most power efficient supercomputers respectively Blue Gene systems have also consistently scored top positions in the Graph500 list 3 The project was awarded the 2009 National Medal of Technology and Innovation 4 As of 2015 IBM appears to have ended development of the Blue Gene family though no formal announcement has been made 5 IBM has since focused its supercomputer efforts on the OpenPower platform using accelerators such as FPGAs and GPUs to address the diminishing returns of Moore s law 6 Contents 1 History 1 1 The name 1 2 Major features 1 3 Architecture 2 Blue Gene P 2 1 Design 2 2 Installations 2 3 Applications 3 Blue Gene Q 3 1 Design 3 2 Performance 3 3 Installations 3 4 Applications 4 See also 5 References 6 External linksHistory editIn December 1999 IBM announced a US 100 million research initiative for a five year effort to build a massively parallel computer to be applied to the study of biomolecular phenomena such as protein folding 7 The project had two main goals to advance our understanding of the mechanisms behind protein folding via large scale simulation and to explore novel ideas in massively parallel machine architecture and software Major areas of investigation included how to use this novel platform to effectively meet its scientific goals how to make such massively parallel machines more usable and how to achieve performance targets at a reasonable cost through novel machine architectures The initial design for Blue Gene was based on an early version of the Cyclops64 architecture designed by Monty Denneau The initial research and development work was pursued at IBM T J Watson Research Center and led by William R Pulleyblank 8 At IBM Alan Gara started working on an extension of the QCDOC architecture into a more general purpose supercomputer The 4D nearest neighbor interconnection network was replaced by a network supporting routing of messages from any node to any other and a parallel I O subsystem was added DOE started funding the development of this system and it became known as Blue Gene L L for Light development of the original Blue Gene system continued under the name Blue Gene C C for Cyclops and later Cyclops64 In November 2004 a 16 rack system with each rack holding 1 024 compute nodes achieved first place in the 
TOP500 list with a Linpack performance of 70 72 TFLOPS 1 It thereby overtook NEC s Earth Simulator which had held the title of the fastest computer in the world since 2002 From 2004 through 2007 the Blue Gene L installation at LLNL 9 gradually expanded to 104 racks achieving 478 TFLOPS Linpack and 596 TFLOPS peak The LLNL BlueGene L installation held the first position in the TOP500 list for 3 5 years until in June 2008 it was overtaken by IBM s Cell based Roadrunner system at Los Alamos National Laboratory which was the first system to surpass the 1 PetaFLOPS mark The system was built in Rochester MN IBM plant While the LLNL installation was the largest Blue Gene L installation many smaller installations followed In November 2006 there were 27 computers on the TOP500 list using the Blue Gene L architecture All these computers were listed as having an architecture of eServer Blue Gene Solution For example three racks of Blue Gene L were housed at the San Diego Supercomputer Center While the TOP500 measures performance on a single benchmark application Linpack Blue Gene L also set records for performance on a wider set of applications Blue Gene L was the first supercomputer ever to run over 100 TFLOPS sustained on a real world application namely a three dimensional molecular dynamics code ddcMD simulating solidification nucleation and growth processes of molten metal under high pressure and temperature conditions This achievement won the 2005 Gordon Bell Prize In June 2006 NNSA and IBM announced that Blue Gene L achieved 207 3 TFLOPS on a quantum chemical application Qbox 10 At Supercomputing 2006 11 Blue Gene L was awarded the winning prize in all HPC Challenge Classes of awards 12 In 2007 a team from the IBM Almaden Research Center and the University of Nevada ran an artificial neural network almost half as complex as the brain of a mouse for the equivalent of a second the network was run at 1 10 of normal speed for 10 seconds 13 The name edit The name Blue Gene comes from what it was originally designed to do help biologists understand the processes of protein folding and gene development 14 Blue is a traditional moniker that IBM uses for many of its products and the company itself The original Blue Gene design was renamed Blue Gene C and eventually Cyclops64 The L in Blue Gene L comes from Light as that design s original name was Blue Light The P version was designed to be a petascale design Q is just the letter after P There is no Blue Gene R 15 Major features edit The Blue Gene L supercomputer was unique in the following aspects 16 Trading the speed of processors for lower power consumption Blue Gene L used low frequency and low power embedded PowerPC cores with floating point accelerators While the performance of each chip was relatively low the system could achieve better power efficiency for applications that could use large numbers of nodes Dual processors per node with two working modes co processor mode where one processor handles computation and the other handles communication and virtual node mode where both processors are available to run user code but the processors share both the computation and the communication load System on a chip design Components were embedded on a single chip for each node with the exception of 512 MB external DRAM A large number of nodes scalable in increments of 1024 up to at least 65 536 Three dimensional torus interconnect with auxiliary networks for global communications broadcast and reductions I O and management Lightweight OS per node for 
minimum system overhead system noise Architecture edit The Blue Gene L architecture was an evolution of the QCDSP and QCDOC architectures Each Blue Gene L Compute or I O node was a single ASIC with associated DRAM memory chips The ASIC integrated two 700 MHz PowerPC 440 embedded processors each with a double pipeline double precision Floating Point Unit FPU a cache sub system with built in DRAM controller and the logic to support multiple communication sub systems The dual FPUs gave each Blue Gene L node a theoretical peak performance of 5 6 GFLOPS gigaFLOPS The two CPUs were not cache coherent with one another Compute nodes were packaged two per compute card with 16 compute cards plus up to 2 I O nodes per node board There were 32 node boards per cabinet rack 17 By the integration of all essential sub systems on a single chip and the use of low power logic each Compute or I O node dissipated low power about 17 watts including DRAMs This allowed aggressive packaging of up to 1024 compute nodes plus additional I O nodes in a standard 19 inch rack within reasonable limits of electrical power supply and air cooling The performance metrics in terms of FLOPS per watt FLOPS per m2 of floorspace and FLOPS per unit cost allowed scaling up to very high performance With so many nodes component failures were inevitable The system was able to electrically isolate faulty components down to a granularity of half a rack 512 compute nodes to allow the machine to continue to run Each Blue Gene L node was attached to three parallel communications networks a 3D toroidal network for peer to peer communication between compute nodes a collective network for collective communication broadcasts and reduce operations and a global interrupt network for fast barriers The I O nodes which run the Linux operating system provided communication to storage and external hosts via an Ethernet network The I O nodes handled filesystem operations on behalf of the compute nodes Finally a separate and private Ethernet network provided access to any node for configuration booting and diagnostics To allow multiple programs to run concurrently a Blue Gene L system could be partitioned into electronically isolated sets of nodes The number of nodes in a partition had to be a positive integer power of 2 with at least 25 32 nodes To run a program on Blue Gene L a partition of the computer was first to be reserved The program was then loaded and run on all the nodes within the partition and no other program could access nodes within the partition while it was in use Upon completion the partition nodes were released for future programs to use Blue Gene L compute nodes used a minimal operating system supporting a single user program Only a subset of POSIX calls was supported and only one process could run at a time on node in co processor mode or one process per CPU in virtual mode Programmers needed to implement green threads in order to simulate local concurrency Application development was usually performed in C C or Fortran using MPI for communication However some scripting languages such as Ruby 18 and Python 19 have been ported to the compute nodes IBM published BlueMatter the application developed to exercise Blue Gene L as open source here 20 This serves to document how the torus and collective interfaces were used by applications and may serve as a base for others to exercise the current generation of supercomputers Blue Gene P edit nbsp A Blue Gene P node card nbsp A schematic overview of a Blue Gene P supercomputerIn June 2007 
IBM unveiled Blue Gene P the second generation of the Blue Gene series of supercomputers and designed through a collaboration that included IBM LLNL and Argonne National Laboratory s Leadership Computing Facility 21 Design edit The design of Blue Gene P is a technology evolution from Blue Gene L Each Blue Gene P Compute chip contains four PowerPC 450 processor cores running at 850 MHz The cores are cache coherent and the chip can operate as a 4 way symmetric multiprocessor SMP The memory subsystem on the chip consists of small private L2 caches a central shared 8 MB L3 cache and dual DDR2 memory controllers The chip also integrates the logic for node to node communication using the same network topologies as Blue Gene L but at more than twice the bandwidth A compute card contains a Blue Gene P chip with 2 or 4 GB DRAM comprising a compute node A single compute node has a peak performance of 13 6 GFLOPS 32 Compute cards are plugged into an air cooled node board A rack contains 32 node boards thus 1024 nodes 4096 processor cores 22 By using many small low power densely packaged chips Blue Gene P exceeded the power efficiency of other supercomputers of its generation and at 371 MFLOPS W Blue Gene P installations ranked at or near the top of the Green500 lists in 2007 2008 2 Installations edit The following is an incomplete list of Blue Gene P installations Per November 2009 the TOP500 list contained 15 Blue Gene P installations of 2 racks 2048 nodes 8192 processor cores 23 86 TFLOPS Linpack and larger 1 On November 12 2007 the first Blue Gene P installation JUGENE with 16 racks 16 384 nodes 65 536 processors was running at Forschungszentrum Julich in Germany with a performance of 167 TFLOPS 23 When inaugurated it was the fastest supercomputer in Europe and the sixth fastest in the world In 2009 JUGENE was upgraded to 72 racks 73 728 nodes 294 912 processor cores with 144 terabytes of memory and 6 petabytes of storage and achieved a peak performance of 1 PetaFLOPS This configuration incorporated new air to water heat exchangers between the racks reducing the cooling cost substantially 24 JUGENE was shut down in July 2012 and replaced by the Blue Gene Q system JUQUEEN The 40 rack 40960 nodes 163840 processor cores Intrepid system at Argonne National Laboratory was ranked 3 on the June 2008 Top 500 list 25 The Intrepid system is one of the major resources of the INCITE program in which processor hours are awarded to grand challenge science and engineering projects in a peer reviewed competition Lawrence Livermore National Laboratory installed a 36 rack Blue Gene P installation Dawn in 2009 The King Abdullah University of Science and Technology KAUST installed a 16 rack Blue Gene P installation Shaheen in 2009 In 2012 a 6 rack Blue Gene P was installed at Rice University and will be jointly administered with the University of Sao Paulo 26 A 2 5 rack Blue Gene P system is the central processor for the Low Frequency Array for Radio astronomy LOFAR project in the Netherlands and surrounding European countries This application uses the streaming data capabilities of the machine A 2 rack Blue Gene P was installed in September 2008 in Sofia Bulgaria and is operated by the Bulgarian Academy of Sciences and Sofia University 27 In 2010 a 2 rack 8192 core Blue Gene P was installed at the University of Melbourne for the Victorian Life Sciences Computation Initiative 28 In 2011 a 2 rack Blue Gene P was installed at University of Canterbury in Christchurch New Zealand In 2012 a 2 rack Blue Gene P was 
installed at Rutgers University in Piscataway New Jersey It was dubbed Excalibur as an homage to the Rutgers mascot the Scarlet Knight 29 In 2008 a 1 rack 1024 nodes Blue Gene P with 180 TB of storage was installed at the University of Rochester in Rochester New York 30 The first Blue Gene P in the ASEAN region was installed in 2010 at the Universiti of Brunei Darussalam s research centre the UBD IBM Centre The installation has prompted research collaboration between the university and IBM research on climate modeling that will investigate the impact of climate change on flood forecasting crop yields renewable energy and the health of rainforests in the region among others 31 In 2013 a 1 rack Blue Gene P was donated to the Department of Science and Technology for weather forecasts disaster management precision agriculture and health it is housed in the National Computer Center Diliman Quezon City under the auspices of Philippine Genome Center PGC Core Facility for Bioinformatics CFB at UP Diliman Quezon City 32 Applications edit Veselin Topalov the challenger to the World Chess Champion title in 2010 confirmed in an interview that he had used a Blue Gene P supercomputer during his preparation for the match 33 The Blue Gene P computer has been used to simulate approximately one percent of a human cerebral cortex containing 1 6 billion neurons with approximately 9 trillion connections 34 The IBM Kittyhawk project team has ported Linux to the compute nodes and demonstrated generic Web 2 0 workloads running at scale on a Blue Gene P Their paper published in the ACM Operating Systems Review describes a kernel driver that tunnels Ethernet over the tree network which results in all to all TCP IP connectivity 35 36 Running standard Linux software like MySQL their performance results on SpecJBB rank among the highest on record citation needed In 2011 a Rutgers University IBM University of Texas team linked the KAUST Shaheen installation together with a Blue Gene P installation at the IBM Watson Research Center into a federated high performance computing cloud winning the IEEE SCALE 2011 challenge with an oil reservoir optimization application 37 Blue Gene Q edit nbsp The IBM Blue Gene Q installed at Argonne National Laboratory near Chicago IllinoisThe third supercomputer design in the Blue Gene series Blue Gene Q has a peak performance of 20 Petaflops 38 reaching LINPACK benchmarks performance of 17 Petaflops Blue Gene Q continues to expand and enhance the Blue Gene L and P architectures Design edit The Blue Gene Q Compute chip is an 18 core chip The 64 bit A2 processor cores are 4 way simultaneously multithreaded and run at 1 6 GHz Each processor core has a SIMD quad vector double precision floating point unit IBM QPX 16 Processor cores are used for computing and a 17th core for operating system assist functions such as interrupts asynchronous I O MPI pacing and RAS The 18th core is used as a redundant spare used to increase manufacturing yield The spared out core is shut down in functional operation The processor cores are linked by a crossbar switch to a 32 MB eDRAM L2 cache operating at half core speed The L2 cache is multi versioned supporting transactional memory and speculative execution and has hardware support for atomic operations 39 L2 cache misses are handled by two built in DDR3 memory controllers running at 1 33 GHz The chip also integrates logic for chip to chip communications in a 5D torus configuration with 2GB s chip to chip links The Blue Gene Q chip is manufactured on IBM s 
copper SOI process at 45 nm It delivers a peak performance of 204 8 GFLOPS at 1 6 GHz drawing about 55 watts The chip measures 19 19 mm 359 5 mm and comprises 1 47 billion transistors The chip is mounted on a compute card along with 16 GB DDR3 DRAM i e 1 GB for each user processor core 40 A Q32 41 compute drawer contains 32 compute cards each water cooled 42 A midplane crate contains 16 Q32 compute drawers for a total of 512 compute nodes electrically interconnected in a 5D torus configuration 4x4x4x4x2 Beyond the midplane level all connections are optical Racks have two midplanes thus 32 compute drawers for a total of 1024 compute nodes 16 384 user cores and 16 TB RAM 42 Separate I O drawers placed at the top of a rack or in a separate rack are air cooled and contain 8 compute cards and 8 PCIe expansion slots for InfiniBand or 10 Gigabit Ethernet networking 42 Performance edit At the time of the Blue Gene Q system announcement in November 2011 an initial 4 rack Blue Gene Q system 4096 nodes 65536 user processor cores achieved 17 in the TOP500 list 1 with 677 1 TeraFLOPS Linpack outperforming the original 2007 104 rack BlueGene L installation described above The same 4 rack system achieved the top position in the Graph500 list 3 with over 250 GTEPS giga traversed edges per second Blue Gene Q systems also topped the Green500 list of most energy efficient supercomputers with up to 2 1 GFLOPS W 2 In June 2012 Blue Gene Q installations took the top positions in all three lists TOP500 1 Graph500 3 and Green500 2 Installations edit The following is an incomplete list of Blue Gene Q installations Per June 2012 the TOP500 list contained 20 Blue Gene Q installations of 1 2 rack 512 nodes 8192 processor cores 86 35 TFLOPS Linpack and larger 1 At a size independent power efficiency of about 2 1 GFLOPS W all these systems also populated the top of the June 2012 Green 500 list 2 A Blue Gene Q system called Sequoia was delivered to the Lawrence Livermore National Laboratory LLNL beginning in 2011 and was fully deployed in June 2012 It is part of the Advanced Simulation and Computing Program running nuclear simulations and advanced scientific research It consists of 96 racks comprising 98 304 compute nodes with 1 6 million processor cores and 1 6 PB of memory covering an area of about 3 000 square feet 280 m2 43 In June 2012 the system was ranked as the world s fastest supercomputer 44 45 at 20 1 PFLOPS peak 16 32 PFLOPS sustained Linpack drawing up to 7 9 megawatts of power 1 In June 2013 its performance is listed at 17 17 PFLOPS sustained Linpack 1 A 10 PFLOPS peak Blue Gene Q system called Mira was installed at Argonne National Laboratory in the Argonne Leadership Computing Facility in 2012 It consist of 48 racks 49 152 compute nodes with 70 PB of disk storage 470 GB s I O bandwidth 46 47 JUQUEEN at the Forschungzentrum Julich is a 28 rack Blue Gene Q system and was from June 2013 to November 2015 the highest ranked machine in Europe in the Top500 1 Vulcan at Lawrence Livermore National Laboratory LLNL is a 24 rack 5 PFLOPS peak Blue Gene Q system that was commissioned in 2012 and decommissioned in 2019 48 Vulcan served Lab industry projects through Livermore s High Performance Computing HPC Innovation Center 49 as well as academic collaborations in support of DOE National Nuclear Security Administration NNSA missions 50 Fermi at the CINECA Supercomputing facility Bologna Italy 51 is a 10 rack 2 PFLOPS peak Blue Gene Q system As part of DiRAC the EPCC hosts a 6 rack 6144 node Blue Gene Q system at the 
University of Edinburgh 52 A five rack Blue Gene Q system with additional compute hardware called AMOS was installed at Rensselaer Polytechnic Institute in 2013 53 The system was rated at 1048 6 teraflops the most powerful supercomputer at any private university and third most powerful supercomputer among all universities in 2014 54 An 838 TFLOPS peak Blue Gene Q system called Avoca was installed at the Victorian Life Sciences Computation Initiative in June 2012 55 This system is part of a collaboration between IBM and VLSCI with the aims of improving diagnostics finding new drug targets refining treatments and furthering our understanding of diseases 56 The system consists of 4 racks with 350 TB of storage 65 536 cores 64 TB RAM 57 A 209 TFLOPS peak Blue Gene Q system was installed at the University of Rochester in July 2012 58 This system is part of the Health Sciences Center for Computational Innovation Archived 2012 10 19 at the Wayback Machine which is dedicated to the application of high performance computing to research programs in the health sciences The system consists of a single rack 1 024 compute nodes with 400 TB of high performance storage 59 A 209 TFLOPS peak 172 TFLOPS LINPACK Blue Gene Q system called Lemanicus was installed at the EPFL in March 2013 60 This system belongs to the Center for Advanced Modeling Science CADMOS 61 which is a collaboration between the three main research institutions on the shore of the Lake Geneva in the French speaking part of Switzerland University of Lausanne University of Geneva and EPFL The system consists of a single rack 1 024 compute nodes with 2 1 PB of IBM GPFS GSS storage A half rack Blue Gene Q system with about 100 TFLOPS peak called Cumulus was installed at A STAR Computational Resource Centre Singapore at early 2011 62 Applications edit Record breaking science applications have been run on the BG Q the first to cross 10 petaflops of sustained performance The cosmology simulation framework HACC achieved almost 14 petaflops with a 3 6 trillion particle benchmark run 63 while the Cardioid code 64 65 which models the electrophysiology of the human heart achieved nearly 12 petaflops with a near real time simulation both on Sequoia A fully compressible flow solver has also achieved 14 4 PFLOP s originally 11 PFLOP s on Sequoia 72 of the machine s nominal peak performance 66 See also editCNK operating system INK operating system Deep Blue chess computer References edit a b c d e f g h i November 2004 TOP500 Supercomputer Sites Top500 org Retrieved 13 December 2019 a b c d e Green500 TOP500 Supercomputer Sites Green500 org Archived from the original on 26 August 2016 Retrieved 13 October 2017 a b c The Graph500 List Archived from the original on 2011 12 27 Harris Mark September 18 2009 Obama honours IBM supercomputer Techradar com Retrieved 2009 09 18 Supercomputing Strategy Shifts in a World Without BlueGene Nextplatform com 14 April 2015 Retrieved 13 October 2017 IBM to Build DoE s Next Gen Coral Supercomputers EE Times EETimes Archived from the original on 30 April 2017 Retrieved 13 October 2017 Blue Gene A Vision for Protein Science using a Petaflop Supercomputer PDF IBM Systems Journal 40 2 2017 10 23 A Talk with the Brain behind Blue Gene BusinessWeek November 6 2001 archived from the original on December 11 2014 BlueGene L Archived from the original on 2011 07 18 Retrieved 2007 10 05 hpcwire com Archived from the original on September 28 2007 SC06 sc06 supercomputing org Retrieved 13 October 2017 HPC Challenge Award Competition 
Archived from the original on 2006 12 11 Retrieved 2006 12 03 Mouse brain simulated on computer BBC News April 27 2007 Archived from the original on 2007 05 25 IBM100 Blue Gene 03 ibm com 7 March 2012 Retrieved 13 October 2017 Kunkel Julian M Ludwig Thomas Meuer Hans 12 June 2013 Supercomputing 28th International Supercomputing Conference ISC 2013 Leipzig Germany June 16 20 2013 Proceedings Springer ISBN 9783642387500 Retrieved 13 October 2017 via Google Books Blue Gene IBM Journal of Research and Development 49 2 3 2005 Kissel Lynn BlueGene L Configuration asc llnl gov Archived from the original on 17 February 2013 Retrieved 13 October 2017 Compute Node Ruby for Bluegene L www ece iastate edu Archived from the original on February 11 2009 William Scullin March 12 2011 Python for High Performance Computing Atlanta GA Blue Matter source code retrieved February 28 2020 IBM Triples Performance of World s Fastest Most Energy Efficient Supercomputer 2007 06 27 Retrieved 2011 12 24 Overview of the IBM Blue Gene P project IBM Journal of Research and Development 52 199 220 Jan 2008 doi 10 1147 rd 521 0199 Supercomputing Julich Amongst World Leaders Again IDG News Service 2007 11 12 IBM Press room 2009 02 10 New IBM Petaflop Supercomputer at German Forschungszentrum Juelich to Be Europe s Most Powerful 03 ibm com 2009 02 10 Retrieved 2011 03 11 Argonne s Supercomputer Named World s Fastest for Open Science Third Overall Mcs anl gov Archived from the original on 8 February 2009 Retrieved 13 October 2017 Rice University IBM partner to bring first Blue Gene supercomputer to Texas news rice edu Archived from the original on 2012 04 05 Retrieved 2012 04 01 Veche si imame i superkompyutr Archived 2009 12 23 at the Wayback Machine Dir bg 9 September 2008 IBM Press room 2010 02 11 IBM to Collaborate with Leading Australian Institutions to Push the Boundaries of Medical Research Australia 03 ibm com 2010 02 11 Retrieved 2011 03 11 Rutgers Gets Big Data Weapon in IBM Supercomputer Hardware Archived from the original on 2013 03 06 Retrieved 2013 09 07 University of Rochester and IBM Expand Partnership in Pursuit of New Frontiers in Health University of Rochester Medical Center May 11 2012 Archived from the original on 2012 05 11 IBM and Universiti Brunei Darussalam to Collaborate on Climate Modeling Research IBM News Room 2010 10 13 Retrieved 18 October 2012 Ronda Rainier Allan DOST s supercomputer for scientists now operational Philstar com Retrieved 13 October 2017 Topalov training with super computer Blue Gene P Players chessdo com Archived from the original on 19 May 2013 Retrieved 13 October 2017 Kaku Michio Physics of the Future New York Doubleday 2011 91 Project Kittyhawk A Global Scale Computer Research ibm com Retrieved 13 October 2017 Appavoo Jonathan Uhlig Volkmar Waterland Amos Project Kittyhawk Building a Global Scale Computer PDF Yorktown Heights NY IBM T J Watson Research Center Archived from the original on 2008 10 31 Retrieved 2018 03 13 a href Template Cite web html title Template Cite web cite web a CS1 maint bot original URL status unknown link Rutgers led Experts Assemble Globe Spanning Supercomputer Cloud News rutgers edu 2011 07 06 Archived from the original on 2011 11 10 Retrieved 2011 12 24 IBM announces 20 petaflops supercomputer Kurzweil 18 November 2011 Retrieved 13 November 2012 IBM has announced the Blue Gene Q supercomputer with peak performance of 20 petaflops Memory Speculation of the Blue Gene Q Compute Chip Retrieved 2011 12 23 The Blue Gene Q Compute chip PDF Archived from 
External links

IBM Research: Blue Gene
Next-generation supercomputers: Blue Gene/P overview (PDF)

Records

World's most powerful supercomputer: Blue Gene/L, November 2004 – November 2007
Preceded by: NEC Earth Simulator (35.86 TFLOPS)
Succeeded by: IBM Roadrunner (1.026 PFLOPS)
