
Pico (supercomputer)

For other uses, see Pico (disambiguation).

PICO is an Intel cluster installed in the data center of Cineca. PICO is intended to enable new "big data" classes of applications, related to the management and processing of large quantities of data coming from both simulations and experiments. The cluster is built on Intel NeXtScale servers, designed to optimize density and performance, and drives a large data repository shared among all the HPC systems at Cineca.

Pico
  • Active: operational since 2015
  • Sponsors: Ministry of Education, Universities and Research (Italy)
  • Operators: the members of the Consortium[1]
  • Location: Cineca, Casalecchio di Reno, Italy
  • Architecture: IBM NeXtScale Linux InfiniBand cluster
  • Compute/login nodes: 66 × Intel Xeon E5-2670 v2 @ 2.5 GHz, 20 cores, 128 GB RAM
  • Visualization nodes: 2 × Intel Xeon E5-2670 v2 @ 2.5 GHz, 20 cores, 128 GB RAM, 2 Nvidia K40 GPUs
  • Big Mem nodes: 2 × Intel Xeon E5-2650 v2 @ 2.6 GHz, 16 cores, 512 GB RAM, 1 Nvidia K20 GPU
  • BigInsight nodes: 4 × Intel Xeon E5-2650 v2 @ 2.6 GHz, 16 cores, 64 GB RAM, 32 TB of local disk
  • Memory: 128 GB per compute node (2 viz nodes with 512 GB)
  • Storage: high-throughput disks (based on GSS technology) totalling about 4 PB, connected to a large-capacity tape library with a current capacity of 12 PB (expandable to 16 PB)
  • Purpose: big data
  • Website: www.hpc.cineca.it/content/pico-user-guide

History

Lights and cables of the Pico supercomputer at Cineca

The development of Pico was sponsored by the Ministry of Education, Universities and Research (Italy) in 2015.

Specifications

PICO is an Intel cluster made of 74 nodes of different types, devoted to different purposes but sharing the common task of data analytics and visualization on large amounts of data.

  • Login nodes: 2 × 20-core nodes with 128 GB of memory each. Both nodes are reachable at the address login.pico.cineca.it.
  • Compute nodes: 51 × 20-core nodes with 128 GB of memory each. A standard scientific computing environment is defined here, with pre-installed applications for visualization, data analysis, post-processing and bioinformatics. The environment is accessed via ssh, and large analyses are submitted through a PBS batch environment (see the example script after this list).
  • Big memory nodes: two nodes, big1 and big2, equipped with 32 cores and 0.5 TB of RAM and with 40 cores and 1 TB of RAM respectively, are available for specific activities that require a remarkable amount of memory. Both are HP DL980 servers.
  • The big1 node is equipped with 8 quad-core Intel Xeon E7520 processors clocked at 1.87 GHz, 512 GB of RAM, and an Nvidia Quadro 6000 graphics card.
  • The big2 node is equipped with 4 ten-core Intel Xeon E7-2860 processors clocked at 2.26 GHz and 1024 GB of RAM.
  • Viz nodes: 2 × (20-core, 128 GB of memory, 2 × Nvidia K40 GPU) + 2 × (16-core, 512 GB of memory, 1 × Nvidia K20 GPU). A remote visualization environment is defined on this partition, taking advantage of the large memory and GPU acceleration.
  • BigInsights nodes: 4 × 16-core nodes with 64 GB of memory and 32 TB of local disk each, plus 1 × 20-core node with 128 GB of memory. On these nodes an IBM solution for Hadoop applications, InfoSphere BigInsights, is available for special projects (a Hadoop sketch follows the PBS example below).
  • Other nodes: 13 × 20-core nodes with 128 GB of memory each, used for internal activities in the domains of cloud computing, large scientific databases, and Hadoop for science.
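The PBS workflow on the compute nodes can be illustrated with a short job script. The following is a minimal sketch, not taken from the PICO user guide: the job name, resource request, module name and analysis script are assumptions for illustration only.

    #!/bin/bash
    # Minimal PBS job script (illustrative; names and limits are assumptions)
    #PBS -N postprocessing
    #PBS -l select=1:ncpus=20:mem=120gb   # one 20-core, 128 GB compute node
    #PBS -l walltime=02:00:00

    cd "$PBS_O_WORKDIR"                 # run from the submission directory
    module load python                  # hypothetical module name
    python postprocess.py input.dat     # hypothetical analysis script

Such a script would be submitted from a login node, for example:

    ssh username@login.pico.cineca.it
    qsub job.pbs        # queue the job; prints the job id
    qstat -u $USER      # check the job's status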
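InfoSphere BigInsights builds on Apache Hadoop, so under that assumption a job on the BigInsights nodes could be driven with the standard Hadoop command-line tools. The user name, paths and file names below are hypothetical; this only sketches the general shape of a Hadoop run.

    # Stage input data into the Hadoop distributed file system
    hadoop fs -mkdir -p /user/username/input
    hadoop fs -put local_data.txt /user/username/input

    # Run the stock word-count example shipped with Hadoop distributions
    hadoop jar hadoop-mapreduce-examples.jar wordcount \
        /user/username/input /user/username/output

    # Inspect the reducer output
    hadoop fs -cat /user/username/output/part-r-00000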

See also

  • Supercomputing in Europe

References

  1. ^ "Consortium of universities". Retrieved 9 March 2016.

Articles about Pico and its network

  • "Cineca: dal supercomputing alla gestione dei big data" ("Cineca: from supercomputing to big data management"), by Luca De Biase
  • "L'utilizzo dei Big Data in Istat: stato attuale e prospettive" ("The use of Big Data at Istat: current state and outlook"), presentation at ForumPA by Giulio Barcaroli, ISTAT
  • Symposium on HPC & Data-Intensive Applications in Earth Science, Abdus Salam International Centre for Theoretical Physics, presentation by Carlo Cavazzoni and Giuseppe Fiameni, Cineca
