
AI accelerator

An AI accelerator, deep learning processor, or neural processing unit is a class of specialized hardware accelerator[1] or computer system[2][3] designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks.[4] They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFET transistors.[5]

AI accelerators are used in devices ranging from mobile phones, with neural processing units (NPUs) in Apple iPhones[6] and Huawei cellphones,[7] and personal computers such as Apple silicon Macs, to cloud computing servers with tensor processing units (TPUs) in the Google Cloud Platform.[8] A number of vendor-specific terms exist for devices in this category, and it is an emerging technology without a dominant design.

Graphics processing units designed by companies such as Nvidia and AMD often include AI-specific hardware, and are commonly used as AI accelerators, both for training and inference.[9]

History

Computer systems have frequently complemented the CPU with special-purpose accelerators for specialized tasks, known as coprocessors. Notable application-specific hardware units include video cards for graphics, sound cards, graphics processing units and digital signal processors. As deep learning and artificial intelligence workloads rose in prominence in the 2010s, specialized hardware units were developed or adapted from existing products to accelerate these tasks.

Early attempts

The first attempts, such as Intel's ETANN 80170NX, incorporated analog circuits to compute neural functions.[10]

Later all-digital chips like the Nestor/Intel Ni1000 followed. As early as 1993, digital signal processors were used as neural network accelerators to accelerate optical character recognition software.[11]

By 1988, Wei Zhang et al. had discussed fast optical implementations of convolutional neural networks for alphabet recognition.[12][13]

In the 1990s, there were also attempts to create parallel high-throughput systems for workstations aimed at various applications, including neural network simulations.[14][15]

One presentation of such a past attempt at neural-net accelerators noted the similarity to the modern SLI GPGPU setup and argued that general-purpose vector accelerators (in relation to the RISC-V Hwacha project) are the way forward, on the grounds that neural networks amount to dense and sparse matrices, one of several recurring algorithm classes.[16]

FPGA-based accelerators were also first explored in the 1990s for both inference and training.[17][18]

In 2014, Chen et al. proposed DianNao (Chinese for "electric brain")[19] to accelerate deep neural networks in particular. DianNao delivers a peak performance of 452 Gop/s (of key operations in deep neural networks) in a footprint of only 3.02 mm2 and 485 mW. The same group later proposed its successors (DaDianNao,[20] ShiDianNao,[21] PuDianNao[22]), forming the DianNao family.[23]

Smartphones began incorporating AI accelerators starting with the Qualcomm Snapdragon 820 in 2015.[24][25]

Heterogeneous computing

Heterogeneous computing incorporates many specialized processors in a single system, or a single chip, each optimized for a specific type of task. Architectures such as the Cell microprocessor[26] have features significantly overlapping with AI accelerators including: support for packed low precision arithmetic, dataflow architecture, and prioritizing throughput over latency. The Cell microprocessor has been applied to a number of tasks[27][28][29] including AI.[30][31][32]

In the 2000s, CPUs also gained increasingly wide SIMD units, driven by video and gaming workloads, as well as support for packed low-precision data types.[33] Due to the increasing performance of CPUs, they are also used for running AI workloads. CPUs are superior for DNNs with small or medium-scale parallelism, for sparse DNNs, and in low-batch-size scenarios.

Use of GPU

Graphics processing units or GPUs are specialized hardware for the manipulation of images and calculation of local image properties. The mathematical bases of neural networks and image manipulation are similar, embarrassingly parallel tasks involving matrices, which has led GPUs to become increasingly used for machine learning tasks.[34][35]
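
The connection can be made concrete with a minimal sketch (not drawn from any cited design; the sizes and the NumPy implementation are illustrative assumptions): a fully connected layer reduces to one matrix multiply followed by an elementwise nonlinearity, exactly the embarrassingly parallel arithmetic GPUs were built to execute for graphics.

```python
import numpy as np

# Illustrative sketch: a fully connected layer is one matrix multiply (GEMM)
# plus an elementwise nonlinearity -- the embarrassingly parallel pattern
# GPUs already execute for graphics workloads.
rng = np.random.default_rng(0)
batch, in_features, out_features = 64, 1024, 512        # hypothetical sizes

x = rng.standard_normal((batch, in_features)).astype(np.float32)         # activations
W = rng.standard_normal((in_features, out_features)).astype(np.float32)  # weights
b = np.zeros(out_features, dtype=np.float32)                             # biases

y = np.maximum(x @ W + b, 0.0)   # GEMM + bias + ReLU; the GEMM dominates the cost
print(y.shape)                   # (64, 512)
```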

In 2012, Alex Krizhevsky used two GPUs to train a deep learning network, AlexNet,[36] which won the ILSVRC-2012 competition. During the 2010s, GPU manufacturers such as Nvidia added deep-learning-related features in both hardware (e.g., INT8 operators) and software (e.g., the cuDNN library).

GPUs continue to be used in large-scale AI applications. For example, Summit, a supercomputer from IBM for Oak Ridge National Laboratory,[37] contains 27,648 Nvidia Tesla V100 cards, which can be used to accelerate deep learning algorithms.

Over the 2010s, GPUs continued to evolve in a direction that facilitates deep learning, both for training and for inference in devices such as self-driving cars.[38][39] GPU developers such as Nvidia have added interconnect capabilities, such as NVLink, for the kind of dataflow workloads from which AI benefits. As GPUs have been increasingly applied to AI acceleration, GPU manufacturers have incorporated neural-network-specific hardware to further accelerate these tasks.[40][41] Tensor cores are intended to speed up the training of neural networks.[41]
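
The INT8 operators mentioned above can be illustrated with a small numeric sketch. The example below is hedged: the symmetric per-tensor scaling scheme and matrix sizes are assumptions for illustration, not a description of any specific GPU. Activations and weights are quantized to 8-bit integers, products are accumulated in 32-bit integers (as integer tensor-core pipelines do), and the result is rescaled to floating point.

```python
import numpy as np

# Sketch of INT8 inference arithmetic with assumed symmetric per-tensor scales:
# quantize to int8, multiply-accumulate in int32, then rescale to float.
rng = np.random.default_rng(1)
x = rng.standard_normal((32, 256)).astype(np.float32)    # activations
w = rng.standard_normal((256, 128)).astype(np.float32)   # weights

sx = np.abs(x).max() / 127.0                              # activation scale
sw = np.abs(w).max() / 127.0                              # weight scale
xq = np.clip(np.round(x / sx), -127, 127).astype(np.int8)
wq = np.clip(np.round(w / sw), -127, 127).astype(np.int8)

acc = xq.astype(np.int32) @ wq.astype(np.int32)           # int32 accumulation
y = acc.astype(np.float32) * (sx * sw)                    # dequantize

print(float(np.abs(y - x @ w).max()))                     # small quantization error
```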

Use of FPGAs

Deep learning frameworks are still evolving, making it hard to design custom hardware. Reconfigurable devices such as field-programmable gate arrays (FPGA) make it easier to evolve hardware, frameworks, and software alongside each other.[42][17][18][43]

Microsoft has used FPGA chips to accelerate inference for real-time deep learning services.[44]

Emergence of dedicated AI accelerator ASICs

While GPUs and FPGAs perform far better than CPUs for AI-related tasks, a factor of up to 10 in efficiency[45][46] may be gained with a more specific design, via an application-specific integrated circuit (ASIC).[citation needed] These accelerators employ strategies such as optimized memory use[citation needed] and the use of lower precision arithmetic to accelerate calculation and increase throughput of computation.[47][48] Some low-precision floating-point formats used for AI acceleration are half-precision and the bfloat16 floating-point format.[49][50][51][52][53][54][55] Companies such as Google, Qualcomm, Amazon, Apple, Facebook, AMD and Samsung are all designing their own AI ASICs.[56][57][58][59][60][61] Cerebras Systems has built a dedicated AI accelerator based on the largest processor in the industry, the second-generation Wafer Scale Engine (WSE-2), to support deep learning workloads.[62][63]
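
bfloat16 keeps the 8-bit exponent of single precision but only 7 fraction bits, trading precision for dynamic range. A rough sketch of its behavior follows; bfloat16 is emulated here by simple truncation of a float32 bit pattern, whereas real hardware typically uses round-to-nearest-even.

```python
import numpy as np

# Emulate bfloat16 by truncating a float32 to its upper 16 bits: the 8-bit
# exponent (and hence the dynamic range) is preserved, but only 7 fraction
# bits remain. Hardware normally rounds rather than truncates.
def to_bfloat16(x: np.ndarray) -> np.ndarray:
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits & np.uint32(0xFFFF0000)).view(np.float32)

x = np.array([3.14159265, 1.0e-8, 1.0e30], dtype=np.float32)
print(to_bfloat16(x))        # coarse values, but 1e30 is still representable
print(x.astype(np.float16))  # half precision overflows to inf at 1e30
```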

Ongoing research

In-memory computing architectures

In June 2017, IBM researchers announced an architecture, in contrast to the Von Neumann architecture, based on in-memory computing and phase-change memory arrays applied to temporal correlation detection, with the intention of generalizing the approach to heterogeneous computing and massively parallel systems.[64] In October 2018, IBM researchers announced an architecture based on in-memory processing and modeled on the human brain's synaptic network to accelerate deep neural networks.[65] The system is based on phase-change memory arrays.[66]

In-memory computing with analog resistive memories

In 2019, researchers from Politecnico di Milano found a way to solve systems of linear equations in a few tens of nanoseconds via a single operation. Their algorithm is based on in-memory computing with analog resistive memories, which achieves high time and energy efficiency by performing matrix–vector multiplication in one step using Ohm's law and Kirchhoff's law. The researchers showed that a feedback circuit with cross-point resistive memories can solve algebraic problems such as systems of linear equations, matrix eigenvectors, and differential equations in a single step. Such an approach improves computational times drastically in comparison with digital algorithms.[67]
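
The matrix–vector step of such an analog array can be sketched numerically. In the idealized model below (hypothetical conductance and voltage values, and no device non-idealities), each cell's conductance encodes a matrix entry; Ohm's law gives the per-cell current, and Kirchhoff's current law sums the currents along each column, so the whole product appears in one step.

```python
import numpy as np

# Idealized crossbar model (hypothetical values, no device non-idealities):
# conductances G encode the matrix, row voltages V encode the vector.
# Ohm's law gives each cell current G[i, j] * V[i]; Kirchhoff's current law
# sums the cell currents on every column, yielding the product in one step.
G = np.array([[1.0e-6, 2.0e-6],
              [3.0e-6, 4.0e-6]])   # conductances in siemens
V = np.array([0.2, 0.1])           # input voltages in volts

I = G.T @ V                        # column output currents in amperes
print(I)                           # [5.e-07 8.e-07]
```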

Atomically thin semiconductors

In 2020, Marega et al. published experiments with a large-area active channel material for developing logic-in-memory devices and circuits based on floating-gate field-effect transistors (FGFETs).[68] Such atomically thin semiconductors are considered promising for energy-efficient machine learning applications, where the same basic device structure is used for both logic operations and data storage. The authors used two-dimensional materials such as semiconducting molybdenum disulphide to precisely tune FGFETs as building blocks in which logic operations can be performed with the memory elements.[68]

Integrated photonic tensor core

In 1988, Wei Zhang et al. discussed fast optical implementations of convolutional neural networks for alphabet recognition.[12][13] In 2021, J. Feldmann et al. proposed an integrated photonic hardware accelerator for parallel convolutional processing.[69] The authors identify two key advantages of integrated photonics over its electronic counterparts: (1) massively parallel data transfer through wavelength division multiplexing in conjunction with frequency combs, and (2) extremely high data modulation speeds.[69] Their system can execute trillions of multiply-accumulate operations per second, indicating the potential of integrated photonics in data-heavy AI applications.[69]

Nomenclature

As of 2016, the field is still in flux and vendors are pushing their own marketing term for what amounts to an "AI accelerator", in the hope that their designs and APIs will become the dominant design. There is no consensus on the boundary between these devices, nor the exact form they will take; however several examples clearly aim to fill this new space, with a fair amount of overlap in capabilities.

In the past when consumer graphics accelerators emerged, the industry eventually adopted Nvidia's self-assigned term, "the GPU",[70] as the collective noun for "graphics accelerators", which had taken many forms before settling on an overall pipeline implementing a model presented by Direct3D.

All models of Intel Meteor Lake processors have a Versatile Processor Unit (VPU) built-in for accelerating inference for computer vision and deep learning.[71]

Deep Learning Processors (DLP)

Inspired by the pioneering work of the DianNao family, many DLPs have been proposed in both academia and industry, with designs optimized to leverage the features of deep neural networks for high efficiency. At ISCA 2016 alone, three sessions (15% of the accepted papers) were devoted to architecture designs for deep learning. Such efforts include Eyeriss (MIT),[72] EIE (Stanford),[73] Minerva (Harvard),[74] and Stripes (University of Toronto)[75] in academia, and the TPU (Google)[76] and MLU (Cambricon)[77] in industry. Several representative works are listed in Table 1.

Table 1. Typical DLPs

Year | DLP | Institution | Type | Computation | Memory hierarchy | Control | Peak performance
2014 | DianNao[19] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 452 Gops (16-bit)
2014 | DaDianNao[20] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 5.58 Tops (16-bit)
2015 | ShiDianNao[21] | ICT, CAS | digital | scalar MACs | scratchpad | VLIW | 194 Gops (16-bit)
2015 | PuDianNao[22] | ICT, CAS | digital | vector MACs | scratchpad | VLIW | 1,056 Gops (16-bit)
2016 | DnnWeaver | Georgia Tech | digital | vector MACs | scratchpad | - | -
2016 | EIE[73] | Stanford | digital | scalar MACs | scratchpad | - | 102 Gops (16-bit)
2016 | Eyeriss[72] | MIT | digital | scalar MACs | scratchpad | - | 67.2 Gops (16-bit)
2016 | Prime[78] | UCSB | hybrid | Process-in-Memory | ReRAM | - | -
2017 | TPU[76] | Google | digital | scalar MACs | scratchpad | CISC | 92 Tops (8-bit)
2017 | PipeLayer[79] | U of Pittsburgh | hybrid | Process-in-Memory | ReRAM | - | -
2017 | FlexFlow | ICT, CAS | digital | scalar MACs | scratchpad | - | 420 Gops
2017 | DNPU[80] | KAIST | digital | scalar MACs | scratchpad | - | 300 Gops (16-bit), 1,200 Gops (4-bit)
2018 | MAERI | Georgia Tech | digital | scalar MACs | scratchpad | - | -
2018 | PermDNN | City University of New York | digital | vector MACs | scratchpad | - | 614.4 Gops (16-bit)
2018 | UNPU[81] | KAIST | digital | scalar MACs | scratchpad | - | 345.6 Gops (16-bit), 691.2 Gops (8-bit), 1,382 Gops (4-bit), 7,372 Gops (1-bit)
2019 | FPSA | Tsinghua | hybrid | Process-in-Memory | ReRAM | - | -
2019 | Cambricon-F | ICT, CAS | digital | vector MACs | scratchpad | FISA | 14.9 Tops (F1, 16-bit), 956 Tops (F100, 16-bit)

Digital DLPs

The major components of a DLP architecture usually include a computation component, an on-chip memory hierarchy, and the control logic that manages data communication and computing flows.

Regarding the computation component: as most operations in deep learning can be aggregated into vector operations, the most common way to build computation components in digital DLPs is a MAC-based (multiply–accumulate) organization, either with vector MACs[19][20][22] or scalar MACs.[76][21][72] Rather than SIMD or SIMT as in general-purpose processors, deep-learning-specific parallelism is better exploited with these MAC-based organizations.

Regarding the memory hierarchy: as deep learning algorithms require high bandwidth to supply the computation component with sufficient data, DLPs usually employ relatively large on-chip buffers (tens of kilobytes to several megabytes) together with dedicated on-chip data-reuse and data-exchange strategies to alleviate the memory-bandwidth burden. For example, DianNao, with 16 16-input vector MACs, requires 16 × 16 × 2 = 512 16-bit operands per cycle, i.e., a bandwidth requirement of almost 1024 GB/s between the computation components and the buffers; with on-chip reuse, this requirement is reduced drastically.[19] Instead of the caches widely used in general-purpose processors, DLPs typically use scratchpad memory, as it provides higher data-reuse opportunities by exploiting the relatively regular data-access patterns of deep learning algorithms.

Regarding the control logic: as deep learning algorithms keep evolving at a dramatic pace, DLPs have begun to adopt dedicated ISAs (instruction set architectures) to support the deep learning domain flexibly. DianNao initially used a VLIW-style instruction set in which each instruction could finish a layer of a DNN. Cambricon[82] introduced the first deep-learning-specific ISA, which could support more than ten different deep learning algorithms. The TPU likewise exposes five key instructions in its CISC-style ISA.
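
The bandwidth figure quoted above for DianNao follows from simple arithmetic; the sketch below reproduces it, with the roughly 1 GHz clock rate taken as an assumption for illustration rather than a cited specification.

```python
# Back-of-envelope check of the DianNao bandwidth figure quoted above
# (the ~1 GHz clock is an assumption for illustration).
macs = 16             # vector MAC units
inputs_per_mac = 16   # inputs per vector MAC
operands = 2          # one activation and one weight per multiply
bits = 16             # operand width

values_per_cycle = macs * inputs_per_mac * operands      # 512 operands
bytes_per_cycle = values_per_cycle * bits // 8           # 1024 bytes
clock_hz = 1e9                                           # assumed ~1 GHz
print(bytes_per_cycle * clock_hz / 1e9, "GB/s without on-chip reuse")
# On-chip scratchpad reuse means only a fraction of these operands must be
# refetched each cycle, so the external bandwidth requirement drops sharply.
```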

Hybrid DLPs

Hybrid DLPs have emerged for DNN inference and training acceleration because of their high efficiency. Processing-in-memory (PIM) architectures are one of the most important types of hybrid DLP. The key design concept of PIM is to bridge the gap between computing and memory in the following ways: 1) moving computation components into memory cells, controllers, or memory chips to alleviate the memory-wall issue;[79][83][84] such architectures significantly shorten data paths and leverage much higher internal bandwidth, resulting in attractive performance improvements; 2) building highly efficient DNN engines by adopting computational memory devices. In 2013, HP Labs demonstrated the capability of using a ReRAM crossbar structure for computing.[85] Inspired by this work, much subsequent work has explored new architectures and system designs based on ReRAM,[78][86][87][79] phase-change memory,[83][88][89] and other technologies.

Benchmarks

Benchmarks such as MLPerf and others may be used to evaluate the performance of AI accelerators.[90] Table 2 lists several typical benchmarks for AI accelerators.

Table 2. Benchmarks

Year | NN benchmark | Affiliations | # of microbenchmarks | # of component benchmarks | # of application benchmarks
2012 | BenchNN | ICT, CAS | N/A | 12 | N/A
2016 | Fathom | Harvard | N/A | 8 | N/A
2017 | BenchIP | ICT, CAS | 12 | 11 | N/A
2017 | DAWNBench | Stanford | 8 | N/A | N/A
2017 | DeepBench | Baidu | 4 | N/A | N/A
2018 | MLPerf | Harvard, Intel, and Google, etc. | N/A | 7 | N/A
2019 | AIBench | ICT, CAS and Alibaba, etc. | 12 | 16 | 2
2019 | NNBench-X | UCSB | N/A | 10 | N/A

Potential applications

  • Agricultural robots, for example, herbicide-free weed control.[91]
  • Autonomous vehicles: Nvidia has targeted their Drive PX-series boards at this application.[92]
  • Computer-aided diagnosis
  • Industrial robots, increasing the range of tasks that can be automated by adding adaptability to variable situations
  • Machine translation
  • Military robots
  • Natural language processing
  • Search engines, increasing the energy efficiency of data centers and the ability to use increasingly advanced queries
  • Unmanned aerial vehicles, e.g. navigation systems; the Movidius Myriad 2, for example, has been demonstrated successfully guiding autonomous drones[93]
  • Voice user interfaces, e.g. in mobile phones, a target for Qualcomm Zeroth[94]

See also

  • Cognitive computer
  • Neuromorphic engineering
  • Optical neural network
  • Physical neural network
  • Cerebras Systems

References

  1. ^ "Intel unveils Movidius Compute Stick USB AI Accelerator". July 21, 2017. Archived from the original on August 11, 2017. Retrieved August 11, 2017.
  2. ^ "Inspurs unveils GX4 AI Accelerator". June 21, 2017.
  3. ^ Wiggers, Kyle (November 6, 2019) [2019], "Neural Magic raises $15 million to boost AI inferencing speed on off-the-shelf processors", archived from the original on March 6, 2020, retrieved March 14, 2020
  4. ^ "Google Designing AI Processors". Google using its own AI accelerators.
  5. ^ Moss, Sebastian (March 23, 2022). "Nvidia reveals new Hopper H100 GPU, with 80 billion transistors". Data Center Dynamics. Retrieved January 30, 2024.
  6. ^ "Deploying Transformers on the Apple Neural Engine". Apple Machine Learning Research. Retrieved August 24, 2023.
  7. ^ "HUAWEI Reveals the Future of Mobile AI at IFA".
  8. ^ Jouppi, Norman P.; et al. (June 24, 2017). "In-Datacenter Performance Analysis of a Tensor Processing Unit". ACM SIGARCH Computer Architecture News. 45 (2): 1–12. arXiv:1704.04760. doi:10.1145/3140659.3080246.
  9. ^ Patel, Dylan; Nishball, Daniel; Xie, Myron (November 9, 2023). "Nvidia's New China AI Chips Circumvent US Restrictions". SemiAnalysis. Retrieved February 7, 2024.
  10. ^ Dvorak, J.C. (May 29, 1990). "Inside Track". PC Magazine. Retrieved December 26, 2023.
  11. ^ "convolutional neural network demo from 1993 featuring DSP32 accelerator". YouTube.
  12. ^ a b Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of Annual Conference of the Japan Society of Applied Physics.
  13. ^ a b Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.
  14. ^ Asanović, K.; Beck, J.; Feldman, J.; Morgan, N.; Wawrzynek, J. (January 1994). "Designing a connectionist network supercomputer". International Journal of Neural Systems. ResearchGate. 4 (4): 317–26. doi:10.1142/S0129065793000250. Retrieved December 26, 2023.
  15. ^ "The end of general purpose computers (not)". YouTube.
  16. ^ Ramacher, U.; Raab, W.; Hachmann, J.A.U.; Beichter, J.; Bruls, N.; Wesseling, M.; Sicheneder, E.; Glass, J.; Wurz, A.; Manner, R. (1995). Proceedings of 9th International Parallel Processing Symposium. pp. 774–781. CiteSeerX 10.1.1.27.6410. doi:10.1109/IPPS.1995.395862. ISBN 978-0-8186-7074-9. S2CID 16364797.
  17. ^ a b Gschwind, M.; Salapura, V.; Maischberger, O. (February 1995). "Space Efficient Neural Net Implementation". ResearchGate. Retrieved December 26, 2023.
  18. ^ a b Gschwind, M.; Salapura, V.; Maischberger, O. (1996). "A Generic Building Block for Hopfield Neural Networks with On-Chip Learning". 1996 IEEE International Symposium on Circuits and Systems. Circuits and Systems Connecting the World. ISCAS 96. pp. 49–52. doi:10.1109/ISCAS.1996.598474. ISBN 0-7803-3073-0. S2CID 17630664.
  19. ^ a b c d Chen, Tianshi; Du, Zidong; Sun, Ninghui; Wang, Jia; Wu, Chengyong; Chen, Yunji; Temam, Olivier (April 5, 2014). "DianNao". ACM SIGARCH Computer Architecture News. 42 (1): 269–284. doi:10.1145/2654822.2541967. ISSN 0163-5964.
  20. ^ a b c Chen, Yunji; Luo, Tao; Liu, Shaoli; Zhang, Shijin; He, Liqiang; Wang, Jia; Li, Ling; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (December 2014). "DaDianNao: A Machine-Learning Supercomputer". 2014 47th Annual IEEE/ACM International Symposium on Microarchitecture. IEEE. pp. 609–622. doi:10.1109/micro.2014.58. ISBN 978-1-4799-6998-2. S2CID 6838992.
  21. ^ a b c Du, Zidong; Fasthuber, Robert; Chen, Tianshi; Ienne, Paolo; Li, Ling; Luo, Tao; Feng, Xiaobing; Chen, Yunji; Temam, Olivier (January 4, 2016). "ShiDianNao". ACM SIGARCH Computer Architecture News. 43 (3S): 92–104. doi:10.1145/2872887.2750389. ISSN 0163-5964.
  22. ^ a b c Liu, Daofu; Chen, Tianshi; Liu, Shaoli; Zhou, Jinhong; Zhou, Shengyuan; Teman, Olivier; Feng, Xiaobing; Zhou, Xuehai; Chen, Yunji (May 29, 2015). "PuDianNao". ACM SIGARCH Computer Architecture News. 43 (1): 369–381. doi:10.1145/2786763.2694358. ISSN 0163-5964.
  23. ^ Chen, Yunji; Chen, Tianshi; Xu, Zhiwei; Sun, Ninghui; Temam, Olivier (October 28, 2016). "DianNao family". Communications of the ACM. 59 (11): 105–112. doi:10.1145/2996864. ISSN 0001-0782. S2CID 207243998.
  24. ^ "Qualcomm Helps Make Your Mobile Devices Smarter With New Snapdragon Machine Learning Software Development Kit". Qualcomm.
  25. ^ Rubin, Ben Fox. "Qualcomm's Zeroth platform could make your smartphone much smarter". CNET. Retrieved September 28, 2021.
  26. ^ Gschwind, Michael; Hofstee, H. Peter; Flachs, Brian; Hopkins, Martin; Watanabe, Yukio; Yamazaki, Takeshi (2006). "Synergistic Processing in Cell's Multicore Architecture". IEEE Micro. 26 (2): 10–24. doi:10.1109/MM.2006.41. S2CID 17834015.
  27. ^ De Fabritiis, G. (2007). "Performance of Cell processor for biomolecular simulations". Computer Physics Communications. 176 (11–12): 660–664. arXiv:physics/0611201. Bibcode:2007CoPhC.176..660D. doi:10.1016/j.cpc.2007.02.107. S2CID 13871063.
  28. ^ Video Processing and Retrieval on Cell architecture. CiteSeerX 10.1.1.138.5133.
  29. ^ Benthin, Carsten; Wald, Ingo; Scherbaum, Michael; Friedrich, Heiko (2006). 2006 IEEE Symposium on Interactive Ray Tracing. pp. 15–23. CiteSeerX 10.1.1.67.8982. doi:10.1109/RT.2006.280210. ISBN 978-1-4244-0693-7. S2CID 1198101.
  30. ^ "Development of an artificial neural network on a heterogeneous multicore architecture to predict a successful weight loss in obese individuals" (PDF). Archived from the original (PDF) on August 30, 2017. Retrieved November 14, 2017.
  31. ^ Kwon, Bomjun; Choi, Taiho; Chung, Heejin; Kim, Geonho (2008). 2008 5th IEEE Consumer Communications and Networking Conference. pp. 1030–1034. doi:10.1109/ccnc08.2007.235. ISBN 978-1-4244-1457-4. S2CID 14429828.
  32. ^ Duan, Rubing; Strey, Alfred (2008). Euro-Par 2008 – Parallel Processing. Lecture Notes in Computer Science. Vol. 5168. pp. 665–675. doi:10.1007/978-3-540-85451-7_71. ISBN 978-3-540-85450-0.
  33. ^ "Improving the performance of video with AVX". February 8, 2012.
  34. ^ Chellapilla, K.; Sidd Puri; Simard, P. (October 23, 2006). "High Performance Convolutional Neural Networks for Document Processing". 10th International Workshop on Frontiers in Handwriting Recognition. Retrieved December 23, 2023.
  35. ^ Krizhevsky, A.; Sutskever, I.; Hinton, G.E. (May 24, 2017). "ImageNet Classification with Deep Convolutional Neural Networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386. Retrieved December 23, 2023.
  36. ^ Krizhevsky, Alex; Sutskever, Ilya; Hinton, Geoffrey E (May 24, 2017). "ImageNet classification with deep convolutional neural networks". Communications of the ACM. 60 (6): 84–90. doi:10.1145/3065386.
  37. ^ "Summit: Oak Ridge National Laboratory's 200 petaflop supercomputer". United States Department of Energy. 2024. Retrieved January 8, 2024.
  38. ^ Roe, R. (May 17, 2023). "Nvidia in the Driver's Seat for Deep Learning". insideHPC. Retrieved December 23, 2023.
  39. ^ Bohn, D. (January 5, 2016). "Nvidia announces 'supercomputer' for self-driving cars at CES 2016". Vox Media. Retrieved December 23, 2023.
  40. ^ "A Survey on Optimized Implementation of Deep Learning Models on the NVIDIA Jetson Platform", 2019
  41. ^ a b Harris, Mark (May 11, 2017). "CUDA 9 Features Revealed: Volta, Cooperative Groups and More". Retrieved August 12, 2017.
  42. ^ Sefat, Md Syadus; Aslan, Semih; Kellington, Jeffrey W; Qasem, Apan (August 2019). "Accelerating HotSpots in Deep Neural Networks on a CAPI-Based FPGA". 2019 IEEE 21st International Conference on High Performance Computing and Communications; IEEE 17th International Conference on Smart City; IEEE 5th International Conference on Data Science and Systems (HPCC/SmartCity/DSS). pp. 248–256. doi:10.1109/HPCC/SmartCity/DSS.2019.00048. ISBN 978-1-7281-2058-4. S2CID 203656070.
  43. ^ "FPGA Based Deep Learning Accelerators Take on ASICs". The Next Platform. August 23, 2016. Retrieved September 7, 2016.
  44. ^ "Microsoft unveils Project Brainwave for real-time AI". Microsoft. August 22, 2017.
  45. ^ "Google boosts machine learning with its Tensor Processing Unit". May 19, 2016. Retrieved September 13, 2016.
  46. ^ "Chip could bring deep learning to mobile devices". www.sciencedaily.com. February 3, 2016. Retrieved September 13, 2016.
  47. ^ "Deep Learning with Limited Numerical Precision" (PDF).
  48. ^ Rastegari, Mohammad; Ordonez, Vicente; Redmon, Joseph; Farhadi, Ali (2016). "XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks". arXiv:1603.05279 [cs.CV].
  49. ^ Khari Johnson (May 23, 2018). "Intel unveils Nervana Neural Net L-1000 for accelerated AI training". VentureBeat. Retrieved May 23, 2018. ...Intel will be extending bfloat16 support across our AI product lines, including Intel Xeon processors and Intel FPGAs.
  50. ^ Michael Feldman (May 23, 2018). "Intel Lays Out New Roadmap for AI Portfolio". TOP500 Supercomputer Sites. Retrieved May 23, 2018. Intel plans to support this format across all their AI products, including the Xeon and FPGA lines
  51. ^ Lucian Armasu (May 23, 2018). "Intel To Launch Spring Crest, Its First Neural Network Processor, In 2019". Tom's Hardware. Retrieved May 23, 2018. Intel said that the NNP-L1000 would also support bfloat16, a numerical format that's being adopted by all the ML industry players for neural networks. The company will also support bfloat16 in its FPGAs, Xeons, and other ML products. The Nervana NNP-L1000 is scheduled for release in 2019.
  52. ^ "Available TensorFlow Ops | Cloud TPU | Google Cloud". Google Cloud. Retrieved May 23, 2018. This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU.
  53. ^ Elmar Haußmann (April 26, 2018). . RiseML Blog. Archived from the original on April 26, 2018. Retrieved May 23, 2018. For the Cloud TPU, Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1.7.0. Both the TPU and GPU implementations make use of mixed-precision computation on the respective architecture and store most tensors with half-precision.
  54. ^ Tensorflow Authors (February 28, 2018). "ResNet-50 using BFloat16 on TPU". Google. Retrieved May 23, 2018.[permanent dead link]
  55. ^ Joshua V. Dillon; Ian Langmore; Dustin Tran; Eugene Brevdo; Srinivas Vasudevan; Dave Moore; Brian Patton; Alex Alemi; Matt Hoffman; Rif A. Saurous (November 28, 2017). TensorFlow Distributions (Report). arXiv:1711.10604. Bibcode:2017arXiv171110604D. Accessed May 23, 2018. All operations in TensorFlow Distributions are numerically stable across half, single, and double floating-point precisions (as TensorFlow dtypes: tf.bfloat16 (truncated floating point), tf.float16, tf.float32, tf.float64). Class constructors have a validate_args flag for numerical asserts
  56. ^ "Google Reveals a Powerful New AI Chip and Supercomputer". MIT Technology Review. Retrieved July 27, 2021.
  57. ^ "What to Expect From Apple's Neural Engine in the A11 Bionic SoC – ExtremeTech". www.extremetech.com. Retrieved July 27, 2021.
  58. ^ "Facebook has a new job posting calling for chip designers". April 19, 2018.[permanent dead link]
  59. ^ "Facebook joins Amazon and Google in AI chip race". Financial Times. February 18, 2019.
  60. ^ Amadeo, Ron (May 11, 2021). "Samsung and AMD will reportedly take on Apple's M1 SoC later this year". Ars Technica. Retrieved July 28, 2021.
  61. ^ Smith, Ryan. "The AI Race Expands: Qualcomm Reveals "Cloud AI 100" Family of Datacenter AI Inference Accelerators for 2020". www.anandtech.com. Retrieved September 28, 2021.
  62. ^ Woodie, Alex (November 1, 2021). "Cerebras Hits the Accelerator for Deep Learning Workloads". Datanami. Retrieved August 3, 2022.
  63. ^ "Cerebras launches new AI supercomputing processor with 2.6 trillion transistors". VentureBeat. April 20, 2021. Retrieved August 3, 2022.
  64. ^ Abu Sebastian; Tomas Tuma; Nikolaos Papandreou; Manuel Le Gallo; Lukas Kull; Thomas Parnell; Evangelos Eleftheriou (2017). "Temporal correlation detection using computational phase-change memory". Nature Communications. 8 (1): 1115. arXiv:1706.00511. Bibcode:2017NatCo...8.1115S. doi:10.1038/s41467-017-01481-9. PMC 5653661. PMID 29062022.
  65. ^ "A new brain-inspired architecture could improve how computers handle data and advance AI". American Institute of Physics. October 3, 2018. Retrieved October 5, 2018.
  66. ^ Carlos Ríos; Nathan Youngblood; Zengguang Cheng; Manuel Le Gallo; Wolfram H.P. Pernice; C. David Wright; Abu Sebastian; Harish Bhaskaran (2018). "In-memory computing on a photonic platform". Science Advances. 5 (2): eaau5759. arXiv:1801.06228. Bibcode:2019SciA....5.5759R. doi:10.1126/sciadv.aau5759. PMC 6377270. PMID 30793028. S2CID 7637801.
  67. ^ Zhong Sun; Giacomo Pedretti; Elia Ambrosi; Alessandro Bricalli; Wei Wang; Daniele Ielmini (2019). "Solving matrix equations in one step with cross-point resistive arrays". Proceedings of the National Academy of Sciences. 116 (10): 4123–4128. Bibcode:2019PNAS..116.4123S. doi:10.1073/pnas.1815682116. PMC 6410822. PMID 30782810.
  68. ^ a b Marega, Guilherme Migliato; Zhao, Yanfei; Avsar, Ahmet; Wang, Zhenyu; Tripati, Mukesh; Radenovic, Aleksandra; Kis, Anras (2020). "Logic-in-memory based on an atomically thin semiconductor". Nature. 587 (2): 72–77. Bibcode:2020Natur.587...72M. doi:10.1038/s41586-020-2861-0. PMC 7116757. PMID 33149289.
  69. ^ a b c Feldmann, J.; Youngblood, N.; Karpov, M.; et al. (2021). "Parallel convolutional processing using an integrated photonic tensor". Nature. 589 (2): 52–58. arXiv:2002.00281. doi:10.1038/s41586-020-03070-1. PMID 33408373. S2CID 211010976.
  70. ^ . Archived from the original on February 27, 2016.
  71. ^ "Intel to Bring a 'VPU' Processor Unit to 14th Gen Meteor Lake Chips". PCMAG.
  72. ^ a b c Chen, Yu-Hsin; Emer, Joel; Sze, Vivienne (2017). "Eyeriss: A Spatial Architecture for Energy-Efficient Dataflow for Convolutional Neural Networks". IEEE Micro: 1. doi:10.1109/mm.2017.265085944. hdl:1721.1/102369. ISSN 0272-1732.
  73. ^ a b Han, Song; Liu, Xingyu; Mao, Huizi; Pu, Jing; Pedram, Ardavan; Horowitz, Mark A.; Dally, William J. (February 3, 2016). EIE: Efficient Inference Engine on Compressed Deep Neural Network. OCLC 1106232247.
  74. ^ Reagen, Brandon; Whatmough, Paul; Adolf, Robert; Rama, Saketh; Lee, Hyunkwang; Lee, Sae Kyu; Hernandez-Lobato, Jose Miguel; Wei, Gu-Yeon; Brooks, David (June 2016). "Minerva: Enabling Low-Power, Highly-Accurate Deep Neural Network Accelerators". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). Seoul: IEEE. pp. 267–278. doi:10.1109/ISCA.2016.32. ISBN 978-1-4673-8947-1.
  75. ^ Judd, Patrick; Albericio, Jorge; Moshovos, Andreas (January 1, 2017). "Stripes: Bit-Serial Deep Neural Network Computing". IEEE Computer Architecture Letters. 16 (1): 80–83. doi:10.1109/lca.2016.2597140. ISSN 1556-6056. S2CID 3784424.
  76. ^ a b c Jouppi, N.; Young, C.; Patil, N.; Patterson, D. (June 24, 2017). "In-Datacenter Performance Analysis of a Tensor Processing Unit". Association for Computing Machinery. pp. 1–12. doi:10.1145/3079856.3080246. ISBN 9781450348928. S2CID 4202768. Retrieved January 8, 2024.
  77. ^ "MLU 100 intelligence accelerator card" (in Japanese). Cambricon. 2024. Retrieved January 8, 2024.
  78. ^ a b Chi, Ping; Li, Shuangchen; Xu, Cong; Zhang, Tao; Zhao, Jishen; Liu, Yongpan; Wang, Yu; Xie, Yuan (June 2016). "PRIME: A Novel Processing-in-Memory Architecture for Neural Network Computation in ReRAM-Based Main Memory". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 27–39. doi:10.1109/isca.2016.13. ISBN 978-1-4673-8947-1.
  79. ^ a b c Song, Linghao; Qian, Xuehai; Li, Hai; Chen, Yiran (February 2017). "PipeLayer: A Pipelined ReRAM-Based Accelerator for Deep Learning". 2017 IEEE International Symposium on High Performance Computer Architecture (HPCA). IEEE. pp. 541–552. doi:10.1109/hpca.2017.55. ISBN 978-1-5090-4985-1. S2CID 15281419.
  80. ^ Shin, Dongjoo; Lee, Jinmook; Lee, Jinsu; Yoo, Hoi-Jun (2017). "14.2 DNPU: An 8.1TOPS/W reconfigurable CNN-RNN processor for general-purpose deep neural networks". 2017 IEEE International Solid-State Circuits Conference (ISSCC). pp. 240–241. doi:10.1109/ISSCC.2017.7870350. ISBN 978-1-5090-3758-2. S2CID 206998709. Retrieved August 24, 2023.
  81. ^ Lee, Jinmook; Kim, Changhyeon; Kang, Sanghoon; Shin, Dongjoo; Kim, Sangyeob; Yoo, Hoi-Jun (2018). "UNPU: A 50.6TOPS/W unified deep neural network accelerator with 1b-to-16b fully-variable weight bit-precision". 2018 IEEE International Solid - State Circuits Conference - (ISSCC). pp. 218–220. doi:10.1109/ISSCC.2018.8310262. ISBN 978-1-5090-4940-0. S2CID 3861747. Retrieved November 30, 2023.
  82. ^ Liu, Shaoli; Du, Zidong; Tao, Jinhua; Han, Dong; Luo, Tao; Xie, Yuan; Chen, Yunji; Chen, Tianshi (June 2016). "Cambricon: An Instruction Set Architecture for Neural Networks". 2016 ACM/IEEE 43rd Annual International Symposium on Computer Architecture (ISCA). IEEE. pp. 393–405. doi:10.1109/isca.2016.42. ISBN 978-1-4673-8947-1.
  83. ^ a b Ambrogio, Stefano; Narayanan, Pritish; Tsai, Hsinyu; Shelby, Robert M.; Boybat, Irem; di Nolfo, Carmelo; Sidler, Severin; Giordano, Massimo; Bodini, Martina; Farinha, Nathan C. P.; Killeen, Benjamin (June 2018). "Equivalent-accuracy accelerated neural-network training using analogue memory". Nature. 558 (7708): 60–67. Bibcode:2018Natur.558...60A. doi:10.1038/s41586-018-0180-5. ISSN 0028-0836. PMID 29875487. S2CID 46956938.
  84. ^ Chen, Wei-Hao; Lin, Wen-Jang; Lai, Li-Ya; Li, Shuangchen; Hsu, Chien-Hua; Lin, Huan-Ting; Lee, Heng-Yuan; Su, Jian-Wei; Xie, Yuan; Sheu, Shyh-Shyuan; Chang, Meng-Fan (December 2017). "A 16Mb dual-mode ReRAM macro with sub-14ns computing-in-memory and memory functions enabled by self-write termination scheme". 2017 IEEE International Electron Devices Meeting (IEDM). IEEE. pp. 28.2.1–28.2.4. doi:10.1109/iedm.2017.8268468. ISBN 978-1-5386-3559-9. S2CID 19556846.
  85. ^ Yang, J. Joshua; Strukov, Dmitri B.; Stewart, Duncan R. (January 2013). "Memristive devices for computing". Nature Nanotechnology. 8 (1): 13–24. Bibcode:2013NatNa...8...13Y. doi:10.1038/nnano.2012.240. ISSN 1748-3395. PMID 23269430.
  86. ^ Shafiee, Ali; Nag, Anirban; Muralimanohar, Naveen; Balasubramonian, Rajeev; Strachan, John Paul; Hu, Miao; Williams, R. Stanley; Srikumar, Vivek (October 12, 2016). "ISAAC". ACM SIGARCH Computer Architecture News. 44 (3): 14–26. doi:10.1145/3007787.3001139. ISSN 0163-5964. S2CID 6329628.
  87. ^ Ji, Yu Zhang, Youyang Xie, Xinfeng Li, Shuangchen Wang, Peiqi Hu, Xing Zhang, Youhui Xie, Yuan (January 27, 2019). FPSA: A Full System Stack Solution for Reconfigurable ReRAM-based NN Accelerator Architecture. OCLC 1106329050.{{cite book}}: CS1 maint: multiple names: authors list (link)
  88. ^ Nandakumar, S. R.; Boybat, Irem; Joshi, Vinay; Piveteau, Christophe; Le Gallo, Manuel; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (November 2019). "Phase-Change Memory Models for Deep Learning Training and Inference". 2019 26th IEEE International Conference on Electronics, Circuits and Systems (ICECS). IEEE. pp. 727–730. doi:10.1109/icecs46596.2019.8964852. ISBN 978-1-7281-0996-1. S2CID 210930121.
  89. ^ Joshi, Vinay; Le Gallo, Manuel; Haefeli, Simon; Boybat, Irem; Nandakumar, S. R.; Piveteau, Christophe; Dazzi, Martino; Rajendran, Bipin; Sebastian, Abu; Eleftheriou, Evangelos (May 18, 2020). "Accurate deep neural network inference using computational phase-change memory". Nature Communications. 11 (1): 2473. arXiv:1906.03138. Bibcode:2020NatCo..11.2473J. doi:10.1038/s41467-020-16108-9. ISSN 2041-1723. PMC 7235046. PMID 32424184.
  90. ^ "Nvidia claims 'record performance' for Hopper MLPerf debut".
  91. ^ (PDF). University of Florida. CiteSeerX 10.1.1.7.342. Archived from the original (PDF) on June 23, 2010.
  92. ^ "Self-Driving Cars Technology & Solutions from NVIDIA Automotive". NVIDIA.
  93. ^ "movidius powers worlds most intelligent drone". March 16, 2016.
  94. ^ "Qualcomm Research brings server class machine learning to everyday devices–making them smarter [VIDEO]". October 2015.

External links

  • Nvidia Puts The Accelerator To The Metal With Pascal, The Next Platform
  • Eyeriss Project, MIT
  • https://alphaics.ai/

Net ImageNet Classification Using Binary Convolutional Neural Networks arXiv 1603 05279 cs CV Khari Johnson May 23 2018 Intel unveils Nervana Neural Net L 1000 for accelerated AI training VentureBeat Retrieved May 23 2018 Intel will be extending bfloat16 support across our AI product lines including Intel Xeon processors and Intel FPGAs Michael Feldman May 23 2018 Intel Lays Out New Roadmap for AI Portfolio TOP500 Supercomputer Sites Retrieved May 23 2018 Intel plans to support this format across all their AI products including the Xeon and FPGA lines Lucian Armasu May 23 2018 Intel To Launch Spring Crest Its First Neural Network Processor In 2019 Tom s Hardware Retrieved May 23 2018 Intel said that the NNP L1000 would also support bfloat16 a numerical format that s being adopted by all the ML industry players for neural networks The company will also support bfloat16 in its FPGAs Xeons and other ML products The Nervana NNP L1000 is scheduled for release in 2019 Available TensorFlow Ops Cloud TPU Google Cloud Google Cloud Retrieved May 23 2018 This page lists the TensorFlow Python APIs and graph operators available on Cloud TPU Elmar Haussmann April 26 2018 Comparing Google s TPUv2 against Nvidia s V100 on ResNet 50 RiseML Blog Archived from the original on April 26 2018 Retrieved May 23 2018 For the Cloud TPU Google recommended we use the bfloat16 implementation from the official TPU repository with TensorFlow 1 7 0 Both the TPU and GPU implementations make use of mixed precision computation on the respective architecture and store most tensors with half precision Tensorflow Authors February 28 2018 ResNet 50 using BFloat16 on TPU Google Retrieved May 23 2018 permanent dead link Joshua V Dillon Ian Langmore Dustin Tran Eugene Brevdo Srinivas Vasudevan Dave Moore Brian Patton Alex Alemi Matt Hoffman Rif A Saurous November 28 2017 TensorFlow Distributions Report arXiv 1711 10604 Bibcode 2017arXiv171110604D Accessed May 23 2018 All operations in TensorFlow Distributions are numerically stable across half single and double floating point precisions as TensorFlow dtypes tf bfloat16 truncated floating point tf float16 tf float32 tf float64 Class constructors have a validate args flag for numerical asserts Google Reveals a Powerful New AI Chip and Supercomputer MIT Technology Review Retrieved July 27 2021 What to Expect From Apple s Neural Engine in the A11 Bionic SoC ExtremeTech www extremetech com Retrieved July 27 2021 Facebook has a new job posting calling for chip designers April 19 2018 permanent dead link Facebook joins Amazon and Google in AI chip race Financial Times February 18 2019 Amadeo Ron May 11 2021 Samsung and AMD will reportedly take on Apple s M1 SoC later this year Ars Technica Retrieved July 28 2021 Smith Ryan The AI Race Expands Qualcomm Reveals Cloud AI 100 Family of Datacenter AI Inference Accelerators for 2020 www anandtech com Retrieved September 28 2021 Woodie Alex November 1 2021 Cerebras Hits the Accelerator for Deep Learning Workloads Datanami Retrieved August 3 2022 Cerebras launches new AI supercomputing processor with 2 6 trillion transistors VentureBeat April 20 2021 Retrieved August 3 2022 Abu Sebastian Tomas Tuma Nikolaos Papandreou Manuel Le Gallo Lukas Kull Thomas Parnell Evangelos Eleftheriou 2017 Temporal correlation detection using computational phase change memory Nature Communications 8 1 1115 arXiv 1706 00511 Bibcode 2017NatCo 8 1115S doi 10 1038 s41467 017 01481 9 PMC 5653661 PMID 29062022 A new brain inspired architecture could improve how computers 
handle data and advance AI American Institute of Physics October 3 2018 Retrieved October 5 2018 Carlos Rios Nathan Youngblood Zengguang Cheng Manuel Le Gallo Wolfram H P Pernice C David Wright Abu Sebastian Harish Bhaskaran 2018 In memory computing on a photonic platform Science Advances 5 2 eaau5759 arXiv 1801 06228 Bibcode 2019SciA 5 5759R doi 10 1126 sciadv aau5759 PMC 6377270 PMID 30793028 S2CID 7637801 Zhong Sun Giacomo Pedretti Elia Ambrosi Alessandro Bricalli Wei Wang Daniele Ielmini 2019 Solving matrix equations in one step with cross point resistive arrays Proceedings of the National Academy of Sciences 116 10 4123 4128 Bibcode 2019PNAS 116 4123S doi 10 1073 pnas 1815682116 PMC 6410822 PMID 30782810 a b Marega Guilherme Migliato Zhao Yanfei Avsar Ahmet Wang Zhenyu Tripati Mukesh Radenovic Aleksandra Kis Anras 2020 Logic in memory based on an atomically thin semiconductor Nature 587 2 72 77 Bibcode 2020Natur 587 72M doi 10 1038 s41586 020 2861 0 PMC 7116757 PMID 33149289 a b c Feldmann J Youngblood N Karpov M et al 2021 Parallel convolutional processing using an integrated photonic tensor Nature 589 2 52 58 arXiv 2002 00281 doi 10 1038 s41586 020 03070 1 PMID 33408373 S2CID 211010976 NVIDIA launches the World s First Graphics Processing Unit the GeForce 256 Archived from the original on February 27 2016 Intel to Bring a VPU Processor Unit to 14th Gen Meteor Lake Chips PCMAG a b c Chen Yu Hsin Emer Joel Sze Vivienne 2017 Eyeriss A Spatial Architecture for Energy Efficient Dataflow for Convolutional Neural Networks IEEE Micro 1 doi 10 1109 mm 2017 265085944 hdl 1721 1 102369 ISSN 0272 1732 a b Han Song Liu Xingyu Mao Huizi Pu Jing Pedram Ardavan Horowitz Mark A Dally William J February 3 2016 EIE Efficient Inference Engine on Compressed Deep Neural Network OCLC 1106232247 Reagen Brandon Whatmough Paul Adolf Robert Rama Saketh Lee Hyunkwang Lee Sae Kyu Hernandez Lobato Jose Miguel Wei Gu Yeon Brooks David June 2016 Minerva Enabling Low Power Highly Accurate Deep Neural Network Accelerators 2016 ACM IEEE 43rd Annual International Symposium on Computer Architecture ISCA Seoul IEEE pp 267 278 doi 10 1109 ISCA 2016 32 ISBN 978 1 4673 8947 1 Judd Patrick Albericio Jorge Moshovos Andreas January 1 2017 Stripes Bit Serial Deep Neural Network Computing IEEE Computer Architecture Letters 16 1 80 83 doi 10 1109 lca 2016 2597140 ISSN 1556 6056 S2CID 3784424 a b c Jouppi N Young C Patil N Patterson D June 24 2017 In Datacenter Performance Analysis of a Tensor Processing Unit Association for Computing Machinery pp 1 12 doi 10 1145 3079856 3080246 ISBN 9781450348928 S2CID 4202768 Retrieved January 8 2024 MLU 100 intelligence accelerator card in Japanese Cambricon 2024 Retrieved January 8 2024 a b Chi Ping Li Shuangchen Xu Cong Zhang Tao Zhao Jishen Liu Yongpan Wang Yu Xie Yuan June 2016 PRIME A Novel Processing in Memory Architecture for Neural Network Computation in ReRAM Based Main Memory 2016 ACM IEEE 43rd Annual International Symposium on Computer Architecture ISCA IEEE pp 27 39 doi 10 1109 isca 2016 13 ISBN 978 1 4673 8947 1 a b c Song Linghao Qian Xuehai Li Hai Chen Yiran February 2017 PipeLayer A Pipelined ReRAM Based Accelerator for Deep Learning 2017 IEEE International Symposium on High Performance Computer Architecture HPCA IEEE pp 541 552 doi 10 1109 hpca 2017 55 ISBN 978 1 5090 4985 1 S2CID 15281419 Shin Dongjoo Lee Jinmook Lee Jinsu Yoo Hoi Jun 2017 14 2 DNPU An 8 1TOPS W reconfigurable CNN RNN processor for general purpose deep neural networks 2017 IEEE International Solid State 
Circuits Conference ISSCC pp 240 241 doi 10 1109 ISSCC 2017 7870350 ISBN 978 1 5090 3758 2 S2CID 206998709 Retrieved August 24 2023 Lee Jinmook Kim Changhyeon Kang Sanghoon Shin Dongjoo Kim Sangyeob Yoo Hoi Jun 2018 UNPU A 50 6TOPS W unified deep neural network accelerator with 1b to 16b fully variable weight bit precision 2018 IEEE International Solid State Circuits Conference ISSCC pp 218 220 doi 10 1109 ISSCC 2018 8310262 ISBN 978 1 5090 4940 0 S2CID 3861747 Retrieved November 30 2023 Liu Shaoli Du Zidong Tao Jinhua Han Dong Luo Tao Xie Yuan Chen Yunji Chen Tianshi June 2016 Cambricon An Instruction Set Architecture for Neural Networks 2016 ACM IEEE 43rd Annual International Symposium on Computer Architecture ISCA IEEE pp 393 405 doi 10 1109 isca 2016 42 ISBN 978 1 4673 8947 1 a b Ambrogio Stefano Narayanan Pritish Tsai Hsinyu Shelby Robert M Boybat Irem di Nolfo Carmelo Sidler Severin Giordano Massimo Bodini Martina Farinha Nathan C P Killeen Benjamin June 2018 Equivalent accuracy accelerated neural network training using analogue memory Nature 558 7708 60 67 Bibcode 2018Natur 558 60A doi 10 1038 s41586 018 0180 5 ISSN 0028 0836 PMID 29875487 S2CID 46956938 Chen Wei Hao Lin Wen Jang Lai Li Ya Li Shuangchen Hsu Chien Hua Lin Huan Ting Lee Heng Yuan Su Jian Wei Xie Yuan Sheu Shyh Shyuan Chang Meng Fan December 2017 A 16Mb dual mode ReRAM macro with sub 14ns computing in memory and memory functions enabled by self write termination scheme 2017 IEEE International Electron Devices Meeting IEDM IEEE pp 28 2 1 28 2 4 doi 10 1109 iedm 2017 8268468 ISBN 978 1 5386 3559 9 S2CID 19556846 Yang J Joshua Strukov Dmitri B Stewart Duncan R January 2013 Memristive devices for computing Nature Nanotechnology 8 1 13 24 Bibcode 2013NatNa 8 13Y doi 10 1038 nnano 2012 240 ISSN 1748 3395 PMID 23269430 Shafiee Ali Nag Anirban Muralimanohar Naveen Balasubramonian Rajeev Strachan John Paul Hu Miao Williams R Stanley Srikumar Vivek October 12 2016 ISAAC ACM SIGARCH Computer Architecture News 44 3 14 26 doi 10 1145 3007787 3001139 ISSN 0163 5964 S2CID 6329628 Ji Yu Zhang Youyang Xie Xinfeng Li Shuangchen Wang Peiqi Hu Xing Zhang Youhui Xie Yuan January 27 2019 FPSA A Full System Stack Solution for Reconfigurable ReRAM based NN Accelerator Architecture OCLC 1106329050 a href Template Cite book html title Template Cite book cite book a CS1 maint multiple names authors list link Nandakumar S R Boybat Irem Joshi Vinay Piveteau Christophe Le Gallo Manuel Rajendran Bipin Sebastian Abu Eleftheriou Evangelos November 2019 Phase Change Memory Models for Deep Learning Training and Inference 2019 26th IEEE International Conference on Electronics Circuits and Systems ICECS IEEE pp 727 730 doi 10 1109 icecs46596 2019 8964852 ISBN 978 1 7281 0996 1 S2CID 210930121 Joshi Vinay Le Gallo Manuel Haefeli Simon Boybat Irem Nandakumar S R Piveteau Christophe Dazzi Martino Rajendran Bipin Sebastian Abu Eleftheriou Evangelos May 18 2020 Accurate deep neural network inference using computational phase change memory Nature Communications 11 1 2473 arXiv 1906 03138 Bibcode 2020NatCo 11 2473J doi 10 1038 s41467 020 16108 9 ISSN 2041 1723 PMC 7235046 PMID 32424184 Nvidia claims record performance for Hopper MLPerf debut Development of a machine vision system for weed control using precision chemical application PDF University of Florida CiteSeerX 10 1 1 7 342 Archived from the original PDF on June 23 2010 Self Driving Cars Technology amp Solutions from NVIDIA Automotive NVIDIA movidius powers worlds most intelligent drone March 16 2016 
External links edit

Nvidia Puts The Accelerator To The Metal With Pascal (The Next Platform)
Eyeriss Project (MIT)
https://alphaics.ai
