
Data Orchestration in Deep Learning Accelerators - Tushar Krishna - Book

About Data Orchestration in Deep Learning Accelerators

This Synthesis Lecture focuses on techniques for efficient data orchestration within DNN accelerators. The end of Moore's Law, coupled with the rapid growth of deep learning and other AI applications, has led to the emergence of custom Deep Neural Network (DNN) accelerators for energy-efficient inference on edge devices. Modern DNNs have millions of parameters and involve billions of computations; this necessitates extensive data movement from memory to on-chip processing engines. The cost of data movement today surpasses the cost of the actual computation; DNN accelerators therefore require careful orchestration of data across on-chip compute, network, and memory elements to minimize the number of accesses to external DRAM. The book covers DNN dataflows, data reuse, buffer hierarchies, networks-on-chip, and automated design-space exploration. It concludes with the data orchestration challenges posed by compressed and sparse DNNs, and with future trends. The target audience is students, engineers, and researchers interested in designing high-performance and low-energy accelerators for DNN inference.
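As a rough illustration of why dataflow choice matters (a minimal, hypothetical sketch, not taken from the book; all sizes and function names are assumptions), the snippet below counts DRAM reads for a fully connected layer Y = X @ W under a no-reuse schedule versus a weight-stationary schedule with an on-chip buffer:

```python
# Hypothetical sketch: DRAM-read counts for a fully connected layer
# Y[b, o] = sum_i X[b, i] * W[i, o], under two data-orchestration strategies.

def dram_reads_no_reuse(batch, in_dim, out_dim):
    # Every multiply-accumulate re-fetches one input and one weight from DRAM.
    return 2 * batch * in_dim * out_dim

def dram_reads_weight_stationary(batch, in_dim, out_dim):
    # Assumes the weight matrix fits in the on-chip buffer: each weight is
    # fetched from DRAM once and reused across the whole batch, and each
    # input is fetched once and reused against all resident weights.
    return in_dim * out_dim + batch * in_dim

if __name__ == "__main__":
    b, i, o = 64, 1024, 1024  # toy layer sizes (assumed)
    print("no reuse:         ", dram_reads_no_reuse(b, i, o))
    print("weight-stationary:", dram_reads_weight_stationary(b, i, o))
```

With these toy sizes the weight-stationary schedule issues orders of magnitude fewer DRAM reads, which is the kind of reuse analysis the book develops systematically across dataflows and buffer hierarchies.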

  • Language: English
  • ISBN: 9783031006395
  • Binding: Paperback
  • Pages: 168
  • Published: 18 August 2020
  • Dimensions: 191 x 10 x 235 mm
  • Weight: 327 g



