
Tensor cores

Tensor cores are specialized hardware blocks in a graphics processor, first introduced by NVIDIA in the Volta architecture. Because they dramatically accelerate matrix arithmetic, tensor cores have become an important part of many forms of data processing and machine learning algorithms.

How tensor cores work. Tensor cores perform mixed-precision matrix multiply-accumulate operations: in a single operation, a core computes D = A × B + C on small matrix tiles, carrying out many multiplications and additions simultaneously. This capability is an advantage for artificial intelligence (AI), deep learning, and neural network training, where fast and efficient data processing is paramount.
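The multiply-accumulate primitive can be sketched in NumPy. This is a minimal emulation, not real tensor-core code: it assumes a 4×4 tile, half-precision (float16) inputs A and B, and single-precision (float32) accumulation, which is the scheme Volta-class tensor cores use.

```python
import numpy as np

# Sketch of the tensor-core primitive D = A x B + C on a 4x4 tile.
# A and B are stored in half precision; products are accumulated
# in single precision, as on Volta-class hardware.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)).astype(np.float16)
B = rng.standard_normal((4, 4)).astype(np.float16)
C = np.zeros((4, 4), dtype=np.float32)

# Multiply fp16 inputs, accumulate in fp32
# (a single fused matrix operation on real hardware).
D = A.astype(np.float32) @ B.astype(np.float32) + C
print(D.dtype, D.shape)
```

On a GPU the whole tile operation completes in one instruction; here the upcast to float32 makes the accumulation precision explicit.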

Comparison of tensor cores and traditional cores

  • Parallel processing. In contrast to traditional cores, which perform one operation at a time, tensor cores can perform hundreds of operations simultaneously.
  • Mixed-precision calculations. Tensor cores can multiply low-precision inputs while accumulating the results at higher precision, an ability traditional cores lack.
  • Optimized for AI. Unlike traditional cores, tensor cores are specifically designed to accelerate AI and deep learning tasks.
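The mixed-precision point above can be illustrated numerically. The sketch below (illustrative only; the values and loop are assumptions, not hardware behavior) sums 10,000 products of float16 values two ways: accumulating purely in float16, and accumulating in float32 as tensor cores do. The pure-float16 accumulator stalls far below the true value of ~100 once the running sum grows large enough that each small increment rounds away.

```python
import numpy as np

n = 10_000
x = np.full(n, 0.1, dtype=np.float16)
y = np.full(n, 0.1, dtype=np.float16)

# Pure fp16 accumulation: once the sum is large, each tiny
# product rounds to zero against it, and the total stalls.
acc16 = np.float16(0.0)
for a, b in zip(x, y):
    acc16 = np.float16(acc16 + a * b)

# Mixed precision: fp16 inputs, fp32 accumulator (the tensor-core scheme).
acc32 = np.float32(0.0)
for a, b in zip(x, y):
    acc32 += np.float32(a) * np.float32(b)

print(float(acc16), float(acc32))  # exact answer is ~100
```

The float32 accumulator lands within a fraction of a percent of 100, while the float16 accumulator falls far short, which is why mixed-precision training keeps weights and activations in half precision but accumulates dot products at higher precision.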

Applications of tensor cores. By speeding up matrix operations, tensor cores accelerate the training and inference of complex neural networks, which has contributed to advances in various areas of artificial intelligence, including natural language processing, image recognition, and autonomous vehicles.
