Math Cores

Matrix algebra is the manipulation of matrices, rectangular arrays of numbers. Matrices can be added and subtracted entry-wise, and multiplied according to a rule that corresponds to the composition of linear transformations. Matrices find many applications. Physics makes use of them in various domains, for example in geometrical optics and matrix mechanics. The latter also led to the more detailed study of matrices with an infinite number of rows and columns.
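For reference, the entry-wise addition and the row-by-column product rule mentioned above can be written as follows (these are the standard textbook definitions, not specific to any Concurrent EDA core):

```latex
(A + B)_{ij} = A_{ij} + B_{ij},
\qquad
(AB)_{ij} = \sum_{k=1}^{n} A_{ik}\, B_{kj}
```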

Examples of Matrix Manipulation:

There are numerous examples of how matrices are used to organize and process data. Matrices encoding the distances between nodes in a graph, such as cities connected by roads, are used in graph theory, and computer graphics uses matrices to encode projections of three-dimensional space onto a two-dimensional screen. Matrix calculus generalizes classical analytical notions such as derivatives and exponentials of functions to matrices. The latter is a recurring need in solving ordinary differential equations.
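As an illustration of the graph use case, the sketch below stores direct road distances between four hypothetical cities in a matrix and runs the standard Floyd-Warshall algorithm to obtain all shortest-path distances. The city data and the `all_pairs_shortest` helper are invented for illustration and are not related to any Concurrent EDA core.

```c
#include <stdio.h>

#define N 4
#define INF 1000000  /* stands in for "no direct road" */

/* Floyd-Warshall: turns a matrix of direct road distances between
 * cities into a matrix of shortest-path distances. */
void all_pairs_shortest(int d[N][N]) {
    for (int k = 0; k < N; k++)
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++)
                if (d[i][k] + d[k][j] < d[i][j])
                    d[i][j] = d[i][k] + d[k][j];
}

int main(void) {
    /* Hypothetical distances (e.g., km) between four cities;
     * INF means there is no direct road between them. */
    int dist[N][N] = {
        { 0,   5,   INF, 10  },
        { 5,   0,   3,   INF },
        { INF, 3,   0,   1   },
        { 10,  INF, 1,   0   },
    };
    all_pairs_shortest(dist);
    printf("Shortest route, city 0 to city 3: %d\n", dist[0][3]); /* 9 */
    return 0;
}
```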

Concurrent EDA can rapidly create matrix/math processing cores that operate at 1 to 100 billion operations per second. The following completed cores implement matrix/math processing functions and illustrate the types of cores that Concurrent EDA can create using our automation tools.

Core Name: Boolean Matrix Multiply
Description: A Boolean matrix core designed for matrix multiplication, optimized for matrices up to 1024x1024 elements.
Reference: Logical Matrix Manipulation
Performance and Area:
  • 85 Giga-ops/sec
  • 200 MHz
  • 4,337 LUTs

Core Name: Matrix Multiply
Description: A matrix core optimized for the multiplication of matrices.
Reference: Matrix Multiplication
Performance and Area:
  • 9.4 Giga-ops/sec
  • 200 MHz
  • 3,721 LUTs
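The listings above do not describe the cores' interfaces or internal architecture, so the following is only a behavioral sketch, in C, of the operation a Boolean matrix multiply core accelerates: multiplication is replaced by logical AND and accumulation by logical OR. An ordinary matrix product follows the same loop structure with numeric multiply and add.

```c
#include <stdbool.h>
#include <stddef.h>

/* Behavioral reference for Boolean matrix multiplication:
 * c[i][j] = OR over k of (a[i][k] AND b[k][j]).
 * Matrices are n x n, stored row-major; n may range up to 1024
 * to match the size range stated for the core above. */
void bool_matmul(size_t n, const bool *a, const bool *b, bool *c) {
    for (size_t i = 0; i < n; i++) {
        for (size_t j = 0; j < n; j++) {
            bool acc = false;
            for (size_t k = 0; k < n && !acc; k++)  /* stop once true */
                acc = a[i * n + k] && b[k * n + j];
            c[i * n + j] = acc;
        }
    }
}
```

For scale, the core's 85 Giga-ops/sec at 200 MHz corresponds to roughly 425 operations per clock cycle, which is why a dedicated hardware core far outpaces the sequential loop above.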
