BLIS (software)

Summary

In scientific computing, BLIS (BLAS-like Library Instantiation Software)[2][3][4][5] is an open-source framework for implementing a superset of BLAS (Basic Linear Algebra Subprograms) functionality for specific processor types. The framework was awarded the J. H. Wilkinson Prize for Numerical Software.[6] It exposes that functionality through two traditional application programming interfaces (APIs): the BLAS interface and the CBLAS interface. BLIS also includes two APIs native to the framework: a typed (BLAS-like) API and an object API. These native interfaces provide access to BLAS-like functionality that is not supported by, but closely related to, operations found in the BLAS (and CBLAS). The framework is developed and supported by the Science of High-Performance Computing (SHPC) group of the Oden Institute for Computational Engineering and Sciences at The University of Texas at Austin and the Matthews Research Group at Southern Methodist University.
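
As an illustration of the native typed API, the following minimal C sketch computes C := alpha*A*B + beta*C in double precision with bli_dgemm. It assumes column-major storage (row stride 1, column stride equal to the number of rows); the matrix contents and dimensions are placeholders, and exact details should be checked against the BLIS documentation.

    #include "blis.h"

    int main( void )
    {
        dim_t  m = 4, n = 5, k = 3;
        double alpha = 1.0, beta = 0.0;

        /* Column-major buffers: element (i,j) lives at index i + j*rows. */
        double A[ 4*3 ] = { 0 }, B[ 3*5 ] = { 0 }, C[ 4*5 ] = { 0 };
        A[ 0 ] = 1.0;  B[ 0 ] = 2.0;   /* placeholder data */

        /* C := alpha * A * B + beta * C via the BLIS typed API.
           Each matrix is described by a pointer plus a row stride and a
           column stride, which generalizes the single "leading dimension"
           argument of the traditional BLAS interface. */
        bli_dgemm( BLIS_NO_TRANSPOSE, BLIS_NO_TRANSPOSE,
                   m, n, k,
                   &alpha,
                   A, 1, m,
                   B, 1, k,
                   &beta,
                   C, 1, m );

        return 0;
    }

Through the compatibility layers, the same operation is also reachable as the standard BLAS routine dgemm (or cblas_dgemm), so existing BLAS-based code can typically link against BLIS unchanged.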

BLIS
Original author(s): Science of High-Performance Computing (SHPC) group, UT-Austin
Developer(s): Field Van Zee and Devin Matthews
Initial release: November 9, 2013
Stable release: 0.9.0 / April 5, 2022[1]
Repository: www.github.com/flame/blis
Operating system: Linux, Microsoft Windows, macOS, FreeBSD
Platform: x86-64, ARM, ARM64, ...
Type: Linear algebra library; implementation of BLAS
License: New/modified/3-clause BSD License
Website: www.github.com/flame/blis

BLIS yields high performance on many current CPU microarchitectures in both single-threaded and multithreaded modes of execution.[7] BLIS also offers competitive performance for some cases of matrix multiplication in which one or more matrix operands are unusually skinny and/or small.[8]

The framework achieves high performance by employing specialized kernels (typically written in GNU extended inline assembly syntax) together with cache and register blocking of matrix operands. BLIS also works on processors for which custom kernels have not yet been written; in those cases, the framework falls back on portable kernel implementations that run at a lower rate of computation.
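
For context, the following is a simplified sketch (not the actual BLIS source) of how such cache and register blocking is typically layered around a small MR x NR microkernel. The block sizes are illustrative placeholders, and the packing of blocks of A and B into contiguous buffers that BLIS performs is omitted.

    /* Simplified sketch of BLIS-style blocked GEMM: C += A*B, all matrices
     * column-major. NC, KC, and MC target the cache levels; MR x NR is the
     * register-blocked microtile that a hand-written microkernel updates. */

    #define NC 4096
    #define KC 256
    #define MC 128
    #define MR 8
    #define NR 4

    static void gemm_blocked( int m, int n, int k,
                              const double *A, int lda,
                              const double *B, int ldb,
                              double *C, int ldc )
    {
        for ( int jc = 0; jc < n; jc += NC )      /* 5th loop: columns of C and B  */
        for ( int pc = 0; pc < k; pc += KC )      /* 4th loop: shared k dimension  */
        for ( int ic = 0; ic < m; ic += MC )      /* 3rd loop: rows of C and A     */
        {
            int nc = ( n - jc < NC ) ? n - jc : NC;
            int kc = ( k - pc < KC ) ? k - pc : KC;
            int mc = ( m - ic < MC ) ? m - ic : MC;

            for ( int jr = 0; jr < nc; jr += NR ) /* 2nd loop: microtile columns */
            for ( int ir = 0; ir < mc; ir += MR ) /* 1st loop: microtile rows    */
            {
                int nr = ( nc - jr < NR ) ? nc - jr : NR;
                int mr = ( mc - ir < MR ) ? mc - ir : MR;

                /* Microkernel: in BLIS this MR x NR update is written in
                 * assembly and keeps the C microtile in registers. */
                for ( int j = 0; j < nr; j++ )
                for ( int p = 0; p < kc; p++ )
                for ( int i = 0; i < mr; i++ )
                    C[ (ic+ir+i) + (jc+jr+j)*ldc ] +=
                        A[ (ic+ir+i) + (pc+p)*lda ] * B[ (pc+p) + (jc+jr+j)*ldb ];
            }
        }
    }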

BLIS is sometimes described as a refactoring of GotoBLAS2, which was created by Kazushige Goto at the Texas Advanced Computing Center.[9]

References

  1. ^ Releases · flame/blis – GitHub
  2. ^ Van Zee, Field; van de Geijn, Robert (2015). "BLIS: A Framework for Rapidly Instantiating BLAS Functionality". ACM Transactions on Mathematical Software. 41 (3): 1–33. doi:10.1145/2764454.
  3. ^ Van Zee, Field; Smith, Tyler; Igual, Francisco; Smelyanskiy, Mikhail; Zhang, Xiangyi; Kistler, Michael; Austel, Vernon; Gunnels, John; Low, Tze Meng; Marker, Bryan; Killough, Lee; van de Geijn, Robert (2016). "The BLIS Framework: Experiments in Portability". ACM Transactions on Mathematical Software. 42 (2): 1–19. doi:10.1145/2755561.
  4. ^ Smith, Tyler M.; van de Geijn, Robert; Smelyanskiy, Mikhail; Hammond, Jeff R.; Van Zee, Field G. (2014). "Anatomy of High-Performance Many-Threaded Matrix Multiplication". 2014 IEEE 28th International Parallel and Distributed Processing Symposium. pp. 1049–1059. doi:10.1109/IPDPS.2014.110. ISBN 978-1-4799-3800-1.
  5. ^ Low, Tze Meng; Igual, Francisco; Smith, Tyler; Quintana, Enrique (2016). "Analytical Modeling is Enough for High-Performance BLIS". ACM Transactions on Mathematical Software. 43 (2): 1–18. doi:10.1145/2925987. hdl:10234/163618.
  6. ^ "James H. Wilkinson Prize for Numerical Software". Society for Industrial and Applied Mathematics (SIAM).
  7. ^ Performance.md, flame/blis on GitHub.
  8. ^ PerformanceSmall.md, flame/blis on GitHub.
  9. ^ Goto, Kazushige; van de Geijn, Robert A. (2008). "Anatomy of high-performance matrix multiplication". ACM Transactions on Mathematical Software. 34 (3): 1–25. doi:10.1145/1356052.1356053.

External links

  • Official website