Level-3 BLAS and LU Factorization on a Matrix Processor

Ahmed S. Zekri and Stanislav G. Sedukhin
Department of Information Systems, The University of Aizu

Category: Numerical Computation
Permalink: http://id.nii.ac.jp/1001/00018204/
Full text: https://ipsj.ixsq.nii.ac.jp/ej/?action=repository_action_common_download&item_id=18204&item_no=1&attribute_id=1&file_no=1
Copyright (c) 2008 by the Information Processing Society of Japan

Abstract: As increasing clock frequency approaches its physical limits, a good approach to enhancing performance is to increase parallelism by integrating more cores as coprocessors to general-purpose processors in order to handle the different workloads in scientific, engineering, and signal processing applications. In this paper, we propose a many-core matrix processor model consisting of a scalar unit augmented with b×b simple cores tightly connected in a 2D torus matrix unit to accelerate matrix-based kernels. Data load/store is overlapped with computing using a decoupled data access unit that moves b×b blocks of data between memory and the two (scalar and matrix) processing units. The matrix unit mainly processes fine-grained b×b matrix multiply-add (MMA) operations. We formulate the data alignment operations, including matrix transposition and skewing, as MMA operations in order to overlap them with data load/store. Two fundamental linear algebra algorithms are designed and analytically evaluated on the proposed matrix processor: the Level-3 BLAS kernel GEMM, and LU factorization with partial pivoting, the main step in solving linear systems of equations. For the GEMM kernel, the maximum speed of computing, measured in FLOPs/cycle, is approached for different matrix sizes n and block sizes b. The speed of the LU factorization for relatively large values of n ranges from around 50–90% of the maximum speed, depending on the model parameters.
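To illustrate the kind of fine-grained b×b MMA decomposition the abstract describes, here is a minimal sketch (not the authors' implementation, and without the torus data movement or overlapped load/store) of GEMM expressed as a sequence of b×b matrix multiply-add operations:

```python
# Illustrative sketch only: GEMM decomposed into fine-grained b-by-b
# matrix multiply-add (MMA) operations, C_ij += A_ik * B_kj -- the
# primitive that the proposed matrix unit is described as executing.
# The function name and structure are assumptions for illustration.
import numpy as np

def blocked_gemm(A, B, C, b):
    """Update C += A @ B using b-by-b block MMA operations.

    Assumes square matrices whose order n is a multiple of b.
    """
    n = A.shape[0]
    assert n % b == 0, "matrix order must be a multiple of the block size"
    for i in range(0, n, b):
        for j in range(0, n, b):
            for k in range(0, n, b):
                # One fine-grained MMA: multiply two b-by-b blocks and
                # accumulate the product into the matching block of C.
                C[i:i+b, j:j+b] += A[i:i+b, k:k+b] @ B[k:k+b, j:j+b]
    return C
```

In the paper's model, each such block MMA would be executed on the b×b torus of cores while the decoupled data access unit streams the next b×b blocks, which is what allows data alignment (transposition, skewing) to be hidden behind computation.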
Overall, the analytical results show the merits of using the matrix unit for accelerating matrix-based applications.

Published in: IPSJ Transactions on Advanced Computing Systems (ACS), Vol. 49, No. SIG 2 (ACS 21), pp. 37-52, 2008-03-15. ISSN 1882-7829. NCID AA11833852.