MatGetMultiProcBlock#
Create multiple ‘parallel submatrices’ from a given Mat. Each submatrix can span multiple MPI processes.
Synopsis#
#include "petscmat.h"
PetscErrorCode MatGetMultiProcBlock(Mat mat, MPI_Comm subComm, MatReuse scall, Mat *subMat)
Collective on mat
Input Parameters#
mat - the matrix
subComm - the subcommunicator obtained by MPI_Comm_split(comm)
scall - either MAT_INITIAL_MATRIX or MAT_REUSE_MATRIX
Output Parameter#
subMat - parallel submatrix spanning the given subComm
Notes#
The submatrix partition across processes is dictated by subComm, a communicator obtained via MPI_Comm_split() or PetscSubcommCreate(). The subComm is not restricted to be grouped from consecutive ranks of the original communicator.

Due to the MPI_Comm_split() usage, the parallel layout of the submatrices maps directly to the layout of the original matrix with respect to the local row and column partitioning. Hence the ‘diagonal block’ of the original matrix maps naturally into the diagonal block of subMat and is used directly. The off-diagonal block, however, loses some columns, so it is reconstructed with MatSetValues().
This is used by PCBJACOBI when a single block spans multiple MPI ranks.
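A minimal sketch of the calling sequence, assuming a recent PETSc with the PetscCall() error-checking macros; the identity matrix and the even/odd rank split are arbitrary choices for illustration:

```c
#include <petscmat.h>

int main(int argc, char **argv)
{
  Mat         A, subA;
  MPI_Comm    subComm;
  PetscMPIInt rank;
  PetscInt    rstart, rend;

  PetscCall(PetscInitialize(&argc, &argv, NULL, NULL));
  PetscCallMPI(MPI_Comm_rank(PETSC_COMM_WORLD, &rank));

  /* Assemble a small parallel AIJ matrix (identity, 2 local rows per rank) */
  PetscCall(MatCreateAIJ(PETSC_COMM_WORLD, 2, 2, PETSC_DETERMINE, PETSC_DETERMINE, 1, NULL, 0, NULL, &A));
  PetscCall(MatGetOwnershipRange(A, &rstart, &rend));
  for (PetscInt i = rstart; i < rend; i++) PetscCall(MatSetValue(A, i, i, 1.0, INSERT_VALUES));
  PetscCall(MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY));
  PetscCall(MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY));

  /* Split the world communicator; here even and odd ranks form two groups,
     showing that the grouping need not use consecutive original ranks */
  PetscCallMPI(MPI_Comm_split(PETSC_COMM_WORLD, rank % 2, rank, &subComm));

  /* First call uses MAT_INITIAL_MATRIX to create subA; a later call with the
     same nonzero pattern could pass MAT_REUSE_MATRIX to refill it in place */
  PetscCall(MatGetMultiProcBlock(A, subComm, MAT_INITIAL_MATRIX, &subA));

  PetscCall(MatDestroy(&subA));
  PetscCallMPI(MPI_Comm_free(&subComm));
  PetscCall(MatDestroy(&A));
  PetscCall(PetscFinalize());
  return 0;
}
```

Each group of processes ends up with its own subA living on subComm, partitioned the same way as the corresponding rows of A.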
See Also#
MatCreateRedundantMatrix(), MatCreateSubMatrices(), PCBJACOBI
Level#
advanced
Location#
Implementations#
MatGetMultiProcBlock_MPIAIJ in src/mat/impls/aij/mpi/mpb_aij.c
MatGetMultiProcBlock_SeqAIJ in src/mat/impls/aij/seq/aij.c
MatGetMultiProcBlock_MPIBAIJ in src/mat/impls/baij/mpi/mpb_baij.c