COMBINATORIAL_BLAS  1.6
par Namespace Reference

Collection of Generic Parallel Functions: Sorting, Partitioning, Searching,... More...

Classes

class  Mpi_datatype
 An abstract class used for communicating messages using user-defined datatypes. The user must implement the static member function "value()" that returns the MPI_Datatype corresponding to this user-defined datatype. More...
 
class  Mpi_datatype< bool >
 A template specialization of the abstract class Mpi_datatype. This can be used for communicating messages of type "bool". More...
 
class  Mpi_datatype< IndexHolder< T > >
 
class  Mpi_datatype< std::complex< T > >
 
class  Mpi_pairtype
 

Functions

template<typename T >
int Mpi_Isend (T *buf, int count, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 
template<typename T >
int Mpi_Issend (T *buf, int count, int dest, int tag, MPI_Comm comm, MPI_Request *request)
 
template<typename T >
int Mpi_Recv (T *buf, int count, int source, int tag, MPI_Comm comm, MPI_Status *status)
 
template<typename T >
int Mpi_Irecv (T *buf, int count, int source, int tag, MPI_Comm comm, MPI_Request *request)
 
template<typename T >
int Mpi_Gather (T *sendBuffer, T *recvBuffer, int count, int root, MPI_Comm comm)
 
template<typename T , typename S >
int Mpi_Sendrecv (T *sendBuf, int sendCount, int dest, int sendTag, S *recvBuf, int recvCount, int source, int recvTag, MPI_Comm comm, MPI_Status *status)
 
template<typename T >
int Mpi_Bcast (T *buffer, int count, int root, MPI_Comm comm)
 
template<typename T >
int Mpi_Scan (T *sendbuf, T *recvbuf, int count, MPI_Op op, MPI_Comm comm)
 
template<typename T >
int Mpi_Reduce (T *sendbuf, T *recvbuf, int count, MPI_Op op, int root, MPI_Comm comm)
 
template<typename T >
int Mpi_Allreduce (T *sendbuf, T *recvbuf, int count, MPI_Op op, MPI_Comm comm)
 
template<typename T >
int Mpi_Alltoall (T *sendbuf, T *recvbuf, int count, MPI_Comm comm)
 
template<typename T >
int Mpi_Allgatherv (T *sendbuf, int sendcount, T *recvbuf, int *recvcounts, int *displs, MPI_Comm comm)
 
template<typename T >
int Mpi_Allgather (T *sendbuf, T *recvbuf, int count, MPI_Comm comm)
 
template<typename T >
int Mpi_Alltoallv_sparse (T *sendbuf, int *sendcnts, int *sdispls, T *recvbuf, int *recvcnts, int *rdispls, MPI_Comm comm)
 
template<typename T >
int Mpi_Alltoallv_dense (T *sendbuf, int *sendcnts, int *sdispls, T *recvbuf, int *recvcnts, int *rdispls, MPI_Comm comm)
 
template<typename T >
unsigned int defaultWeight (const T *a)
 
template<typename T >
int partitionW (std::vector< T > &vec, unsigned int(*getWeight)(const T *), MPI_Comm comm)
 A parallel weighted partitioning function. In our implementation, we do not pose any restriction on the input or the number of processors. This function can be used with an odd number of processors as well. Some processors can pass an empty vector as input. The relative ordering of the elements is preserved. More...
 
template<typename T >
void rankSamples (std::vector< T > &arr, std::vector< T > samples, MPI_Comm comm)
 
template<typename T >
std::vector< T > Sorted_Sample_Select (std::vector< T > &arr, unsigned int kway, std::vector< unsigned int > &min_idx, std::vector< unsigned int > &max_idx, std::vector< DendroIntL > &splitter_ranks, MPI_Comm comm)
 
template<typename T >
std::vector< T > Sorted_approx_Select (std::vector< T > &arr, unsigned int k, MPI_Comm comm)
 
template<typename T >
std::vector< std::pair< T, DendroIntL > > Sorted_approx_Select_skewed (std::vector< T > &arr, unsigned int k, MPI_Comm comm)
 A variant of Sorted_approx_Select intended to handle skewed input distributions. More...
 
template<typename T >
std::vector< T > GetRangeMean (std::vector< T > &arr, std::vector< unsigned int > range_min, std::vector< unsigned int > range_max, MPI_Comm comm)
 
template<typename T >
std::vector< T > GuessRangeMedian (std::vector< T > &arr, std::vector< unsigned int > range_min, std::vector< unsigned int > range_max, MPI_Comm comm)
 
template<typename T >
std::vector< T > Sorted_k_Select (std::vector< T > &arr, std::vector< unsigned int > min_idx, std::vector< unsigned int > max_idx, std::vector< DendroIntL > K, std::vector< T > guess, MPI_Comm comm)
 A parallel k-selection algorithm. More...
 
template<typename T >
int HyperQuickSort (std::vector< T > &in, std::vector< T > &out, MPI_Comm comm)
 A parallel hyper quick sort implementation. More...
 
template<typename T >
int HyperQuickSort_kway (std::vector< T > &in, std::vector< T > &out, MPI_Comm comm)
 
template<typename T >
int HyperQuickSort_cav (std::vector< T > &in, std::vector< T > &out, MPI_Comm comm)
 
template<typename T >
int HyperQuickSort (std::vector< T > &arr, MPI_Comm comm)
 
template<typename T >
int HyperQuickSort_kway (std::vector< T > &in, MPI_Comm comm)
 
template<typename T >
int HyperQuickSort_cav (std::vector< T > &in, MPI_Comm comm)
 
template<typename T >
int sampleSort (std::vector< T > &in, std::vector< T > &out, MPI_Comm comm)
 A parallel sample sort implementation. In our implementation, we do not pose any restriction on the input or the number of processors. This function can be used with an odd number of processors as well. Some processors can pass an empty vector as input. If the total number of elements in the vector (globally) is fewer than 10*p^2, where p is the number of processors, then we will use bitonic sort instead of sample sort to sort the vector. We use a parallel bitonic sort to sort the samples in the sample sort algorithm. Hence, the complexity of the algorithm is O((n/p) log(n/p)) + O(p log p), where n is the global length of the vector and p is the number of processors. More...
 
template<typename T >
int sampleSort (std::vector< T > &in, MPI_Comm comm)
 
int splitComm2way (bool iAmEmpty, MPI_Comm *new_comm, MPI_Comm orig_comm)
 Splits a communication group into two: one containing processors that passed 'false' for the parameter 'iAmEmpty' and the other containing processors that passed 'true'. Both groups are sorted in ascending order of their ranks in the old comm. More...
 
int splitComm2way (const bool *isEmptyList, MPI_Comm *new_comm, MPI_Comm orig_comm)
 Splits a communication group into two depending on the values in isEmptyList. Both groups are sorted in ascending order of their ranks in the old comm. All processors must call this function with the same 'isEmptyList' array. More...
 
int splitCommUsingSplittingRank (int splittingRank, MPI_Comm *new_comm, MPI_Comm orig_comm)
 
unsigned int splitCommBinary (MPI_Comm orig_comm, MPI_Comm *new_comm)
 Splits a communication group into two: the first containing a power-of-two number of processors and the other containing the remainder. The first group is sorted in ascending order of their ranks in the old comm; the second group is sorted in descending order. More...
 
unsigned int splitCommBinaryNoFlip (MPI_Comm orig_comm, MPI_Comm *new_comm)
 Splits a communication group into two: the first containing a power-of-two number of processors and the other containing the remainder. Both groups are sorted in ascending order of their ranks in the old comm. More...
 
template<typename T >
void MergeLists (std::vector< T > &listA, std::vector< T > &listB, int KEEP_WHAT)
 Merges lists A and B, retaining either the low or the high values in list A. More...
 
template<typename T >
void MergeSplit (std::vector< T > &local_list, int which_keys, int partner, MPI_Comm comm)
 The main operation in the parallel bitonic sort algorithm. This implements the compare-split operation. More...
 
template<typename T >
void Par_bitonic_sort_incr (std::vector< T > &local_list, int proc_set_size, MPI_Comm comm)
 
template<typename T >
void Par_bitonic_sort_decr (std::vector< T > &local_list, int proc_set_size, MPI_Comm comm)
 
template<typename T >
void Par_bitonic_merge_incr (std::vector< T > &local_list, int proc_set_size, MPI_Comm comm)
 
template<typename T >
void bitonicSort_binary (std::vector< T > &in, MPI_Comm comm)
 An implementation of parallel bitonic sort that expects the number of processors to be a power of 2. However, unlike most implementations, we do not require the length of the vector (either locally or globally) to be a power of 2, or even an even number. Moreover, each processor can call this with a different number of elements. However, we do expect 'in' to have at least one element on each processor. More...
 
template<typename T >
void bitonicSort (std::vector< T > &in, MPI_Comm comm)
 An implementation of parallel bitonic sort that does not require the number of processors to be a power of 2; in fact, the number of processors can even be odd. We also do not require the length of the vector (either locally or globally) to be a power of 2, or even an even number. Moreover, each processor can call this with a different number of elements. However, we do expect 'in' to have at least one element on each processor. This recursively calls the function bitonicSort_binary, followed by a special parallel merge. More...
 
int AdjustCommunicationPattern (std::vector< int > &send_sizes, std::vector< int > &send_partners, std::vector< int > &recv_sizes, std::vector< int > &recv_partners, MPI_Comm comm)
 

Detailed Description

Collection of Generic Parallel Functions: Sorting, Partitioning, Searching,...

Author
Hari Sundar hsundar@gmail.com
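
Most of the Mpi_* wrappers below obtain the MPI datatype for the template parameter T through Mpi_datatype<T>::value(). The following is a minimal sketch of what a user-defined specialization might look like; the Particle struct, the header comment, and the use of MPI_Type_contiguous over raw bytes are illustrative assumptions, not the library's own code.

#include <mpi.h>
// plus the header that declares par::Mpi_datatype (name depends on the install)

// Hypothetical user-defined type.
struct Particle {
  double x, y, z;
  int id;
};

namespace par {
  // Sketch: value() must return an MPI_Datatype describing Particle.
  template <>
  class Mpi_datatype<Particle> {
  public:
    static MPI_Datatype value() {
      static bool first = true;
      static MPI_Datatype type;
      if (first) {
        first = false;
        // Treating the struct as a block of bytes keeps the sketch short;
        // a fully portable version would use MPI_Type_create_struct.
        MPI_Type_contiguous(sizeof(Particle), MPI_BYTE, &type);
        MPI_Type_commit(&type);
      }
      return type;
    }
  };
}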

Function Documentation

◆ AdjustCommunicationPattern()

int par::AdjustCommunicationPattern ( std::vector< int > &  send_sizes,
std::vector< int > &  send_partners,
std::vector< int > &  recv_sizes,
std::vector< int > &  recv_partners,
MPI_Comm  comm 
)

Definition at line 279 of file parUtils.cpp.

◆ bitonicSort()

template<typename T >
void par::bitonicSort ( std::vector< T > &  in,
MPI_Comm  comm 
)

An implementation of parallel bitonic sort that does not require the number of processors to be a power of 2; in fact, the number of processors can even be odd. We also do not require the length of the vector (either locally or globally) to be a power of 2, or even an even number. Moreover, each processor can call this with a different number of elements. However, we do expect 'in' to have at least one element on each processor. This recursively calls the function bitonicSort_binary, followed by a special parallel merge.

Parameters
in: the vector to be sorted
Author
Hari Sundar
See also
bitonicSort_binary
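
A minimal usage sketch, assuming the element type long already has an Mpi_datatype specialization; note that every rank passes a differently sized (but non-empty) local vector:

#include <cstddef>
#include <mpi.h>
#include <vector>
// plus the parUtils header that declares par::bitonicSort

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Local sizes differ per rank but are always >= 1, as required.
  std::vector<long> in(rank + 1);
  for (std::size_t i = 0; i < in.size(); ++i)
    in[i] = static_cast<long>((rank * 7919 + i * 104729) % 1000);

  par::bitonicSort(in, MPI_COMM_WORLD);   // sorts in place across all ranks

  MPI_Finalize();
  return 0;
}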

◆ bitonicSort_binary()

template<typename T >
void par::bitonicSort_binary ( std::vector< T > &  in,
MPI_Comm  comm 
)

An implementation of parallel bitonic sort that expects the number of processors to be a power of 2. However, unlike most implementations, we do not require the length of the vector (either locally or globally) to be a power of 2, or even an even number. Moreover, each processor can call this with a different number of elements. However, we do expect 'in' to have at least one element on each processor.

Parameters
in: the vector to be sorted
Author
Hari Sundar

◆ defaultWeight()

template<typename T >
unsigned int par::defaultWeight ( const T *  a)

◆ GetRangeMean()

template<typename T >
std::vector<T> par::GetRangeMean ( std::vector< T > &  arr,
std::vector< unsigned int >  range_min,
std::vector< unsigned int >  range_max,
MPI_Comm  comm 
)

◆ GuessRangeMedian()

template<typename T >
std::vector<T> par::GuessRangeMedian ( std::vector< T > &  arr,
std::vector< unsigned int >  range_min,
std::vector< unsigned int >  range_max,
MPI_Comm  comm 
)

◆ HyperQuickSort() [1/2]

template<typename T >
int par::HyperQuickSort ( std::vector< T > &  in,
std::vector< T > &  out,
MPI_Comm  comm 
)

A parallel hyper quick sort implementation.

Author
Dhairya Malhotra
Parameters
in: the input vector
out: the output vector
comm: the communicator

◆ HyperQuickSort() [2/2]

template<typename T >
int par::HyperQuickSort ( std::vector< T > &  arr,
MPI_Comm  comm 
)

◆ HyperQuickSort_cav() [1/2]

template<typename T >
int par::HyperQuickSort_cav ( std::vector< T > &  in,
std::vector< T > &  out,
MPI_Comm  comm 
)

◆ HyperQuickSort_cav() [2/2]

template<typename T >
int par::HyperQuickSort_cav ( std::vector< T > &  in,
MPI_Comm  comm 
)

◆ HyperQuickSort_kway() [1/2]

template<typename T >
int par::HyperQuickSort_kway ( std::vector< T > &  in,
std::vector< T > &  out,
MPI_Comm  comm 
)

◆ HyperQuickSort_kway() [2/2]

template<typename T >
int par::HyperQuickSort_kway ( std::vector< T > &  in,
MPI_Comm  comm 
)

◆ MergeLists()

template<typename T >
void par::MergeLists ( std::vector< T > &  listA,
std::vector< T > &  listB,
int  KEEP_WHAT 
)

Merges lists A and B, retaining either the low or the high values in list A.

Author
Hari Sundar
Parameters
listA: input list, and where the output is stored.
listB: second input list.
KEEP_WHAT: determines whether to retain the high or the low values from A and B. One of KEEP_HIGH or KEEP_LOW.

Merging the two lists when their sizes are not the same is a bit involved. The key condition is that all elements that are less than max(min(A), min(B)) are retained by the KEEP_LOW processor, and similarly all elements that are larger than min(max(A), max(B)) are retained by the KEEP_HIGH processor.

The reason for this is that, on the Keep_Low side,

max(min(A), min(B)) > min(A) > max(A-)

and similarly on the Keep_high side,

min(max(A), max(B)) < max(A) < min(A+)

which guarantees that the merged lists remain bitonic.
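
The keep-low/keep-high rule can be pictured locally with std::merge. The sketch below is only an illustration of the idea; the KEEP_LOW/KEEP_HIGH values and the assumption that the retained list keeps listA's original length are stand-ins, not the library's actual contract.

#include <algorithm>
#include <cstddef>
#include <vector>

enum { KEEP_LOW = 0, KEEP_HIGH = 1 };   // placeholder values for the sketch

// Merge two already-sorted lists (as in a bitonic compare-split), then keep
// either the smallest or the largest listA.size() elements in listA.
void mergeKeep(std::vector<int>& listA, const std::vector<int>& listB, int keepWhat) {
  std::vector<int> merged(listA.size() + listB.size());
  std::merge(listA.begin(), listA.end(), listB.begin(), listB.end(), merged.begin());
  const std::size_t n = listA.size();
  if (keepWhat == KEEP_LOW)
    listA.assign(merged.begin(), merged.begin() + n);    // smallest n values
  else
    listA.assign(merged.end() - n, merged.end());        // largest n values
}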

◆ MergeSplit()

template<typename T >
void par::MergeSplit ( std::vector< T > &  local_list,
int  which_keys,
int  partner,
MPI_Comm  comm 
)

The main operation in the parallel bitonic sort algorithm. This implements the compare-split operation.

Author
Hari Sundar
Parameters
which_keys: one of KEEP_HIGH or KEEP_LOW
partner: the processor with which to Merge and Split.
local_list: the input vector
comm: the communicator
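
As a usage illustration only: one compare-split step between adjacent rank pairs could look like the sketch below. The pairing of rank r with r^1 is an assumption made for the example; inside the library the partner and direction are chosen by the bitonic sort driver routines.

// One compare-split step between adjacent rank pairs (illustrative pairing).
// KEEP_LOW / KEEP_HIGH are the constants referred to in the parameter list above.
template <typename T>
void compareSplitStep(std::vector<T>& local_list, MPI_Comm comm) {
  int rank, size;
  MPI_Comm_rank(comm, &rank);
  MPI_Comm_size(comm, &size);

  int partner = rank ^ 1;                 // pairs (0,1), (2,3), ...
  if (partner < size) {
    // The lower rank of each pair keeps the small keys, the higher rank the large keys.
    int which = (rank < partner) ? KEEP_LOW : KEEP_HIGH;
    par::MergeSplit(local_list, which, partner, comm);
  }
}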

◆ Mpi_Allgather()

template<typename T >
int par::Mpi_Allgather ( T *  sendbuf,
T *  recvbuf,
int  count,
MPI_Comm  comm 
)

◆ Mpi_Allgatherv()

template<typename T >
int par::Mpi_Allgatherv ( T *  sendbuf,
int  sendcount,
T *  recvbuf,
int *  recvcounts,
int *  displs,
MPI_Comm  comm 
)

◆ Mpi_Allreduce()

template<typename T >
int par::Mpi_Allreduce ( T *  sendbuf,
T *  recvbuf,
int  count,
MPI_Op  op,
MPI_Comm  comm 
)

◆ Mpi_Alltoall()

template<typename T >
int par::Mpi_Alltoall ( T *  sendbuf,
T *  recvbuf,
int  count,
MPI_Comm  comm 
)

◆ Mpi_Alltoallv_dense()

template<typename T >
int par::Mpi_Alltoallv_dense ( T *  sendbuf,
int *  sendcnts,
int *  sdispls,
T *  recvbuf,
int *  recvcnts,
int *  rdispls,
MPI_Comm  comm 
)

◆ Mpi_Alltoallv_sparse()

template<typename T >
int par::Mpi_Alltoallv_sparse ( T *  sendbuf,
int *  sendcnts,
int *  sdispls,
T *  recvbuf,
int *  recvcnts,
int *  rdispls,
MPI_Comm  comm 
)

◆ Mpi_Bcast()

template<typename T >
int par::Mpi_Bcast ( T *  buffer,
int  count,
int  root,
MPI_Comm  comm 
)

◆ Mpi_Gather()

template<typename T >
int par::Mpi_Gather ( T *  sendBuffer,
T *  recvBuffer,
int  count,
int  root,
MPI_Comm  comm 
)

◆ Mpi_Irecv()

template<typename T >
int par::Mpi_Irecv ( T *  buf,
int  count,
int  source,
int  tag,
MPI_Comm  comm,
MPI_Request *  request 
)

◆ Mpi_Isend()

template<typename T >
int par::Mpi_Isend ( T *  buf,
int  count,
int  dest,
int  tag,
MPI_Comm  comm,
MPI_Request *  request 
)

◆ Mpi_Issend()

template<typename T >
int par::Mpi_Issend ( T *  buf,
int  count,
int  dest,
int  tag,
MPI_Comm  comm,
MPI_Request *  request 
)

◆ Mpi_Recv()

template<typename T >
int par::Mpi_Recv ( T *  buf,
int  count,
int  source,
int  tag,
MPI_Comm  comm,
MPI_Status *  status 
)

◆ Mpi_Reduce()

template<typename T >
int par::Mpi_Reduce ( T *  sendbuf,
T *  recvbuf,
int  count,
MPI_Op  op,
int  root,
MPI_Comm  comm 
)

◆ Mpi_Scan()

template<typename T >
int par::Mpi_Scan ( T *  sendbuf,
T *  recvbuf,
int  count,
MPI_Op  op,
MPI_Comm  comm 
)

◆ Mpi_Sendrecv()

template<typename T , typename S >
int par::Mpi_Sendrecv ( T *  sendBuf,
int  sendCount,
int  dest,
int  sendTag,
S *  recvBuf,
int  recvCount,
int  source,
int  recvTag,
MPI_Comm  comm,
MPI_Status *  status 
)

◆ Par_bitonic_merge_incr()

template<typename T >
void par::Par_bitonic_merge_incr ( std::vector< T > &  local_list,
int  proc_set_size,
MPI_Comm  comm 
)
Author
Hari Sundar

◆ Par_bitonic_sort_decr()

template<typename T >
void par::Par_bitonic_sort_decr ( std::vector< T > &  local_list,
int  proc_set_size,
MPI_Comm  comm 
)
Author
Hari Sundar

◆ Par_bitonic_sort_incr()

template<typename T >
void par::Par_bitonic_sort_incr ( std::vector< T > &  local_list,
int  proc_set_size,
MPI_Comm  comm 
)
Author
Hari Sundar

◆ partitionW()

template<typename T >
int par::partitionW ( std::vector< T > &  vec,
unsigned int(*)(const T *)  getWeight,
MPI_Comm  comm 
)

A parallel weighted partitioning function. In our implementation, we do not pose any restriction on the input or the number of processors. This function can be used with an odd number of processors as well. Some processors can pass an empty vector as input. The relative ordering of the elements is preserved.

Author
Hari Sundar
Rahul Sampath
Parameters
vec: the input vector
getWeight: function pointer to compute the weight of each element. If you pass NULL, then every element will get a weight equal to 1.
comm: the communicator
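
A usage sketch with a hypothetical weight function (the element type int and the function myWeight are assumptions made for the example):

#include <mpi.h>
#include <vector>
// plus the parUtils header that declares par::partitionW

// Hypothetical weight function: elements above 100 count double.
unsigned int myWeight(const int* a) {
  return (*a > 100) ? 2u : 1u;
}

void repartition(std::vector<int>& vec, MPI_Comm comm) {
  // Balance the total weight across ranks while preserving relative order.
  par::partitionW(vec, myWeight, comm);

  // Passing NULL instead of myWeight gives every element weight 1,
  // i.e. a plain balance by element count.
  // par::partitionW(vec, NULL, comm);
}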

◆ rankSamples()

template<typename T >
void par::rankSamples ( std::vector< T > &  arr,
std::vector< T >  samples,
MPI_Comm  comm 
)

◆ sampleSort() [1/2]

template<typename T >
int par::sampleSort ( std::vector< T > &  in,
std::vector< T > &  out,
MPI_Comm  comm 
)

A parallel sample sort implementation. In our implementation, we do not pose any restriction on the input or the number of processors. This function can be used with an odd number of processors as well. Some processors can pass an empty vector as input. If the total number of elements in the vector (globally) is fewer than 10*p^2, where p is the number of processors, then we will use bitonic sort instead of sample sort to sort the vector. We use a parallel bitonic sort to sort the samples in the sample sort algorithm. Hence, the complexity of the algorithm is O((n/p) log(n/p)) + O(p log p), where n is the global length of the vector and p is the number of processors.

Author
Hari Sundar
Parameters
in: the input vector
out: the output vector
comm: the communicator
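
A minimal usage sketch, assuming the element type long already has an Mpi_datatype specialization:

#include <mpi.h>
#include <vector>
// plus the parUtils header that declares par::sampleSort

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  // Locally unsorted input; some ranks may legitimately pass an empty vector.
  std::vector<long> in, out;
  for (int i = 0; i < 1000; ++i)
    in.push_back(static_cast<long>((i * 2654435761u + rank) & 0xffff));

  par::sampleSort(in, out, MPI_COMM_WORLD);
  // 'out' is now globally sorted: sorted within each rank, and every element
  // on rank r is <= every element on rank r+1.

  MPI_Finalize();
  return 0;
}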

◆ sampleSort() [2/2]

template<typename T >
int par::sampleSort ( std::vector< T > &  in,
MPI_Comm  comm 
)

◆ Sorted_approx_Select()

template<typename T >
std::vector<T> par::Sorted_approx_Select ( std::vector< T > &  arr,
unsigned int  k,
MPI_Comm  comm 
)

◆ Sorted_approx_Select_skewed()

template<typename T >
std::vector<std::pair<T, DendroIntL> > par::Sorted_approx_Select_skewed ( std::vector< T > &  arr,
unsigned int  k,
MPI_Comm  comm 
)

A variant of Sorted_approx_Select intended to handle skewed input distributions.

◆ Sorted_k_Select()

template<typename T >
std::vector<T> par::Sorted_k_Select ( std::vector< T > &  arr,
std::vector< unsigned int >  min_idx,
std::vector< unsigned int >  max_idx,
std::vector< DendroIntL >  K,
std::vector< T >  guess,
MPI_Comm  comm 
)

A parallel k-selection algorithm.

Author
Hari Sundar
Date
2013-01-10
Parameters
arr: the array from which samples are to be selected
k: number of samples
isSorted: if false, arr is locally sorted first.
Returns
list of k keys.

◆ Sorted_Sample_Select()

template<typename T >
std::vector<T> par::Sorted_Sample_Select ( std::vector< T > &  arr,
unsigned int  kway,
std::vector< unsigned int > &  min_idx,
std::vector< unsigned int > &  max_idx,
std::vector< DendroIntL > &  splitter_ranks,
MPI_Comm  comm 
)

◆ splitComm2way() [1/2]

int par::splitComm2way ( bool  iAmEmpty,
MPI_Comm *  new_comm,
MPI_Comm  orig_comm 
)

Splits a communication group into two: one containing processors that passed 'false' for the parameter 'iAmEmpty' and the other containing processors that passed 'true'. Both groups are sorted in ascending order of their ranks in the old comm.

Author
Rahul Sampath
Parameters
iAmEmpty: flag to determine which group the calling processor will be combined into.
orig_comm: the comm group that needs to be split.
new_comm: the new comm group.

Definition at line 120 of file parUtils.cpp.
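
A usage sketch, assuming a caller that wants ranks without local data to drop out of subsequent collective work (the surrounding function and the follow-up sampleSort call are illustrative):

template <typename T>
void sortNonEmptyRanks(std::vector<T>& localVec, std::vector<T>& sortedVec, MPI_Comm orig_comm) {
  // Ranks with no local data form one group, all other ranks form the other.
  bool iAmEmpty = localVec.empty();
  MPI_Comm new_comm;
  par::splitComm2way(iAmEmpty, &new_comm, orig_comm);

  if (!iAmEmpty) {
    // Only the non-empty group takes part in the subsequent collective call.
    par::sampleSort(localVec, sortedVec, new_comm);
  }
}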

◆ splitComm2way() [2/2]

int par::splitComm2way ( const bool *  isEmptyList,
MPI_Comm *  new_comm,
MPI_Comm  orig_comm 
)

Splits a communication group into two depending on the values in isEmptyList. Both groups are sorted in ascending order of their ranks in the old comm. All processors must call this function with the same 'isEmptyList' array.

Author
Rahul Sampath
Parameters
isEmptyList: flags (of length equal to the number of processors) to determine whether each processor is active or not.
orig_comm: the comm group that needs to be split.
new_comm: the new comm group.

Definition at line 225 of file parUtils.cpp.

◆ splitCommBinary()

unsigned int par::splitCommBinary ( MPI_Comm  orig_comm,
MPI_Comm *  new_comm 
)

Splits a communication group into two: the first containing a power-of-two number of processors and the other containing the remainder. The first group is sorted in ascending order of their ranks in the old comm; the second group is sorted in descending order.

Author
Hari Sundar
Parameters
orig_comm: the comm group that needs to be split.
new_comm: the new comm group.

Definition at line 21 of file parUtils.cpp.

◆ splitCommBinaryNoFlip()

unsigned int par::splitCommBinaryNoFlip ( MPI_Comm  orig_comm,
MPI_Comm *  new_comm 
)

Splits a communication group into two: the first containing a power-of-two number of processors and the other containing the remainder. Both groups are sorted in ascending order of their ranks in the old comm.

Author
Hari Sundar
Parameters
orig_comm: the comm group that needs to be split.
new_comm: the new comm group.

Definition at line 70 of file parUtils.cpp.

◆ splitCommUsingSplittingRank()

int par::splitCommUsingSplittingRank ( int  splittingRank,
MPI_Comm *  new_comm,
MPI_Comm  orig_comm 
)

Definition at line 180 of file parUtils.cpp.